Dataset columns: idx (int64, 1–56k) | question (string, 15–155 chars) | answer (string, 2–29.2k chars, nullable) | question_cut (string, 15–100 chars) | answer_cut (string, 2–200 chars, nullable) | conversation (string, 47–29.3k chars) | conversation_cut (string, 47–301 chars)
31,701 | How to extract the function being approximated by a neural network?

Let's denote $f$ the true underlying function and $\hat f$ the function that your machine learning algorithm converges to ($\hat f$ belongs to a family of parametrized functions $F$).
For simplicity, let's also assume that $f$ can be expressed analytically and that $f$ is deterministic.
My question is, are there ways to actually extract this explicit function $f$? Both in practice and in theory?
I assume that by "practice", you mean with machine learning (using experimental data) and by "theory", you mean modelling mathematically without machine learning (without data).
In practice, if you have enough data and if $F$ contains $f$, then it should be possible to obtain $\hat f$ = $f$ with an appropriate machine learning methodology.
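A toy numeric check of this claim (a sketch, not from the original answer): take $F$ to be the linear functions $ax + b$ and a noise-free $f(x) = 3x + 2$; closed-form least squares then recovers $f$ exactly.

```python
# Sketch: if the family F = {a*x + b} contains the true f(x) = 3x + 2,
# clean data let least squares recover f-hat = f exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3.0 * x + 2.0 for x in xs]        # observations generated by the true f

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Closed-form simple linear regression: a = cov(x, y) / var(x)
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(a, b)   # recovers 3.0 and 2.0
```

With noisy data or an $F$ that does not contain $f$, you would only get an approximation, which is the usual situation.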
Theoretically, you may try to model $f$ with physical laws (or other modelling laws). For example, if $f(p,s)$ models the time it takes for an object of shape $s$ and weight $p$ to fall from the top of the Eiffel tower, you can use classical mechanics (assuming it holds at the scope/scale of $f$) to model $f$.
For apples and oranges, $f$ is subjective to a particular person (given an ambiguous picture, two people may disagree). So let's consider your $f$: it is then defined by your brain! So if we assume that there exists an analytical expression of $f$, here are the two ways to find it:
"in practice" (with machine learning): choose $F$ sufficiently large to model the brain (the brain has more than 80 billion neurons...), build a big enough dataset, and choose a good machine learning algorithm. Ideally the dataset should contain all the possible images of oranges and apples. Then train until you get a null training error AND a null generalisation error.
"in theory" (modelling): model the network of biological neurons of your brain. The problem is that we do not yet understand how the brain works.
To recap, you can usually find $f$ but it is really hard in both cases:
in practice: you need a good $F$, enough data and a good enough ML algorithm.
in theory: you need to know all the "physical laws" and be sure they are correct in the scope of the function $f$.
31,702 | How to extract the function being approximated by a neural network?

In theory: The structure of the network -- how many layers, how many nodes in each layer and what the activation function is -- gives you the general functional form of the network. Add the estimated weights to get the particular estimated function $\hat f$. All of this should in principle be known/available to the "user" of the network.
In applications: Software that implements the neural network will often take the structure (number of layers, number of nodes and activation function) as arguments and will output the estimated weights if requested.
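As a concrete sketch (the architecture and all weights below are invented for illustration), once the structure and estimated weights are in hand you can write the explicit $\hat f$ down and evaluate it directly:

```python
import math

# Sketch (weights are made up): a 2-2-1 network with tanh hidden units.
# Structure + weights fully determine the explicit function
#   f_hat(x1, x2) = v1*tanh(w11*x1 + w12*x2 + c1)
#                 + v2*tanh(w21*x1 + w22*x2 + c2) + c3
W = [[0.5, -1.0], [2.0, 0.3]]   # hidden-layer weights (one row per unit)
c = [0.1, -0.2]                  # hidden-layer biases
v = [1.5, -0.7]                  # output-layer weights
c3 = 0.05                        # output bias

def f_hat(x1, x2):
    """The network's explicit function, written out from its weights."""
    h = [math.tanh(W[i][0] * x1 + W[i][1] * x2 + c[i]) for i in range(2)]
    return v[0] * h[0] + v[1] * h[1] + c3

print(f_hat(1.0, 2.0))
```

For deep networks the resulting expression is enormous and not very interpretable, but it is an explicit closed form nonetheless.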
Thus the explicit function $\hat f$ (the network's approximation of the underlying true function $f$) can be extracted, both in theory and in practice. However, I do not see how the true function $f$ could be extracted, and I think in general this may not be possible.
31,703 | What is the benefit of using Manhattan distance for K-medoids rather than Euclidean distance?

The Manhattan distance is based on absolute-value distance, as opposed to squared-error (read: Euclidean) distance. In practice, you should get similar results most of the time. Absolute-value distance should give more robust results, whereas Euclidean distance would be influenced by unusual values.
This is a multivariate technique, and "distance" between two points involves aggregating the distances between each variable. So if two points are close on most variables, but more discrepant on one of them, Euclidean distance will exaggerate that discrepancy, whereas Manhattan distance will shrug it off, being more influenced by the closeness of the other variables.
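A small numeric sketch of that point (the points are made up): two points that nearly agree on four variables but differ by 10 on the fifth.

```python
import math

p = [1.0, 2.0, 3.0, 4.0, 0.0]
q = [1.1, 2.1, 3.1, 4.1, 10.0]   # close on four variables, discrepant on one

diffs = [abs(a - b) for a, b in zip(p, q)]
manhattan = sum(diffs)                            # 0.4 + 10.0 = 10.4
euclidean = math.sqrt(sum(d * d for d in diffs))

# Share of each distance attributable to the single discrepant variable:
share_manhattan = diffs[-1] / manhattan
share_euclidean = diffs[-1] ** 2 / sum(d * d for d in diffs)
print(manhattan, euclidean, share_manhattan, share_euclidean)
```

Under squared error the one discrepant variable accounts for essentially all of the distance (about 99.96% here, versus about 96% for Manhattan), which is the exaggeration described above.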
According to Wikipedia, the k-medoid algorithm is not defined for Euclidean distance, which could explain why you have seen no examples of it. Presumably the reason for this is to have a robust clustering method.
begin(RantMode)
Thoughtless analysts often throw a whole bag of variables into an analysis, not all of which have much to do with the problem at hand, nor do those analysts wish to take the necessary time to discern which variables matter -- possibly by talking to subject matter experts. Such analysts (who may possibly call themselves Big Data specialists) would naturally favour a technique that was robust with respect to choice of variable. Statisticians, traditionally, go for small amounts of quality data, and thus favour squared error methods with their greater efficiency.
end(RantMode)
31,704 | What is the benefit of using Manhattan distance for K-medoids rather than Euclidean distance?

I don't have enough reputation to comment and this isn't really meritorious of a full answer, but...
Also worth noting is that k-means clustering can be performed using any sort of distance metric (although in practice it is nearly always done with Euclidean distance). If the Manhattan distance metric is used in k-means clustering, the algorithm still yields a centroid with the median value for each dimension, rather than the mean value for each dimension as for Euclidean distance.
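A quick stdlib check of the median/mean claim (data made up): the mean minimizes total squared distance, while the median minimizes total absolute (Manhattan) distance, which is why an L1-based update step produces per-dimension medians.

```python
from statistics import mean, median

data = [1.0, 2.0, 3.0, 4.0, 100.0]   # one unusual value

def sse(c):  # total squared (Euclidean-style) distance to c
    return sum((x - c) ** 2 for x in data)

def sad(c):  # total absolute (Manhattan-style) distance to c
    return sum(abs(x - c) for x in data)

candidates = [i / 10 for i in range(0, 1101)]   # grid search over [0, 110]
best_l2 = min(candidates, key=sse)
best_l1 = min(candidates, key=sad)

print(mean(data), median(data), best_l2, best_l1)
```

The L2 minimizer lands on the mean (22.0, dragged far up by the unusual value), while the L1 minimizer lands on the median (3.0).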
These clusters will not necessarily be the same clusters as given by k-medoids; thus, the main takeaway is that the Manhattan distance metric is not inherently tied to k-medoids.
31,705 | Proof of Chapman-Kolmogorov equation

A simplified way to put it in words:
$P(X_{m+n} = j|X_{0} = i)$ is the probability I am at location $j$ after $m+n$ steps given I start at location $i$.
$P(X_{m+n} = j,X_{n}=k|X_{0} = i)$ is the probability that I am at location $j$ after $m+n$ steps and at location $k$ after $n$ steps, given I start at location $i$.
The summation is essentially saying: if I begin at $i$ at time $0$ and end at $j$ after time $m+n$, I can do this either by being at location $0$ after time $n$ ($P(X_{m+n} = j,X_{n}=0|X_{0} = i)$), or at location $1$ at time $n$ ($P(X_{m+n} = j,X_{n}=1|X_{0} = i)$), or at location $2$ at time $n$, and so on. Summing these mutually exclusive possibilities over all intermediate locations $k$ gives the total probability.
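Numerically, the Chapman-Kolmogorov equation is the statement that the $(m+n)$-step transition matrix equals the product of the $n$-step and $m$-step matrices. A sketch with a made-up 3-state chain:

```python
# Verify P(X_{m+n}=j | X_0=i) = sum_k P(X_n=k | X_0=i) * P(X_m=j | X_n=k)
# on a small 3-state chain: the (m+n)-step matrix is the product of the
# n-step and m-step matrices.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(A, p):
    out = A
    for _ in range(p - 1):
        out = matmul(out, A)
    return out

n, m = 2, 3
lhs = matpow(P, n + m)                     # 5-step transition probabilities
rhs = matmul(matpow(P, n), matpow(P, m))   # Chapman-Kolmogorov sum over k
ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
         for i in range(3) for j in range(3))
print(ok)   # -> True
```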
31,706 | Is it correct to model percent change as a continuous variable?

The best way to do this is to model the final score as a function of your listed independent variables PLUS the baseline score, like this:
final ~ baseline + treatment + time + treatment*time
There are a few important considerations:
You should not model % change ((final - baseline)/baseline) and include baseline as a predictor, because these variables are structurally correlated. Even worse might be to model raw change (final - baseline) with baseline as a predictor. As baseline changes, you expect the dependent variables to change in both cases, all else equal. Conclusion: If you use % or raw change, do not include baseline as a predictor.
Modelling percentages as continuous variables is a dying approach, because these variables are typically not normally distributed, by virtue of being bounded at 0 and 1, and also see last comment by TiffTiff. The arcsine square-root (aka angular) transformation has traditionally been used to combat this problem, but is also falling out of favor due to highly conditional efficacy (i.e., it often doesn't work well).
Modelling raw change, even when baseline is not included as a predictor, is not ideal because you in essence constrain the relationship between baseline and final to be 1:1. Basic algebra to see why:
Here is the raw change model:
final - baseline ~ treatment
which is more precisely,
1*final - 1*baseline ~ b0 + b1*treatment + error
where b0 and b1 are parameters to be estimated. If you rearrange, you get:
1*final ~ b0 + 1*baseline + b1*treatment + error
hence, the parameter describing the relationship between baseline and final is set to 1.0.
If instead you model like this:
final ~ baseline + treatment
that is more precisely
1*final ~ b0 + b1*baseline + b2*treatment + error
If indeed there is not a 1:1 relationship between baseline and final, the parameter estimate b1 will be greater than or less than 1. If it turns out that b1 ≠ 1, you will have more power to test the effect of treatment than if you used raw change, and also get information about the relationship between baseline and final. On the other hand, if b1 = 1, then you should have less power, since you're using up one more degree of freedom.
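To see the difference numerically, here is a sketch (simulated data; the true baseline coefficient is set to 0.6, not 1) fitting final ~ baseline + treatment by ordinary least squares with only the standard library:

```python
import random

# Simulate data where the true model is final = 2 + 0.6*baseline
# + 1.5*treatment + noise, then fit final ~ baseline + treatment by OLS.
random.seed(1)
n = 400
baseline = [random.gauss(10, 2) for _ in range(n)]
treat = [i % 2 for i in range(n)]
final = [2.0 + 0.6 * b + 1.5 * t + random.gauss(0, 0.5)
         for b, t in zip(baseline, treat)]

X = [[1.0, b, t] for b, t in zip(baseline, treat)]
# Normal equations (X'X) beta = X'y, solved by Gauss-Jordan elimination.
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * y for r, y in zip(X, final)) for i in range(3)]

def solve(A, b):
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
    return [A[i][3] / A[i][i] for i in range(3)]

b0, b1, b2 = solve(XtX, Xty)
print(b0, b1, b2)   # b1 is estimated (near 0.6), not fixed at 1
```

Because b1 is free, the fit recovers a coefficient near the true 0.6; the raw-change model would have silently forced it to 1.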
As far as reporting these stats, I understand why you want to say "The difference between scores increased when the treatment was applied": You want to control for the baseline. But that's exactly what the final ~ baseline + treatment model does, and it does it better. You can say "Scores were higher with the treatment, controlling for the baseline score".
31,707 | Is it correct to model percent change as a continuous variable?

Whether this is the best approach depends on what your data looks like, and why the percent change was used. For example, % change is fairly easy to understand by laypeople, and so it's sometimes preferred when that's the target audience. Including the baseline as a covariate somewhat complicates how that percent would be stated, though, since the expected change would depend on the initial value.
Certainly, to use percent change, the data must be on a scale where 0 means 0. That is, a 0 must mean an absence of disease or similar.
I think this article does a pretty good job of summarizing some problems associated with modeling percent change: http://allenfleishmanbiostatistics.com/Articles/2012/06/18-percentage-change-from-baseline-great-or-poor/.
In all cases, Analysis of Covariance (ANCOVA) with baseline as the covariate was the most efficient statistical methodology. Analyzing the change from baseline “has acceptable power when correlations between baseline and post-treatment scores are high; when correlations are low, POST [i.e., analyzing only the post-score and ignoring baseline – AIF] has reasonable power. FRACTION [i.e., percentage change from baseline – AIF] has the poorest statistical efficiency at all correlations.”

[Note: In ANCOVA, one can analyze either the change from baseline or the post-treatment scores as the d.v. ‘Change’ or ‘Post’ will give IDENTICAL p-values when baseline is a covariate in ANCOVA.]

As an example of his results, when the correlation between baseline and post was low (i.e., 0.20) the percentage change was able to be statistically significant only 45% of the time. Next worst was change from baseline, with 51% significant results. Near the top was analyzing only the post score, at 70% significant results. The best was ANCOVA, with 72% significant results.

Furthermore, percentage change from baseline “is sensitive in the characteristics of the baseline distribution.” When the baseline has relatively large variability, he observed that “power falls.”
Also a potential concern: the ratio will blow up as the denominator approaches 0, which can create obvious problems when modeling.
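A tiny illustration of the blow-up (values made up): the same raw change of 1 unit yields wildly different percent changes as the baseline shrinks toward 0.

```python
raw_change = 1.0
pct_changes = []
for baseline in [10.0, 1.0, 0.1, 0.01]:
    # percent change = 100 * (final - baseline) / baseline
    pct_changes.append(100.0 * raw_change / baseline)
print(pct_changes)   # grows without bound as baseline -> 0
```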
31,708 | Multiple comparisons with many groups

Nice question! Let's clear up some potential confusion, first. Dunn's test (Dunn, 1964) is precisely that: a test statistic which is a nonparametric analog to the pairwise t test one would conduct post hoc to an ANOVA. It is similar to the Mann-Whitney-Wilcoxon rank sum test, except that (1) it employs a measure of the pooled variance that is implied by the null hypothesis of the Kruskal-Wallis test, and (2) it uses the same rankings of one's original data as are used by the Kruskal-Wallis test.
Dunn also developed what is commonly referred to as the Bonferroni adjustment for multiple comparisons (Dunn, 1961), which is one of many methods to control the family-wise error rate (FWER) that have since been developed, and simply entails dividing $\alpha$ (one-tailed tests) or $\frac{\alpha}{2}$ (two-tailed tests) by the number of pairwise comparisons one is making. The maximum number of pairwise comparisons one may make with $k$ groups is $\frac{k(k-1)}{2}$, so with $k=17$ that's $\frac{17\times 16}{2}=136$ possible pairwise comparisons, implying that you might be able to reject a null hypothesis for any single test only if $p \le \frac{\alpha/2}{136}$. Your concern about power is therefore warranted for this method.
Other methods to control the FWER exist with more statistical power, however. For example, the Holm and Holm-Sidak stepwise methods (Holm, 1979) do not hemorrhage power the way the Bonferroni method does. Alternatively, you could aim to control the false discovery rate (FDR) instead; these methods (Benjamini-Hochberg, 1995, and Benjamini-Yekutieli, 2001) generally give more statistical power by assuming that some null hypotheses are false (i.e. by building the idea that not all rejections are false rejections into sequentially modified rejection criteria). These and other multiple comparisons adjustments are implemented specifically for Dunn's test in Stata in the dunntest package (within Stata type net describe dunntest, from(https://alexisdinno.com/stata)), and in R in the dunn.test package.
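For intuition, here is a sketch of the three adjustments named above, applied to some made-up p-values from $m = 6$ comparisons (for real analyses, use the dunn.test/dunntest implementations cited here):

```python
# Made-up p-values from m = 6 pairwise comparisons.
pvals = [0.001, 0.008, 0.020, 0.035, 0.040, 0.300]
m = len(pvals)

# Bonferroni: multiply every p by m (capped at 1).
bonferroni = [min(1.0, p * m) for p in pvals]

# Holm: step-down; multiply the i-th smallest p (i = 0, 1, ...) by
# (m - i), then enforce monotonicity going up.
order = sorted(range(m), key=lambda i: pvals[i])
holm = [0.0] * m
running = 0.0
for rank, i in enumerate(order):
    running = max(running, min(1.0, pvals[i] * (m - rank)))
    holm[i] = running

# Benjamini-Hochberg: step-up; multiply the i-th smallest p by
# m / (i + 1), then enforce monotonicity from the largest down.
bh = [0.0] * m
running = 1.0
for rank in range(m - 1, -1, -1):
    i = order[rank]
    running = min(running, min(1.0, pvals[i] * m / (rank + 1)))
    bh[i] = running

print(bonferroni)
print(holm)
print(bh)
```

Comparing the adjusted p-values shows why Bonferroni is the most conservative, Holm uniformly no worse, and BH (controlling the FDR rather than the FWER) the most powerful of the three.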
In addition, there is an alternative to Dunn's test (which is based on an approximate z test statistic): the Conover-Iman test, used (exclusively) post hoc to a rejected Kruskal-Wallis test (it is based on a t distribution, and is more powerful than Dunn's test; Conover & Iman, 1979; Conover, 1999). One can also use the methods to control the FWER or the FDR with the Conover-Iman tests, which are implemented for Stata in the conovertest package (within Stata type net describe conovertest, from(https://alexisdinno.com/stata)), and for R in the conover.test package.
References
Benjamini, Y. and Hochberg, Y. (1995). Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society. Series B (Methodological), 57(1):289–300.
Benjamini, Y. and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 29(4):1165–1188.
Conover, W. J. (1999). Practical Nonparametric Statistics. Wiley, Hoboken, NJ, 3rd edition.
Conover, W. J. and Iman, R. L. (1979). On multiple-comparisons procedures. Technical Report LA-7677-MS, Los Alamos Scientific Laboratory.
Dunn, O. J. (1961). Multiple comparisons among means. Journal of the American Statistical Association, 56(293):52–64.
Dunn, O. J. (1964). Multiple comparisons using rank sums. Technometrics, 6(3):241–252.
Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6(2):65–70.
Nice question! Let's clear up some potential confusion, first. Dunn's test (Dunn, 1964) is precisely that: a test statistic which is a nonparametric analog to the pairwise t test one would conduct post hoc to an ANOVA. It is similar to the Mann-Whitney-Wilcoxon rank sum test, except that (1) it employs a measure of the pooled variance that is implied by the null hypothesis of the Kruskal-Wallis test, and (2) it uses the same rankings of one's original data as are used by the Kruskal-Wallis test.
Dunn also developed what is commonly referred to as the Bonferroni adjustment for multiple comparisons (Dunn, 1961), which is one of many methods to control the family-wise error rate (FWER) that have since been developed, and simply entails dividing $\alpha$ (one-tailed tests) or $\frac{\alpha}{2}$ (two-tailed tests) by the number of pairwise comparisons one is making. The maximum number of pairwise comparisons one may make with $k$ variables is $\frac{k(k-1)}{2}$, so that's $17\times\frac{16}{2}=136$ possible pairwise comparisons, implying that you might be able to reject a null hypothesis for any single test if $p \le \frac{\frac{\alpha}{2}}{136}$. Your concern about power is therefore warranted for this method.
Other methods to control the FWER exist with more statistical power however. For example, the Holm and Holm-Sidak stepwise methods (Holm, 1979) do not hemorrhage power the way the Bonferroni method does. There too, you could aim to control the false discovery rate (FDR) instead, and these methods—the Benjamini-Hochberg (1995), and Benjamini-Yekutieli (2001)—generally give more statistical power by assuming that some null hypotheses are false (i.e. by building the idea that that not all rejections are false rejections into sequentially modified rejection criteria). These and other multiple comparisons adjustments are implemented specifically for Dunn's test in Stata in the dunntest package (within Stata type net describe dunntest, from(https://alexisdinno.com/stata)), and in R in the dunn.test package.
In addition, there is an alternative to Dunn's test (which is based on an approximate z test statistic): the Conover-Iman (exclusively) post hoc to a rejected Kruskal-Wallis test (which is based on a t distribution, and which is more powerful than Dunn's test; Conover & Iman, 1979; Convover, 1999). One can also use the methods to control the FWER or the FDR with the Conover-Iman tests, which is implemented for Stata in the conovertest package (within Stata type net describe conovertest, from(https://alexisdinno.com/stata)), and for R in the conover.test package.
References
Benjamini, Y. and Hochberg, Y. (1995). Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society. Series B (Methodological), 57(1):289–300.
Benjamini, Y. and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 29(4):1165–1188.
Conover, W. J. (1999). Practical Nonparametric Statistics. Wiley, Hoboken, NJ, 3rd edition.
Conover, W. J. and Iman, R. L. (1979). On multiple-comparisons procedures. Technical Report LA-7677-MS, Los Alamos Scientific Laboratory.
Dunn, O. J. (1961). Multiple comparisons among means. Journal of the American Statistical Association, 56(293):52–64.
Dunn, O. J. (1964). Multiple comparisons using rank sums. Technometrics, 6(3):241–252.
Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6(2):65–70.
Removing outliers from data - maximum number of outliers that you can remove?
There is no maximum or minimum. Outliers should be removed if they are bad data or if there are other substantive reasons for removing them. If there are no substantive reasons, then I suggest using methods that are robust to outliers. I would not remove outliers just because they are a bit far from other points.
Removing outliers from data - maximum number of outliers that you can remove?
I would emphasize something that was said in another answer and in the comments (I think that @Peter Flom's answer is accurate, and that EdM is right about measurements).
Analyzing data is something that must be done carefully. You must be well aware of the meaning of outliers in your context. For example, assuming that your measurement procedure was done correctly (i.e., you haven't introduced biases, your equipment was calibrated, the person reading the instrument did it correctly, etc.), some outliers may tell you something interesting and sometimes very important.
Here is a made-up example; please be indulgent (point out any problems in the comments) if it is not 100% right in all aspects. ;)
Say that someone is testing the effect of applying a certain amount of a substance to some cultures (populations) of bacteria. Now, in general, the effect is to stabilize the number of bacteria in the population, but there are some outliers among the different cultures.
Imagine all your outliers indicate situations where all the bacteria are dead. Or that all outliers represent cultures where the bacteria populations have grown out of control.
What I want to point out is that the nature of your perceived outliers might be meaningful, and the consequences of each are different. You might be in a situation where it is intolerable that the number of bacteria increase, or decrease.
Of course, if you noticed that some populations were wiped out by the substance, you would probably investigate the matter, since it is an easily recognizable situation. But not all phenomena are easily detectable.
To wrap up, the notion of outliers is somewhat arbitrary, but their meanings are multiple and of differing importance. Hope it will make you think on the matter... :)
What exactly is a hyperparameter?
A hyperparameter is a parameter for the (prior) distribution of some parameter.
So for a simple example, let's say we state that the variance parameter $\tau^2$ in some problem has a uniform prior on $(0,\theta)$.
(I personally would be unlikely to do such a thing, but it happens; I might in some very particular circumstance)
Then $\tau^2$ is a parameter (in the distribution of the data) and $\theta$ is a hyperparameter.
If we then in turn specify a (prior) distribution for $\theta$ (e.g. that it's Gamma with mean 100 and shape parameter 2), that's a hyperprior - a prior distribution on a parameter of a prior distribution.
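The parameter / hyperparameter / hyperprior hierarchy above can be written as a small generative simulation (a sketch using the same illustrative numbers: a Gamma hyperprior with shape 2 and mean 100, hence scale 50; the normal data model is my own addition purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperprior: theta ~ Gamma(shape=2, scale=50), so E[theta] = 2 * 50 = 100.
theta = rng.gamma(shape=2.0, scale=50.0)

# Prior on the variance parameter, with theta as its hyperparameter:
# tau^2 ~ Uniform(0, theta).
tau2 = rng.uniform(0.0, theta)

# Data model, with tau^2 as a parameter of the data distribution
# (a normal with standard deviation tau, chosen just for this sketch).
data = rng.normal(loc=0.0, scale=np.sqrt(tau2), size=5)

assert 0.0 < theta
assert 0.0 <= tau2 <= theta
assert data.shape == (5,)
```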
Interesting derivation of R squared
The derivation is a not particularly interesting exercise in symbolic manipulation.
Since,
\begin{align}
\left.\frac{dx'}{d\theta}\right|_{\theta=0}&=-y,\\
\left.\frac{dy'}{d\theta}\right|_{\theta=0}&=x,
\end{align}
and $s_x^2=\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2$
$$ \left.\frac{ds_{x'}^2}{d\theta}\right|_{\theta=0}=-2s_{xy}$$
$$ \left.\frac{ds_{y'}^2}{d\theta}\right|_{\theta=0}=2s_{xy}$$
$$\left.\frac{d}{d\theta}\ln(s_{x'})\right|_{\theta=0} = -\frac{s_{xy}}{s_x^2},\quad \left.\frac{d}{d\theta}\ln(s_{y'})\right|_{\theta=0} = \frac{s_{xy}}{s_y^2}$$ and the result follows.
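These identities are easy to check numerically. The sketch below assumes the primed variables come from a rotation of the data, $x' = x\cos\theta - y\sin\theta$, $y' = x\sin\theta + y\cos\theta$, which is consistent with the derivatives stated above:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.7 * x + rng.normal(size=200)

# Sample covariance with the 1/n convention used above.
sxy = np.mean((x - x.mean()) * (y - y.mean()))

def var_xprime(theta):
    xp = np.cos(theta) * x - np.sin(theta) * y
    return xp.var()            # numpy's default ddof=0 matches the 1/n definition

def var_yprime(theta):
    yp = np.sin(theta) * x + np.cos(theta) * y
    return yp.var()

h = 1e-6  # central finite-difference step
d_sx2 = (var_xprime(h) - var_xprime(-h)) / (2 * h)
d_sy2 = (var_yprime(h) - var_yprime(-h)) / (2 * h)

# ds_{x'}^2/dtheta |_0 = -2 s_xy   and   ds_{y'}^2/dtheta |_0 = +2 s_xy
assert abs(d_sx2 - (-2 * sxy)) < 1e-6
assert abs(d_sy2 - (+2 * sxy)) < 1e-6
```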
I am curious to know how you came up with such an equation, especially what particular experiment revealed such an identity.
Since,
\begin{align}
\left.\frac{dx'}{d\theta}\right|_{\theta=0}&=-y,\\
\left.\frac{dy'}{d\theta}\right|_{ | Interesting derivation of R squared
Derivation is not particularly interesting exercise of symbolic manipulation.
Since,
\begin{align}
\left.\frac{dx'}{d\theta}\right|_{\theta=0}&=-y,\\
\left.\frac{dy'}{d\theta}\right|_{\theta=0}&=x,
\end{align}
and $s_x^2=\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2$
$$ \left.\frac{ds_{x'}^2}{d\theta}\right|_{\theta=0}=-2s_{xy}$$
$$ \left.\frac{ds_{y'}^2}{d\theta}\right|_{\theta=0}=2s_{xy}$$
$$\left.\frac{d}{d\theta}\ln(s_{x'})\right|_{\theta=0} = -\frac{s_{xy}}{s_x^2},\quad \left.\frac{d}{d\theta}\ln(s_{y'})\right|_{\theta=0} = \frac{s_{xy}}{s_y^2}$$ and the result follows.
I am curious to know how you came up with such equation, especially what particular experiment revealed such identity. | Interesting derivation of R squared
Derivation is not particularly interesting exercise of symbolic manipulation.
Since,
\begin{align}
\left.\frac{dx'}{d\theta}\right|_{\theta=0}&=-y,\\
\left.\frac{dy'}{d\theta}\right|_{ |
"weight" input in glm and lm functions in R
I found a reference supporting my understanding of the weight in glm.
The book "Modern Applied Statistics with S" by W. N. Venables and B. D. Ripley (fourth edition) defines the GLM model for $y_i$ as:
$$
f(y_i;\theta_i, \phi)=\exp \Big( \frac{A_i (y_i\theta_i-b(\theta_i))}{\phi}+c(y_i,\phi/A_i)\Big)
$$
(page 183, equation 7.1). Then page 188 says:
"Prior weights $A_i$ may be specified using weight argument."
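One way to see what a prior weight $A_i$ does in practice: for point estimation, a weighted least-squares fit with weight 2 on an observation is equivalent to duplicating that observation. This is an illustrative numpy sketch, not R's actual implementation (and the equivalence holds for the coefficients, not the standard errors, which is exactly where prior weights and replicated data differ):

```python
import numpy as np

# Weighted least squares: minimize sum_i w_i * (y_i - X_i beta)^2,
# solved via the weighted normal equations (X'WX) beta = X'Wy.
def wls(X, y, w):
    Xw = X * w[:, None]                      # W X
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.1, 1.2, 1.9, 3.2])

# Give the last observation prior weight 2 ...
w = np.array([1.0, 1.0, 1.0, 2.0])
beta_weighted = wls(X, y, w)

# ... which matches simply duplicating that observation with weight 1.
X2 = np.vstack([X, X[-1]])
y2 = np.append(y, y[-1])
beta_duplicated = wls(X2, y2, np.ones(5))

assert np.allclose(beta_weighted, beta_duplicated)
```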
Gaussian mixture vs. Gaussian process
To answer your last question, a Gaussian process is a discriminative model as opposed to a generative one. Therefore, you will not be able to model $p(x, y)$ using a Gaussian process; a Gaussian process models $p(y | x)$ instead. To generate samples $(x_i, y_i)$ you need to work with a generative model such as a Gaussian mixture model.
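A minimal sketch of what "generative" buys you, with a hypothetical two-component mixture: first sample a component, then sample $(x_i, y_i)$ jointly from that component's Gaussian.

```python
import numpy as np

rng = np.random.default_rng(42)

# A hypothetical two-component mixture over (x, y); all numbers are made up.
weights = np.array([0.3, 0.7])
means = np.array([[0.0, 0.0], [4.0, 4.0]])
covs = np.array([[[1.0, 0.5], [0.5, 1.0]],
                 [[1.0, -0.3], [-0.3, 2.0]]])

def sample(n):
    z = rng.choice(2, size=n, p=weights)  # pick a mixture component per draw
    return np.array([rng.multivariate_normal(means[k], covs[k]) for k in z])

xy = sample(500)        # joint samples (x_i, y_i) from p(x, y)
assert xy.shape == (500, 2)
```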
Cantelli's inequality proof
Defining $Y=X-\mathbb{E}[X]$, it follows that $\mathbb{E}[Y]=0$ and $\operatorname{Var}[Y]=\operatorname{Var}[X]=:\sigma^2=\mathbb{E}[Y^2]$.
For $t,u>0$, using Markov's inequality, we have
$$
\Pr(Y\geq t) = \Pr(Y+u\geq t+u) \leq \Pr((Y+u)^2\geq (t+u)^2)
$$
$$\leq \frac{\mathbb{E}[(Y+u)^2]}{(t+u)^2} = \frac{\sigma^2+u^2}{(t+u)^2}=:\varphi(u).
$$
Minimize: $\varphi'(u)=0$ gives $u=\sigma^2/t$, and the result follows:
$$
\Pr(X-\mathbb{E}[X]\geq t) \leq \frac{\sigma^2}{\sigma^2+t^2}.
$$
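As a quick sanity check of the bound, take $X \sim \text{Exponential}(1)$, where $\mathbb{E}[X] = \sigma^2 = 1$ and the upper tail is known in closed form:

```python
import math

# X ~ Exponential(1): E[X] = 1, Var[X] = sigma^2 = 1,
# and P(X - E[X] >= t) = exp(-(1 + t)) for t > 0.
sigma2 = 1.0
for t in [0.1, 0.5, 1.0, 2.0, 5.0]:
    tail = math.exp(-(1.0 + t))
    bound = sigma2 / (sigma2 + t * t)   # Cantelli's bound
    assert tail <= bound
```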
Standard Deviation After Subtracting One Mean From Another
Basic properties of expectation and variance give us:
$$E[aX+bY] = aE[X]+bE[Y]$$
$$\text{Var}[aX+bY] = a^2\text{Var}[X]+b^2\text{Var}[Y]+2ab\text{Cov}[X,Y]$$
a) With $a=1,\,b=-1$ and assuming independence, we have
$$E[X-Y] = E[X]-E[Y]$$
$$\text{Var}[X-Y] = \text{Var}[X]+\text{Var}[Y]$$
Taking square roots yields the result for the standard deviation.
b) With $a=1,\,b=-1$ in the presence of dependence, we have
$$E[X-Y] = E[X]-E[Y]$$
$$\text{Var}[X-Y] = \text{Var}[X]+\text{Var}[Y]-2 \text{Cov}[X,Y]$$
It's not clear to me how the dependence is operating (your description doesn't make it clear which observations are correlated).
If two sets of means are dependent (as "pairs of means"), you could treat the means as paired data.
(Outside of that you might need to look at random effects/mixed effects models.)
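Case (b) is easy to check by simulation with a known covariance (illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate dependent (X, Y) with a known covariance matrix.
cov = np.array([[2.0, 0.8],
                [0.8, 1.5]])
xy = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
x, y = xy[:, 0], xy[:, 1]

empirical = np.var(x - y)
theoretical = cov[0, 0] + cov[1, 1] - 2 * cov[0, 1]   # Var[X] + Var[Y] - 2 Cov[X, Y]

assert abs(empirical - theoretical) < 0.05
```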
$$E[aX+bY] = aE[X]+bE[Y]$$
$$\text{Var}[aX+bY] = a^2\text{Var}[X]+b^2\text{Var}[Y]+2ab\text{Cov}[X,Y]$$
a) With $a=1,\,b=-1$ and assuming indepe | Standard Deviation After Subtracting One Mean From Another
Basic properties of expectation and variance give us:
$$E[aX+bY] = aE[X]+bE[Y]$$
$$\text{Var}[aX+bY] = a^2\text{Var}[X]+b^2\text{Var}[Y]+2ab\text{Cov}[X,Y]$$
a) With $a=1,\,b=-1$ and assuming independence, we have
$$E[X-Y] = E[X]-E[Y]$$
$$\text{Var}[X-Y] = \text{Var}[X]+\text{Var}[Y]$$
Taking square roots yields the result for the standard deviation.
b) With $a=1,\,b=-1$ in the presence of dependence, we have
$$E[X-Y] = E[X]-E[Y]$$
$$\text{Var}[X-Y] = \text{Var}[X]+\text{Var}[Y]-2 \text{Cov}[X,Y]$$
It's not clear to me how the dependence is operating (your description doesn't make it clear which observations are correlated).
If two sets of means are dependent (as "pairs of means"), you could treat the means as paired data.
(Outside of that you might need to look at random effects/mixed effects models.) | Standard Deviation After Subtracting One Mean From Another
Basic properties of expectation and variance give us:
$$E[aX+bY] = aE[X]+bE[Y]$$
$$\text{Var}[aX+bY] = a^2\text{Var}[X]+b^2\text{Var}[Y]+2ab\text{Cov}[X,Y]$$
a) With $a=1,\,b=-1$ and assuming indepe |
Why is a Pearson correlation of ranks valid despite normality assumption?
Normality is not required to calculate a Pearson correlation; it's just that some forms of inference about the corresponding population quantity are based on the normal assumptions (CIs and hypothesis tests).
If you don't have normality, the implied properties of that particular form of inference won't hold.
In the case of the Spearman correlation, you don't have normality, but that's fine because the inference calculations for the Spearman correlation (such as the hypothesis test) are not based on a normality assumption.
They're derived based on being a set of paired ranks from a continuous bivariate distribution; in this case the hypothesis test uses the permutation distribution of the test statistic based on the ranks.
When the usual assumptions for inference with the Pearson correlation hold (bivariate normality) the Spearman correlation is usually very close (though on average a little closer to 0).
(So when you could use the Pearson, the Spearman often does quite well. If you had nearly bivariate normal data apart from some contamination with some other process (that caused outliers), the Spearman would be a more robust way to estimate the correlation in the uncontaminated distribution.)
Why is a Pearson correlation of ranks valid despite normality assumption?
"when I ran a few examples, the p-values for rho and for the t-test of the Pearson correlation of ranks always matched, save for the last few digits"
Well you've been running the wrong examples then!
a = c(1,2,3,4,5,6,7,8,9)
b = c(1,2,3,4,5,6,7,8,90)
cor.test(a,b,method='pearson')
Pearson's product-moment correlation
data: a and b
t = 2.0528, df = 7, p-value = 0.0792
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.08621009 0.90762506
sample estimates:
cor
0.6130088
cor.test(a,b,method='spearman')
Spearman's rank correlation rho
data: a and b
S = 0, p-value = 5.511e-06
alternative hypothesis: true rho is not equal to 0
sample estimates:
rho
1
Vectors a and b have a good, but far from perfect, linear (Pearson) correlation. However, they have perfect rank correlation. See - to Spearman's $\rho$, in this case, it matters not whether the last element of b is 8.1, 9, 90 or 9000 (try it!), it matters only that it's larger than 8. That's the difference correlating ranks makes.
Conversely, while a and b have perfect rank correlation, their Pearson correlation coefficient is smaller than 1. This shows that the Pearson correlation is not reflecting ranks.
A Pearson correlation reflects a linear function, a rank correlation simply a monotonic function. In the case of normal data, the two will strongly resemble each other, and I suspect this is why your data does not show big differences between Spearman and Pearson.
For a practical example, consider the following: you want to see if taller people weigh more. Yes, it's a silly question ... but just assume this is what you care about. Now, weight does not scale linearly with height, as tall people are also wider than short people; so weight is not a linear function of height. Somebody who is 10% taller than you is (on average) more than 10% heavier. This is why the body mass index uses the square of height, rather than height itself, in the denominator.
Consequently, you would assume a linear correlation to inaccurately reflect the height/weight relationship. In contrast, rank correlation is insensitive to the annoying laws of physics and biology in this case; it doesn't reflect if people grow heavier linearly as they gain in height, it simply reflects if taller people (higher in rank on one scale) are heavier (higher in rank on the other scale).
A more typical example might be that of Likert-like questionnaire rankings, such as people rating something as "perfect/good/decent/mediocre/bad/awful". "perfect" is as far from "decent" as "decent" is from "bad" on the scale, but can we really say that the distance between the two is the same? A linear correlation is not necessarily appropriate. Rank correlation is more natural.
To more directly address your question: no, p values for Pearson and Spearman correlations need not be calculated differently. Much is different about the two, conceptually as well as numerically, but if the test statistic is equivalent, the p value will be equivalent.
On the question of an assumption of normality in Pearson correlation, see this.
More generally, other people have elaborated much better than I could regarding the topic of parametric vs. non-parametric correlations (also see here), and what this means regarding distributional assumptions.
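For readers without R, the same demonstration can be done with plain numpy, computing Spearman's $\rho$ as the Pearson correlation of the ranks (there are no ties here, so a double argsort gives the ranks); the values match the R output above:

```python
import numpy as np

a = np.arange(1, 10, dtype=float)                    # 1, 2, ..., 9
b = np.array([1, 2, 3, 4, 5, 6, 7, 8, 90], dtype=float)

def ranks(v):
    # Rank of each value (valid because there are no ties in this example).
    return np.argsort(np.argsort(v)).astype(float)

pearson = np.corrcoef(a, b)[0, 1]                    # raw linear correlation
spearman = np.corrcoef(ranks(a), ranks(b))[0, 1]     # Pearson on the ranks

assert abs(pearson - 0.6130088) < 1e-6               # same as R's cor of 0.6130088
assert abs(spearman - 1.0) < 1e-12                   # same as R's rho of 1
```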
Well you've been running the wrong examples then!
a | Why is a Pearson correlation of ranks valid despite normality assumption?
when I ran a few examples, the p-values for rho and for the t-test of the Pearson correlation of ranks always matched, save for the last few digits
Well you've been running the wrong examples then!
a = c(1,2,3,4,5,6,7,8,9)
b = c(1,2,3,4,5,6,7,8,90)
cor.test(a,b,method='pearson')
Pearson's product-moment correlation
data: a and b
t = 2.0528, df = 7, p-value = 0.0792
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.08621009 0.90762506
sample estimates:
cor
0.6130088
cor.test(a,b,method='spearman')
Spearman's rank correlation rho
data: a and b
S = 0, p-value = 5.511e-06
alternative hypothesis: true rho is not equal to 0
sample estimates:
rho
1
Vectors a and b have a good, but far from perfect linear (Pearson) correlation. However, they have perfect rank correlation. See - to Spearman's $\rho$, in this case, it matters not if the last digit of b is 8.1, 9, 90 or 9000 (try it!), it matters only if it's larger than 8. That's what a difference correlating ranks makes.
Conversely, while a and b have perfect rank correlation, their Pearson correlation coefficient is smaller than 1. This shows that the Pearson correlation is not reflecting ranks.
A Pearson correlation reflects a linear function, a rank correlation simply a monotonic function. In the case of normal data, the two will strongly resemble each other, and I suspect this is why your data does not show big differences between Spearman and Pearson.
For a practical example, consider the following; you want to see if taller people weigh more. Yes, it's a silly question ... but just assume this is what you care about. Now, mass does not scale linearly with weight, as tall people are also wider than small people; so weight is not a linear function of height. Somebody who is 10% taller than you is (on average) more than 10% heavier. This is why the body/mass index uses the cube in the denominator.
Consequently, you would assume a linear correlation to inaccurately reflect the height/weight relationship. In contrast, rank correlation is insensitive to the annoying laws of physics and biology in this case; it doesn't reflect if people grow heavier linearly as they gain in height, it simply reflects if taller people (higher in rank on one scale) are heavier (higher in rank on the other scale).
A more typical example might be that of Likert-like questionnaire rankings, such as people rating something as "perfect/good/decent/mediocre/bad/awful". "perfect" is as far from "decent" as "decent" is from "bad" on the scale, but can we really say that the distance between the two is the same? A linear correlation is not necessarily appropriate. Rank correlation is more natural.
To more directly address your question: no, p values for Pearson and Spearman correlations mustn't be calculated differently. Much is different about the two, conceptually as well as numerically, but if the test statistic is equivalent, the p value will be equivalent.
On the question of an assumption of normality in Pearson correlation, see this.
More generally, other people have elaborated much better than I could regarding the topic of parametric vs. non-parametric correlations (also see here), and what this means regarding distributional assumptions. | Why is a Pearson correlation of ranks valid despite normality assumption?
when I ran a few examples, the p-values for rho and for the t-test of the Pearson correlation of ranks always matched, save for the last few digits
Well you've been running the wrong examples then!
a |
When should I use each of these methods to calculate correlation?
Pearson's product-moment coefficient (pearson parameter) measures linear correlation between variables. Therefore it is appropriate when your suspected correlation is linear, which can be visually inspected with a plot.
The Kendall tau coefficient (kendall parameter) and Spearman's correlation coefficient (spearman parameter) measure rank correlations, so the correlation between the two variables does not need to be linear. The spearman method is basically the pearson method applied to the ranks of the values (the rank of a value is given by its position after sorting the values). The kendall method is built as a statistic, in the form of a ratio between the excess of concordantly ordered pairs over discordantly ordered pairs and the total number of pairs. Because the kendall method is built as a statistic, one can also use it in the framework of hypothesis testing, with all the benefits (it is called the tau test).
All these methods are instruments used to infer something about the dependencies between random variables. See more on the dedicated Wikipedia page on Correlation and Dependence.
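The "ratio" construction of Kendall's tau mentioned above can be written out directly: count concordant and discordant pairs and divide their difference by the total number of pairs (a brute-force sketch that ignores ties):

```python
from itertools import combinations

def kendall_tau(x, y):
    # tau = (concordant pairs - discordant pairs) / (total number of pairs)
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

# One swapped pair out of three: tau = (2 - 1) / 3
assert abs(kendall_tau([1, 2, 3], [1, 3, 2]) - 1 / 3) < 1e-12

# A perfect monotonic (even nonlinear) association gives tau = 1.
assert kendall_tau([1, 2, 3, 4], [1, 4, 9, 16]) == 1.0
```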
Is there a name for bar charts that replace the bars with color coded objects?
That type of infographic stems from the work of Otto Neurath on the Vienna method of pictorial statistics (a picture language later called Isotype) between the mid 1920s and the mid 1930s.
As a result, these might be called numerous things - "Isotype charts", "Vienna method charts" or "Vienna pictorial charts". You might even call it a "Neurath chart". You do sometimes see "pictogram chart" as perhaps a slightly more generic term. If you search on all those terms you turn up plenty of examples (plus imposters that Neurath would disown, such as versions using image size instead of repetition to indicate quantity).
Simple ones resemble bar charts or are reminiscent of dot charts, but more commonly they contain icons or symbols meant to suggest the subject of whatever information is being displayed - often people, but they could be almost anything; for Isotype, the particular visual style of the images is due to the artist Gerd Arntz. The graphics were often black and white, but in other cases used colors, symbols inside the icons - or both - to distinguish subgroups.
Here are a few examples of this style of graphic:
http://isotyperevisited.org/Section%201%20Introduction%20Good%20Marriage.jpg
http://isotyperevisited.org/Second-5-year-creches-main.jpg
http://raumgegenzement.blogsport.de/images/GMDH02_50003.jpg
http://www.brainpickings.org/wp-content/uploads/2011/03/thetransformer2.jpg
http://plato.stanford.edu/entries/neurath/figure2.jpg
A few additional links:
http://www.datascope.be/sog/SOG-Chapter6.pdf
http://plato.stanford.edu/entries/neurath/
http://www.viennareview.net/on-the-town/on-display/how-isotype-conquered-the-world
http://isotyperevisited.org/isotype-revisited/
With a suitable typeface ("font"), they're relatively simple to do - for example, using a character from the Windows font Webdings (which has a number of somewhat-Isotype-style symbols in it).
31,721 | If $Z_i =\min \{k_i, X_i\}$, $X_i \sim U[a_i, b_i]$, what is the distribution of $\sum_iZ_i$? | I would follow Henry's tip and check Lyapunov with $\delta=1$. The fact that the distributions are mixed should not be a problem, as long as the $a_i$'s and $b_i$'s behave properly. Simulation of the particular case in which $a_i=0$, $b_i=1$, $k_i=2/3$ for each $i\geq 1$ shows that normality is ok.
xbar <- replicate(10^4, mean(pmin(runif(10^4), 2/3)))   # 10^4 sample means of Z = min(U(0,1), 2/3)
hist((xbar - mean(xbar)) / sd(xbar), breaks = "FD", freq = FALSE)   # standardized sample means
curve(dnorm, col = "blue", lwd = 2, add = TRUE)   # overlay the standard normal density
31,722 | If $Z_i =\min \{k_i, X_i\}$, $X_i \sim U[a_i, b_i]$, what is the distribution of $\sum_iZ_i$? | Hints:
Assuming that $c$ is fixed and the $X_i$ are independent, you can calculate the mean $\mu_i$ and variance $\sigma_i^2$ of each $Z_i$: for example $\mu_i=E[Z_i] = (1-c)\frac{a_i+k_i}{2} + ck_i$, since $k_i = ca_i + (1-c)b_i$ gives $P(X_i < k_i) = 1-c$ for the uniform part and $P(Z_i = k_i) = c$ for the atom.
Then, providing $a_i$ and $b_i$ do not grow too quickly, you can use the Lyapunov or Lindeberg conditions to apply the central limit theorem with the conclusion that $\displaystyle\frac{1}{\sqrt{\sum_1^n \sigma_i^2}}\left(\sum_1^n Z_i - \sum_1^n \mu_i\right)$ converges in distribution to a standard normal, or in a hand-waving sense $\sum_1^n Z_i$ is approximately normally distributed with mean $\sum_1^n \mu_i$ and variance $\sum_1^n \sigma_i^2$.
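A quick Monte Carlo check of the mean (a hypothetical Python sketch for the case $a_i=0$, $b_i=1$, $c=1/3$, so $k_i = ca_i+(1-c)b_i = 2/3$ and the atom at $k_i$ carries probability $c$):

```python
import random

a, b, c = 0.0, 1.0, 1.0 / 3.0
k = c * a + (1 - c) * b                  # = 2/3

# E[Z]: the uniform part on [a, k] has weight P(X < k) = 1 - c,
# and the atom at k has weight P(X >= k) = c.
mu = (1 - c) * (a + k) / 2 + c * k       # = 4/9

rng = random.Random(1)
zs = [min(k, rng.uniform(a, b)) for _ in range(200_000)]
emp = sum(zs) / len(zs)                  # empirical mean, close to 4/9
```

The empirical mean of 200,000 simulated $\min(k, X)$ draws agrees with the formula to within Monte Carlo error.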
31,723 | If $Z_i =\min \{k_i, X_i\}$, $X_i \sim U[a_i, b_i]$, what is the distribution of $\sum_iZ_i$? | My main worry in this question was whether one could apply the CLT "as usual" in the case I am examining. User @Henry asserted that one can, user @Zen showed it through a simulation. Thus encouraged, I will now prove it analytically.
What I am going to do first is to verify that this variable with the mixed distribution has a "usual" moment generating function.
Denote $\mu_i$ the expected value of $Z_i$, $\sigma_i$ its standard deviation, and the centered and scaled version of $Z_i$ by $\tilde Z_i = \frac {Z_i-\mu_i}{\sigma_i}$.
Applying the change-of-variable formula we find that the continuous part is
$$f_{\tilde Z}(\tilde z_i) = \sigma_if_Z(z_i) = \frac {\sigma_i}{b_i-a_i}$$
The moment generating function of $\tilde Z_i$ should be
$$\tilde M_i(t) = E(e^{\tilde Z_it}) = \int_{-\infty}^{\infty}e^{\tilde z_it}dF_{\tilde Z}(\tilde z_i) = \int_{\tilde a_i}^{\tilde k_i}\frac{\sigma_ie^{\tilde z_it}}{b_i-a_i}d\tilde z_i + ce^{\tilde k_it}$$
$$\Rightarrow \tilde M_i(t)=\frac {\sigma_i}{b_i-a_i}\frac{e^{\tilde k_it}-e^{\tilde a_it}}{t} +ce^{\tilde k_it}$$
with
$$\tilde k_i = \frac {k_i-\mu_i}{\sigma_i},\;\; \tilde a_i = \frac {a_i-\mu_i}{\sigma_i}$$
Using primes to denote derivatives, if we have specified the moment generating function correctly then we should obtain
$$\tilde M_i(0) = 1, \;\; \tilde M_i'(0) = E(\tilde Z_i) = 0, \;\; \tilde M_i''(0) = E(\tilde Z_i^2) = \operatorname {Var}(\tilde Z_i)=1 $$
since this is a centered and scaled random variable.
And indeed, by calculating derivatives, applying L'Hopital's rule many times, (since the value of the MGF at zero must be calculated through limits), and doing algebraic manipulations, I have verified the first two equalities. The third equality proved too tiresome, but I trust that it holds.
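All three equalities can also be checked numerically via finite differences of the MGF above (a sketch for the particular case $a=0$, $b=1$, $k=2/3$, hence an atom of mass $c=1/3$ at $k$):

```python
import math

a, b, k = 0.0, 1.0, 2.0 / 3.0
c = (b - k) / (b - a)                          # P(Z = k) = 1/3

# exact mean and variance of Z = min(k, X), X ~ U(a, b)
mu = (k**2 - a**2) / (2 * (b - a)) + c * k
ez2 = (k**3 - a**3) / (3 * (b - a)) + c * k**2
sigma = math.sqrt(ez2 - mu**2)

k_std = (k - mu) / sigma                       # \tilde k
a_std = (a - mu) / sigma                       # \tilde a

def M(t):
    """MGF of the standardized variable, with the t -> 0 limit built in."""
    if t == 0.0:
        return sigma / (b - a) * (k_std - a_std) + c
    return (sigma / (b - a) * (math.exp(k_std * t) - math.exp(a_std * t)) / t
            + c * math.exp(k_std * t))

h = 1e-3
M0 = M(0.0)                                    # should be 1
M1 = (M(h) - M(-h)) / (2 * h)                  # central difference for M'(0), should be 0
M2 = (M(h) - 2 * M(0.0) + M(-h)) / h**2        # central difference for M''(0), should be 1
```

The three finite-difference values come out as 1, 0 and 1 to within discretization error.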
So we have a proper MGF. If we take its 2nd-order Taylor expansion around zero, we have
$$\tilde M(t) = \tilde M(0) + \tilde M'(0)t +\frac 12\tilde M''(0)t^2 + o(t^2)$$
$$\Rightarrow \tilde M(t) = 1 + \frac 12t^2+ o(t^2)$$
This implies that the characteristic function is (here $i$ denotes the imaginary unit)
$$\tilde \phi(t) = 1 + \frac 12 (it)^2 + o(t^2)= 1 - \frac 12 t^2 + o(t^2)$$.
By the properties of the characteristic function, we have that the characteristic function of $\tilde Z/\sqrt n$ is equal to
$$\tilde \phi_{\tilde Z/\sqrt n}(t)=\tilde \phi_{\tilde Z}(t/\sqrt n) = 1 - \frac {t^2}{2n} + o(t^2/n)$$
and since we have independent random variables, the characteristic function of
$\frac 1{\sqrt n}\sum_i^n\tilde Z_i$ is
$$\tilde \phi_{\frac 1{\sqrt n}\sum_i^n\tilde Z_i}(t)= \prod_{i=1}^n\tilde \phi_{\tilde Z}(t/\sqrt n)=\prod_{i=1}^n\left(1 - \frac {t^2}{2n} + o(t^2/n)\right)$$
Then
$$\lim_{n\rightarrow \infty}\tilde \phi_{\frac 1{\sqrt n}\sum_i^n\tilde Z_i}(t) = \lim_{n\rightarrow \infty}\left(1 - \frac {t^2}{2n}\right)^n = e^{-t^2/2}$$
by the limit representation of the number $e$. It so happens that the last term is the characteristic function of the standard normal distribution, and by Lévy's continuity theorem, we have that
$$\frac 1{\sqrt n}\sum_i^n\tilde Z_i \xrightarrow{d} N(0,1)$$
which is the CLT. Note that the fact that the $Z$-variables are not identically distributed "disappeared" from view once we considered their centered and scaled versions and the 2nd-order Taylor expansion of their MGF/CHF: at that level of approximation, these functions are identical, and all differences are compacted into the remainder terms, which vanish asymptotically.
That idiosyncratic behavior at the individual level, from all individual elements, nevertheless vanishes when we consider the average behavior is, I believe, very well showcased by a nasty creature like a random variable having a mixed distribution.
31,724 | Accurately generating variates from discrete power law distribution | I think (a slightly modified version of) method 2 is quite straightforward, actually
Using the definition of the Pareto distribution function given in Wikipedia
$$F_X(x) = \begin{cases}1-\left(\frac{x_\mathrm{m}}{x}\right)^\alpha & x \ge x_\mathrm{m}, \\0 & x < x_\mathrm{m},\end{cases}$$
if you take $x_m=\frac{1}{2}$ and $\alpha=\gamma$ then the ratio of $p_x$ to $q_x=F_X(x+\frac{1}{2})-F_X(x-\frac{1}{2})$ is maximized at $x=1$, meaning you can just scale by the ratio at $x=1$ and use straight rejection sampling. It seems to be reasonably efficient.
To be more explicit: if you generate from a Pareto with $x_m=\frac{1}{2}$ and $\alpha=\gamma$ and round to the nearest integer (rather than truncate), then it seems to be possible to use rejection sampling with $M = p_1/q_1$ -- each generated value of $x$ from that process is accepted with probability $\frac{p_x}{Mq_x}$.
($M$ here was slightly rounded up since I'm lazy; in reality the fit for this case would be a tiny bit different, but not enough to look different in the plot - in fact the small image makes it look a tad too small when it's actually a fraction too large)
More careful tuning of $x_m$ and $\alpha$ ($\alpha=\gamma-a$ for some $a$ between 0 and 1 say) would probably boost efficiency further, but this approach does reasonably well in the cases I've played with.
If you can give some sense of the typical range of values of $\gamma$ I can take a closer look at efficiency there.
Method 1 can be adapted to be exact, as well, by performing method 1 almost always, then applying another method to deal with the tail. This can be done in ways that may be very fast.
For example, if you take an integer vector of length 256, and fill the first $\lfloor 256 p_1\rfloor$ values with 1, the next $\lfloor 256 p_2\rfloor$ values with 2 and so on until $256 p_i <1$ -- that will almost use up the whole array. The remaining few cells then indicate to move to a second method which combines dealing with the right tail and also the tiny 'left-over' bits of probability from the left part.
The left remnant might then be handled by a number of approaches (even with, say, 'squaring the histogram' if it is automated, but it doesn't have to be as efficient as that), and the right tail can then be done using something like the above accept-reject approach.
The basic algorithm involves generating an integer from 1 to 256 (which requires only 8 bits from the rng; if efficiency is paramount, bit-operations can take those 'off the top', leaving the remainder of the uniform number - best left as an unnormalized integer value to this point - able to be used to deal with the left remnant and right tail if required).
Carefully implemented, this kind of thing can be very fast. You can use different values of $2^k$ than 256 (e.g. $2^{16}$ might be a possibility), but everything is notionally the same. If you take a very large table, however, there may not be enough bits left in the uniform for it to be suitable for generating the tail, and you need a second uniform value there (but it becomes very rarely needed, so it's not much of an issue).
In the same zeta(2) example as above, you'd have 212 1's, 26 2's, 7 3's, 3 4's, one 5 and the values from 250-256 would deal with the remnant. Over 97% of the time you generate one of the values in the table (1-5).
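The table construction above can be sketched directly (assuming, as the 212/26/7/3/1 counts indicate, the target is $p_x = x^{-3}/\zeta(3)$):

```python
# Build the 256-entry lookup table: floor(256 * p_x) copies of each x,
# stopping once 256 * p_x < 1. Unfilled cells divert to a slower exact
# method that handles the left-over probability and the right tail.
SIZE = 256
GAMMA = 3  # p_x = x^-3 / zeta(3), matching the 212/26/7/3/1 counts above

def zeta(s, n=10_000):
    # direct sum plus an integral tail correction -- plenty for this sketch
    return sum(k**-s for k in range(1, n + 1)) + n**(1 - s) / (s - 1)

Z = zeta(GAMMA)
table = []
x = 1
while True:
    copies = int(SIZE * x**-GAMMA / Z)     # floor(256 * p_x)
    if copies < 1:
        break
    table.extend([x] * copies)
    x += 1

leftover = SIZE - len(table)               # cells routed to the slow path
```

A draw then costs one 8-bit uniform and a table lookup most of the time: here the table holds 249 entries, so 7 of the 256 cells divert to the slow path.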
31,725 | Accurately generating variates from discrete power law distribution | As far as I am aware, the state of the art on power laws is the paper by Clauset, Shalizi and Newman, which discusses your problem in Appendix D. Note in particular (where $y$ is a draw from a continuous power law) they say:
Other approximate approaches for generating integers, such as rounding
down (truncating) the value of y, give substantially poorer results
and should not be used.
As an alternative to the accepted answer, the Clauset et al. method for getting accurate draws from the discrete power law distribution is to draw a uniform random $r \in [0,1)$ and then do $x= P^{-1}(1-r)$ where $P(x) = \sum_{a=x}^\infty P(X=a)$ is the complementary cdf of the discrete power law. You need the zeta function to compute $P(x)$ but it only has to be computed up to a certain accuracy, so it is possible to generate draws which have the discrete power law distribution in this way. You need to use the bisection method to solve the equation $P(x) = 1-r$.
Because the exact computation is expensive, an approximate method is also given, which is to define
$$x = \lfloor \frac{1}{2}(1-r)^{-1/(\gamma-1)} + \frac{1}{2}\rfloor$$
which is not quite the same as just rounding values from the continuous power law. The error of this approximation is given in Equation (D.7) of Clauset et al. and depends on $\gamma$.
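For reference, the approximate draw is a one-liner (written with the exponent $-1/(\gamma-1)$, as in Eq. (D.6) of Clauset et al. with $x_{\min}=1$, so that $x \to \infty$ as $r \to 1$):

```python
import math

def draw_approx(r, gamma):
    """Approximate discrete power-law draw from a uniform r in [0, 1)."""
    return math.floor(0.5 * (1.0 - r) ** (-1.0 / (gamma - 1.0)) + 0.5)
```

For example, with $\gamma = 2.5$, `draw_approx(0.5, 2.5)` gives 1 and `draw_approx(0.99, 2.5)` gives 11.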
31,726 | How to measure a classifier's performance when close to 100% of the class labels belong to one class? | A few possibilities come to my mind.
Looking at the overall hit rate is usually not a very good idea, as it will depend on the composition of the test set if the performance for the different classes differs. So at the very least, you should specify (and justify) the relative frequency of the classes in your test data in order to derive a meaningful value.
Secondly, as @Shorack already said, specify which types of error matter, and how much. Often, the classifier needs to meet certain performance criteria in order to be useful (and overall accuracy is rarely the adequate measure).
There are measures like sensitivity, specificity, positive and negative predictive value that take into account the different classes and different types of misclassification. You can say that these measures answer different questions about the classifier:
sensitivity: What fraction of cases truly belonging to class C is recognized as such?
specificity: What fraction of cases truly not belonging to class C is recognized as such?
positive predictive value: Given the classifier predicts class C, what is the probability that this prediction is correct?
negative predictive value: Given the classifier predicts that the case is not from class C, what is the probability that this prediction is correct?
These questions often allow you to formulate specifications that the classifier must meet in order to be useful.
The predictive values are often more important from the point of view of the practical application of the classifier: they are conditioned on the prediction, which is the situation you are in when applying the classifier (a patient usually is not interested in knowing how likely the test is to recognize diseased cases, but rather how likely the stated diagnosis is correct). However, in order to properly calculate them you need to know the relative frequencies of the different classes in the population the classifier is used for (it seems you have this information - so there's nothing that keeps you from looking at that).
You can also look at the information gain that a positive or negative prediction gives you. This is measured by the positive and negative likelihood ratios, LR⁺ and LR⁻. Briefly, they tell you how much the prediction changes the odds towards the class in question.
(see my answer here for a more detailed explanation)
For your trivial classifier, things look like this:
I'll use the "0" class as the class in question, so "positive" means class "0".
Out of 100 cases, 100 are predicted positive (to belong to class 0). 97 of them really do, 3 don't.
The sensitivity for class 0 is 100% (all 97 cases truly belonging to class 0 were recognized), specificity is 0 (none of the other cases were recognized). The positive predictive value (assuming the 97:3 relative frequency is representative) is 97%; the negative predictive value cannot be calculated as no negative prediction occurred.
$LR^+ = \frac{\text{sensitivity}}{1 - \text{specificity}} = 1$
$LR^- = \frac{1 - \text{sensitivity}}{\text{specificity}} = \frac{0}{0}$
Now LR⁺ and LR⁻ are factors with which you multiply the odds for the case to belong to the positive class ("0"). Having an LR⁺ of 1 means that the positive prediction did not give you any information: it will not change the odds. So here you have a measure that clearly expresses the fact that your trivial classifier does not add any information.
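The trivial-classifier numbers above are easy to reproduce from the 2×2 counts (a sketch; taking "positive" = class "0" gives tp = 97, fp = 3, fn = tn = 0):

```python
def metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values and LR+ from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp) if tp + fp else float("nan")   # undefined: no positive predictions
    npv = tn / (tn + fn) if tn + fn else float("nan")   # undefined: no negative predictions
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    return sens, spec, ppv, npv, lr_pos

# the trivial "always predict class 0" classifier on 97 + 3 cases
sens, spec, ppv, npv, lr_pos = metrics(tp=97, fp=3, fn=0, tn=0)
print(sens, spec, ppv, lr_pos)   # 1.0 0.0 0.97 1.0 -> LR+ of 1: no information gained
```

As in the text, npv comes out undefined (NaN) because the classifier never makes a negative prediction.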
Completely different direction of thoughts: You mention that you'd like to evaluate different classifiers. That sounds a bit like classifier comparison or selection. The caveat with the measures I discuss above is that they are subject to very high random uncertainty (meaning you need lots of test cases) if you evaluate them on "hard" class labels. If your prediction is primarily continuous (metric, e.g. posterior probability) you can use related measures that look at the same kind of question but do not use fractions of cases but continuous measures, see here.
These will also be better suited to detect small differences in the predictions.
(@FrankHarrell will tell you that you need "proper scoring rules", so that is another search term to keep in mind.)
Looking at the overall hit rate is usually not a very good idea as it will depend on the composition of the test set if the performance for the different classes d | How to measure a classifier's performance when close to 100% of the class labels belong to one class?
A few possibilities come to my mind.
Looking at the overall hit rate is usually not a very good idea as it will depend on the composition of the test set if the performance for the different classes differs. So at the very least, you should be specify (and justify) the relative frequency of the classes in your test data in order to derive a meaningful value.
Secondly, as @Shorack already said, specify which types of error are how important. Often, the classifier needs to meet certain performance criteria in order to be useful (and overall accuracy is rarely the adequate measure).
There are measures like sensitivity, specificity, positive and negative precdictive value that take into account the different classes and different types of misclassification. You can say that these measures answer different questions about the classifier:
sensitivity: What fraction of cases truely belonging to class C is recognized as such?
specificity: What fraction of cases truely not belonging to class C is recognized as such?
positive predictive value: Given the classifier predicts class C, what is the probability that this prediction is correct?
negative predictive value: Given the classifier predicts that the case is not form class C, what is the probability that this prediction is correct?
These questions often allow to formulate specifications that the classifier must need in order to be useful.
The predictive values are often more important from the point of view of the practical application of the classifier: they are conditioned on the prediction, which is the situation you are in when applying the classifer (a patient usually is not interested in knowing how likely the test is to recognize diseased cases, but rather how likely the stated diagnosis is correct). However, in order to properly calculate them you need to know the relative frequencies of the different classes in the population the classifier is used for (seems you have this information - so there's nothing that keeps you from looking at that).
You can also look at the information gain that a positive or negative prediction gives you. This is measured by the positive and negative likelihood ratios, LR⁺ and LR⁻. Briefly, they tell you how much the prediction changes the odds towards the class in question.
(see my answer here for a more detailed explanation)
For your trivial classifier, things look like this:
I'll use the "0" class as the class in question, so "positive" means class "0".
Out of 100 cases, 100 are predicted positive (to belong to class 0). 97 of them really do, 3 don't.
The sensitivity for class 0 is 100% (all 97 cases truly belonging to class 0 were recognized), specificity is 0 (none of the other cases were recognized). The positive predictive value (assuming the 97:3 relative frequency is representative) is 97%; the negative predictive value cannot be calculated as no negative prediction occurred.
$LR^+ = \frac{\text{sensitivity}}{1 - \text{specificity}} = 1$
$LR^- = \frac{1 - \text{sensitivity}}{\text{specificity}} = \frac{0}{0}$
Now LR⁺ and LR⁻ are factors with which you multiply the odds for the case to belong to the positive class ("0"). Having an LR⁺ of 1 means that the positive prediction did not give you any information: it will not change the odds. So here you have a measure that clearly expresses the fact that your trivial classifier does not add any information.
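These quantities follow directly from the 2x2 confusion matrix. A minimal Python sketch (my illustration, not part of the original answer; the function name is made up) reproducing the numbers for the trivial classifier:

```python
def binary_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV and LR+ from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # fraction of true class-"0" cases recognized
    specificity = tn / (tn + fp)   # fraction of true non-class-"0" cases recognized
    ppv = tp / (tp + fp)           # P(truly class "0" | predicted class "0")
    # LR+ = sensitivity / (1 - specificity); infinite if specificity is 1
    lr_pos = sensitivity / (1 - specificity) if specificity < 1 else float("inf")
    return sensitivity, specificity, ppv, lr_pos

# Trivial classifier: all 100 cases predicted "positive" (class "0");
# 97 truly belong to class "0", 3 do not.
sens, spec, ppv, lr_pos = binary_metrics(tp=97, fn=0, fp=3, tn=0)
# sens = 1.0, spec = 0.0, ppv = 0.97, lr_pos = 1.0 -- no information gained
```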
Completely different direction of thoughts: You mention that you'd like to evaluate different classifiers. That sounds a bit like classifier comparison or selection. The caveat with the measures I discuss above is that they are subject to very high random uncertainty (meaning you need lots of test cases) if you evaluate them on "hard" class labels. If your prediction is primarily continuous (metric, e.g. posterior probability) you can use related measures that look at the same kind of question but do not use fractions of cases but continuous measures, see here.
These will also be better suited to detect small differences in the predictions.
(@FrankHarrell will tell you that you need "proper scoring rules", so that is another search term to keep in mind.)
31,727 | How to measure a classifier's performance when close to 100% of the class labels belong to one class? | First of all: are all hits equally important and all misses equally important? If so, then there is nothing wrong with your null-model scoring that good: it simply is an excellent solution.
If you find it important to have a good performance on predicting the 1's, you could use the F-measure instead. It is basically the harmonic mean of recall (what portion of the actual 1's have been predicted as 1) and precision (what portion of the predicted 1's were actually a 1).
For a model to score high on this measure, it needs to:
Find most of the 1's.
Not often predict a 1 when it is actually 0.
And it needs to do both simultaneously. Even if your model does only one of the two almost perfectly, it will have a low score if it does not perform well on the other requirement.
https://en.wikipedia.org/wiki/F1_score
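As a small illustration (my addition, not from the original answer) of how the harmonic mean punishes an imbalance between precision and recall:

```python
def f1_score(precision, recall):
    """F-measure: harmonic mean of precision and recall.
    High only when BOTH components are high."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A model that finds every 1 (recall = 1.0) but is usually wrong when
# it predicts a 1 (precision = 0.1) still scores poorly:
low = f1_score(0.1, 1.0)    # roughly 0.18
high = f1_score(0.9, 0.8)   # roughly 0.85
```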
31,728 | How to measure a classifier's performance when close to 100% of the class labels belong to one class? | I'm glad that @cbeleites opened the door ... The concordance probability or $c$-index, which happens to equal the ROC area in the special case of binary $Y$, is a nice summary of predictive discrimination. The ROC curve itself has a high ink:information ratio, but the area under the curve, because it equals the concordance probability, has many nice features, one of them being that it is independent of the prevalence of $Y=1$ since it conditions on $Y$. It is not quite proper (use generalized $R^2$ measures or likelihood ratio $\chi^2$ to achieve that) and is not sensitive enough to be used to compare two models, but it is a nice summary of a single model.
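To make the concordance probability concrete (a sketch of mine with made-up scores, not from the original answer): it is the probability that a randomly chosen positive case gets a higher predicted score than a randomly chosen negative case, with ties counting one half:

```python
def c_index(scores_pos, scores_neg):
    """Concordance probability over all (positive, negative) pairs;
    equals the area under the ROC curve for a binary outcome."""
    pairs = [(p, n) for p in scores_pos for n in scores_neg]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Predicted probabilities for 3 diseased and 4 healthy cases (made up):
c = c_index([0.9, 0.8, 0.6], [0.7, 0.4, 0.3, 0.2])   # 11 of 12 pairs concordant
```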
31,729 | How to measure a classifier's performance when close to 100% of the class labels belong to one class? | The Receiver Operating Characteristic (ROC) http://en.wikipedia.org/wiki/Receiver_operating_characteristic curve and associated calculations (namely the Area Under the Curve, AUC) are commonly used. Basically, you imagine your classifier gives a continuous response (e.g. between 0 and 1) and you plot the sensitivity vs. the false alarm rate (1 - specificity) as the decision threshold varies between 0 and 1. These were specifically designed for rare events (spotting enemy planes?).
31,730 | How to measure a classifier's performance when close to 100% of the class labels belong to one class? | When you are dealing with strongly imbalanced data, the Precision-Recall curve is a very good tool, better than its more common cousin the ROC curve.
Davis et al. have shown that an algorithm which optimizes the area under the ROC curve is not guaranteed to optimize the area under the PR curve.
31,731 | When do we have nuisance parameters? | Nuisance parameters are typically introduced to account for extra variation in the model. Typically, the amount of variation in the data accounted for by your parameters of interest is compared to the amount of unaccounted for variation (Residual Error). By reducing the amount of unaccounted for variation, you become better able to detect effects of your parameters of interest.
As an example, the F-statistic is essentially the amount of variation explained by each of your parameters on average divided by the amount of variation a total junk parameter would explain by chance. This junk variation is derived by dividing the Residual Error by the unused degrees of freedom in the model. As you can see, by making the Residual Error smaller, you reduce the size of the denominator, which serves to increase the size of the F-statistic. This increases the chance that you will find an effect of your parameters of interest.
To generate a real-world example: I'm administering painful heat to people at different temperatures in the context of different treatments for pain-relief and I'm measuring how much pain people report feeling. In this case, I would want to include the temperature of my stimulations as a nuisance covariate when examining the effects of the analgesics. This isn't because I'm wondering if higher temperatures would lead to higher pain - I know they would. That's exactly why I would like to account for that variation when I'm trying to compare the effects of the different treatments, thus giving me a better chance of detecting differences due to treatment.
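To make the F-statistic arithmetic above concrete (a sketch with invented sums of squares, not data from this experiment): say the treatment explains 30 units of variation on 1 degree of freedom, and modeling temperature absorbs 50 of the 70 residual units at the cost of one residual degree of freedom:

```python
def f_statistic(ss_effect, df_effect, ss_resid, df_resid):
    """F = (variation explained per df) / (residual variation per df)."""
    return (ss_effect / df_effect) / (ss_resid / df_resid)

# Without the nuisance covariate: residual SS = 70 on 8 df.
f_without = f_statistic(30, 1, 70, 8)   # about 3.43

# Temperature absorbs 50 units of residual variation, costing 1 df.
f_with = f_statistic(30, 1, 20, 7)      # 10.5 -- the treatment is easier to detect
```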
31,732 | When do we have nuisance parameters? | You can't introduce a nuisance parameter unless you forgot or lost the collected data for the non-hypothesis-tested parameter (given a model with two parameters), or changed to a model with more parameters and now have only partial knowledge of the parameters.
In a test of equal means for normals with unknown variances, you have a nuisance parameter - the variances - because you don't know the variance.
If you move to a model with more parameters, you have unknowns and uncertainty, and you estimate the unknowns A-Y in order to hypothesis-test about Z.
31,733 | Occam's razor obsolete? | Depends on what you consider to be the "Occam's razor"; the original formulation is an unclear theological mumbo-jumbo, so it flourished into a bunch of (often incompatible) interpretations.
Vapnik criticizes the ultra-naive version, which says more or less that a model with a lower number of fitted parameters is better because too many parameters imply overfitting, i.e. something along the lines of Runge's phenomenon.
This is of course false in machine learning, because the "greediness of fitting" there is not constrained by the number of parameters but (via some heuristic) by the model's accuracy on future data.
But does it mean that ML training is introducing plurality without necessity? I would personally say no, mainly due to the second part -- ML models are usually better than hand-razored classical regressions, so this extra complexity pays off. Even if it can be reduced by a human to a simpler theory, this almost always comes at the price of extra assumptions, so it is not a fair comparison.
31,734 | Maximum likelihood equivalent to maximum a posterior estimation | One can prove that in the limit of infinite data, both estimates converge.
Let us consider the case of regression, where you assume that the target data is generated from a smooth function with additive Gaussian noise. Then you have for the likelihood of your training data,
$$p(D|\mathbf{w}) = \prod_{n} p(t_{n}|\mathbf{x_{n}},\mathbf{w}) = \prod_{n}\exp \left(-\frac{\beta}{2} \left[t_{n}- y(\mathbf{x_{n}},\mathbf{w}) \right]^{2}\right)/Z_{D}(\beta)$$
where $\mathbf{w}$ is a vector containing all parameters which characterize your algorithm and $Z_{D}(\beta)$ is a normalization constant. If you maximize the log-likelihood of this expression you get the ML estimate.
Now, you add a prior on the parameters which acts as a regularizer and helps you avoid overfitting by controlling the complexity of your classifier. Concretely, in this case it is natural to assume that your parameters are Gaussian distributed,
$$p(\mathbf{w}) = \exp \left( -\frac{\alpha ||\mathbf{w}||^{2}}{2}\right)/Z_{W}(\alpha)$$
MAP is defined as $\arg\max_{w} p(\mathbf{w}|D)$. Using Bayes' theorem,
$$p(\mathbf{w}|D) \propto p(D|\mathbf{w})p(\mathbf{w})$$
If you substitute the above expressions and take logarithms you end up with (the $Z$'s do not depend on $\mathbf{w}$),
$$\arg\min_{w} \sum_{n}\frac{\beta}{2} \left[t_{n}- y(\mathbf{x_{n}},\mathbf{w}) \right]^{2} + \frac{\alpha}{2}\sum_{i}w_{i}^{2}$$
which is nothing other than ridge regression. The more data you add, the bigger the first term becomes in comparison to the second, i.e. the closer the result is to the ML estimate. A very similar derivation can be done for the case of classification.
If you are interested in machine learning, I would recommend you get a copy of Bishop's book.
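A quick numerical sketch of this convergence (my own illustration) for the simplest model $t = wx$, where the regularized objective above has the closed forms $w_{ML} = \sum x t / \sum x^2$ and $w_{MAP} = \beta \sum x t / (\beta \sum x^2 + \alpha)$:

```python
def w_ml(xs, ts):
    """ML (least-squares) estimate for t = w*x."""
    return sum(x * t for x, t in zip(xs, ts)) / sum(x * x for x in xs)

def w_map(xs, ts, beta=1.0, alpha=10.0):
    """MAP (ridge) estimate: minimizer of the regularized objective."""
    return beta * sum(x * t for x, t in zip(xs, ts)) / (
        beta * sum(x * x for x in xs) + alpha)

xs = [0.1 * i for i in range(1, 201)]
ts = [2.0 * x for x in xs]              # noiseless data, true w = 2

# With 5 points the prior pulls MAP strongly toward 0; with 200 it barely does.
gap_small = abs(w_map(xs[:5], ts[:5]) - w_ml(xs[:5], ts[:5]))
gap_large = abs(w_map(xs, ts) - w_ml(xs, ts))
```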
31,735 | Maximum likelihood equivalent to maximum a posterior estimation | In the Wikipedia article on maximum a posteriori estimation, and also at slide 17 of this presentation here, you will find that both estimators coincide when you use a uniform prior over the support of the likelihood function (an improper "flat" prior if the support is infinite).
EDIT: I'm pretty sure this is valid for location parameters, but I'm not sure and couldn't find out whether this is also valid for scale parameters. I think that for those, one needs to use Jeffreys' prior, but I'm not sure; I'd be glad if someone answers that.
31,736 | Is there a theoretical basis for the shrinkage used in Boosted Regression Trees? | Is there ever a theoretical basis for any kind of regularization parameter? Usually, I see them introduced as convenient priors.
In addition to $\nu$, there are a lot of ways to regularize gradient boosted trees.
Tree depth,
Minimum sample size for splitting trees,
Minimum sample size for tree leaves,
Number of trees,
Randomly choosing small subsets of features for different trees.
I'm sure I forgot some. A good summary is made in this talk about Gradient Boosted Regression Trees (GBRT).
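For concreteness, these knobs map roughly onto the constructor parameters of scikit-learn's GradientBoostingRegressor (my mapping from memory of the documented API — verify the names against the scikit-learn docs):

```python
# Assumed correspondence between the knobs listed above and
# scikit-learn GradientBoostingRegressor parameter names:
regularization_knobs = {
    "shrinkage (nu)":                   "learning_rate",
    "tree depth":                       "max_depth",
    "min sample size to split a node":  "min_samples_split",
    "min sample size in a leaf":        "min_samples_leaf",
    "number of trees":                  "n_estimators",
    "random feature subsets":           "max_features",
}
```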
31,737 | Is there a theoretical basis for the shrinkage used in Boosted Regression Trees? | Yes, there is a theoretical basis for the shrinkage $\nu$. It is not only a regularization parameter.
Remember that Gradient Boosting is equivalent to estimating the parameters of an additive model by minimizing a differentiable loss function (exponential loss in the case of Adaboost, multinomial deviance for classification, etc.) using Gradient Descent (see Friedman et al. 2000).
So $\nu$ controls the rate at which the loss function is minimized. Smaller values of $\nu$ result in greater accuracy because with smaller steps, the optimization is more precise (however, it takes more time because more steps are required).
With $\nu$ we have control on the rate at which the boosting algorithm descends the error surface (or ascends the likelihood surface).
Performance is best when $\nu$ is as small as possible with decreasing marginal utility for smaller and smaller $\nu$.
(Both citations are from Ridgeway 2007)
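A toy sketch of this (mine, not Ridgeway's): gradient boosting under squared loss with the simplest possible base learner, a constant prediction (a depth-0 "tree" that fits the mean of the current residuals). Each round takes a step of size $\nu$ down the error surface, so a smaller $\nu$ needs more rounds to get equally close to the optimum:

```python
def boost_constant(targets, nu, n_rounds):
    """Gradient boosting with squared loss and a constant base learner:
    each round fits the mean of the current residuals and adds nu times
    it, i.e. a step of size nu down the error surface."""
    pred = 0.0
    for _ in range(n_rounds):
        residual_mean = sum(t - pred for t in targets) / len(targets)
        pred += nu * residual_mean
    return pred

targets = [1.0, 2.0, 3.0, 6.0]   # squared loss is minimized at the mean, 3.0

err_big_nu = abs(boost_constant(targets, nu=0.5, n_rounds=10) - 3.0)
err_small_nu_few = abs(boost_constant(targets, nu=0.05, n_rounds=10) - 3.0)
err_small_nu_many = abs(boost_constant(targets, nu=0.05, n_rounds=200) - 3.0)
# Small nu needs many more rounds, but eventually gets at least as close.
```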
31,738 | How can I find the standard deviation in categorical distribution | There is no standard deviation of a categorical variable - it makes no sense, just as the mean makes no sense. E.g. in your example, what is the "average color"?
But there are ways to estimate the error of a binomial or multinomial proportion. It isn't clear which you want, since your title seems to ask for the latter while your text seems to ask for the former. Even for the binomial proportion, it's trickier than many people think.
The classic formula for a 95% CI for a binomial proportion is
$CI=\hat{p} \pm 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$
but this may not be best. See e.g. Brown, Cai & DasGupta.
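A quick numerical sketch (my addition): near the boundary, the classic (Wald) interval above can leave $[0,1]$ entirely, which is one reason Brown, Cai & DasGupta recommend alternatives such as the Wilson interval:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Classic (Wald) 95% confidence interval for a binomial proportion."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# Near the boundary the interval can exceed 1:
lo, hi = wald_ci(0.97, 100)   # upper limit is greater than 1
```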
31,739 | Intercept calculation in Theil-Sen estimator | The Theil-Sen estimator is essentially an estimator for the slope alone; the line has been constructed in a host of different ways - there is a large variety of ways to calculate the intercept.
You said:
My understanding of the intercept calculation is that I first calculate the median slope, and then construct a line through every data point with this slope, find the intercept of every line, and then take the median intercept.
A common one (probably the most common) is to compute median($y-bx$). This is what Sen looked at, for example; if I understand your intercept definition correctly this is the same as the intercept you mention.
There are a couple of approaches that compute the intercept of the line through each pair of points and attempt to get some kind of weighted median based off that (putting more weight on the points further apart in x-space).
Another is to try to get an estimator with higher efficiency at the normal (akin to that of the slope estimator in typical situations) and similar breakdown point to the slope estimate (there's probably little point in having better breakdown at the expense of efficiency), such as using the Hodges-Lehmann estimator (median of pairwise averages) on $y-bx$. This has a kind of symmetry in the way the slopes and intercepts are defined ... and generally gives something very close to the LS line when the normal assumptions nearly hold, whereas the Sen-intercept can be - relatively speaking - quite different.
Some people just compute the mean residual.
There are still other suggestions that have been looked at. There's really no 'one' intercept to go with the slope estimate.
Dietz lists several possibilities, possibly even including all the ones I mentioned, but that's by no means exhaustive.
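A minimal Python sketch (my illustration) of the most common construction mentioned above — median pairwise slope plus the Sen intercept median($y - bx$):

```python
from statistics import median

def theil_sen(xs, ys):
    """Median pairwise slope, with the Sen intercept median(y - b*x).
    The other intercept variants discussed above would change only the
    last line."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs))
              for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    b = median(slopes)
    a = median(y - b * x for x, y in zip(xs, ys))
    return b, a

# Exactly linear data recovers the line y = 2x + 1:
b, a = theil_sen([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```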
31,740 | Intercept calculation in Theil-Sen estimator | The Kendall–Theil Robust Line program from the USGS has a companion PDF.
On page 8 (PDF page 15) it states the method used and formula as you found but gives the reference as Conover.
Intercept
The estimate of the intercept is calculated by use of the
Conover (1980) equation
$$b = Y_\text{median} - m\times X_\text{median} \qquad (6)$$
where
$b$ is the estimated intercept,
$Y_\text{median}$ is the median of the response variables,
$m$ is the estimated slope,
and
$X_\text{median}$ is the median of the explanatory variables.
I confirm this does produce the same result as the program.
Whether there are superior methods and so on, as always, a matter of opinion and your particular circumstances.
The M-estimation algorithm is arguably erroneous.
for i = 1, # dat-1 do
for j = i+1, # dat do
The change there is that j indexes from i+1, so instances of i == j are never processed.
Then either rank (sort) the result choosing index as described, or arguably take the median, which will give a slightly different result. If you plot, the data will look like a CDF plot.
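As a sketch of the corrected pairwise loop together with the Conover (1980) intercept of equation (6) (Python as a stand-in for the program's own language; the data and function name are illustrative):

```python
import statistics

def kendall_theil(x, y):
    """Theil-Sen slope via the corrected pairwise loop (j starts at i+1,
    so i == j pairs and duplicate (j, i) pairs are never processed),
    plus the Conover (1980) intercept b = median(y) - m * median(x)."""
    n = len(x)
    slopes = []
    for i in range(n - 1):          # i = 1 .. n-1 in the quoted pseudocode
        for j in range(i + 1, n):   # j = i+1 .. n
            if x[j] != x[i]:        # skip ties in x (slope undefined)
                slopes.append((y[j] - y[i]) / (x[j] - x[i]))
    m = statistics.median(slopes)
    b = statistics.median(y) - m * statistics.median(x)
    return m, b

m, b = kendall_theil([1, 2, 3, 4, 5], [2.0, 4.1, 5.9, 8.0, 10.1])
```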
31,741 | What is the difference between the sum of two covariance matrices and the covariance matrix of the sum of two variables? | While searching through unanswered questions I noticed this one again and decided, in agreement with whuber, that keeping essentially answered questions off of the unanswered tab is higher priority than my own personal preferences about what is "worthy" of answer vs. comment status, so I pasted my comment as an answer.
They are different because ${\bf K}_{X} + {\bf K}_Y$ is the sum of two covariance matrices while ${\bf K}_{X+Y}$ is the covariance matrix of the random variable $X+Y$. To see why the two matrices are different, use the bilinearity of covariance to see that
$$ [{\bf K}_{X+Y}]_{ij}=[{\bf K}_{X}]_{ij} +[{\bf K}_{Y}]_{ij}+ {\rm cov}(X_i,Y_j)+{\rm cov}(X_j,Y_i)$$
i.e. the cross-covariances are missing from ${\bf K}_{X} + {\bf K}_Y$ (note I assume $X,Y$ are of equal dimension to ensure that question makes sense). So, ${\bf K}_{X+Y}$ is the covariance matrix of $X+Y$ and ${\bf K}_{X} + {\bf K}_Y$ represents the special case where ${\rm cov}(X_i,Y_j)=-{\rm cov}(X_j,Y_i)$ for each pair $(i,j)$, the most notable example being when every element of $X$ is uncorrelated with every element of $Y$.
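The bilinearity identity is easy to verify numerically; a small NumPy sketch (the `cov` helper computes sample cross-covariance matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10000, 3
# Correlated X and Y so the cross-covariance terms are non-zero.
X = rng.standard_normal((n, d))
Y = 0.5 * X + rng.standard_normal((n, d))

def cov(A, B):
    """Sample cross-covariance matrix with entries cov(A_i, B_j)."""
    Ac = A - A.mean(axis=0)
    Bc = B - B.mean(axis=0)
    return Ac.T @ Bc / (len(A) - 1)

K_X = cov(X, X)
K_Y = cov(Y, Y)
K_XY = cov(X, Y)
K_sum = cov(X + Y, X + Y)

# Bilinearity: K_{X+Y} = K_X + K_Y + cov(X, Y) + cov(Y, X).
lhs = K_sum
rhs = K_X + K_Y + K_XY + K_XY.T
```

Because $X$ and $Y$ are correlated here, `K_sum` differs visibly from `K_X + K_Y`, while the identity above holds to floating-point precision.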
31,742 | Specification and interpretation of interaction terms using glm() | x/z expands to x + x:z and so far I have used this only to model nested random effects.
set.seed(42)
x <- rnorm(100)
z <- rnorm(100)
y <- sample(c(0,1),100,TRUE)
fit2 <- glm(y ~ x/z, family = "binomial")
fit3 <- glm(y ~ x + z %in% x, family = "binomial")
identical(summary(fit2)$coefficients,summary(fit3)$coefficients)
#TRUE
fit4 <- glm(y ~ x + x:z, family = "binomial")
identical(summary(fit2)$coefficients,summary(fit4)$coefficients)
#TRUE
fit5 <- glm(y ~ I(x/z), family = "binomial")
a <- x/z
fit6 <- glm(y ~ a, family = "binomial")
all.equal(summary(fit5)$coefficients,summary(fit6)$coefficients)
#[1] "Attributes: < Component 2: Component 1: 1 string mismatch >"
#which means that only the rownames don't match, but values are identical
31,743 | Specification and interpretation of interaction terms using glm() | I have never seen x/d in any formula. Can you give a link to such a page?
The best way to specify a formula is using + and :, e.g., if you want to model y on x1 and x2 and the interaction of x1 and x2, you will need to give: y ~ x1 + x2 + x1:x2 or y ~ x1 * x2 (which is a shortcut).
Now comes the question of interpreting coeff when you have interaction terms. Imagine a simple linear model: y ~ x1 + x2. The coeff of x1 or x2 indicates the increase in y with a unit increase in x1 or x2 respectively.
However, the moment you add an interaction term, interpretation is not so easy. If you increase x1 by 1 unit in the model y = b0 + b1*x1 + b2*x2 + b3*x1*x2, the increase in y is b1 + b3*x2. As you can see, the increase is not constant: it depends on the level of x2. What you can possibly do is plot response curves (y vs. x1) for various levels of x2, to show the change in response.
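This can be checked numerically; a small sketch with plain NumPy least squares (an identity-link stand-in for glm, with made-up coefficients):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + 0.8 * x1 * x2 + 0.1 * rng.standard_normal(n)

# Fit y ~ x1 + x2 + x1:x2 by ordinary least squares.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

def predict(a, b):
    return b0 + b1 * a + b2 * b + b3 * a * b

def effect_at(x2_level):
    """Change in y for a one-unit increase in x1, holding x2 fixed."""
    return predict(1.0, x2_level) - predict(0.0, x2_level)
```

At x2 = 0 the effect of x1 is just b1; at any other level it is b1 + b3*x2, so the "slope" of x1 shifts with x2.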
Hope this helps. I will try and answer the rest of the questions in another post.
31,744 | Stepwise regression vs. elastic net | Your question has an implicit assumption that $R^2$ is a good measure of the quality of the fit and is appropriate for comparing between models. I think that your background information provides evidence that $R^2$ is not a good tool for what you are trying to do. After all, you can increase $R^2$ by adding nonsense variables to your model.
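That last point is easy to demonstrate; a small sketch (plain NumPy rather than the R tooling assumed in the question) showing in-sample $R^2$ rising when pure-noise columns are appended:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.standard_normal(n)
y = 1.0 + x + rng.standard_normal(n)

def r2(X, y):
    """In-sample R^2 of an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
    resid = y - Xd @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_true = r2(x[:, None], y)
# Tack on 20 columns of pure noise: in-sample R^2 can only go up.
junk = rng.standard_normal((n, 20))
r2_junk = r2(np.column_stack([x, junk]), y)
```

The nonsense columns explain nothing out of sample, yet `r2_junk` exceeds `r2_true` — exactly why raw $R^2$ is a poor yardstick for comparing models of different sizes.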
Did you take the variables that were found using the elastic net and refit a new regression model using those variables rather than use the estimates from the elasticnet fit? That is kind of like entering your data into a nice statistical software program and using it to round the data and print it out so you can calculate the mean using an abacus.
If you want the fewest predictors possible (and still get a reasonable fit) then lasso methods will tend to result in fewer predictors than elasticnet methods. The advantage of the elasticnet method is not in finding the fewest variables, but in finding a good model that takes advantage of the information in the variables and avoids the bias that you get with stepwise models.
A better comparison would be how well they predict a new set of observations, or maybe a PRESS statistic or cross-validation.
31,745 | Ordinal predictors in linear multiple regression in SPSS or R | You have two options for including this variable in the regression:
Just use the variable as it is, no dummy variable coding. People do this all the time with 5 point Likert scales. This method assumes that moving from 0 to 1 has the same effect as moving from 1 to 2 and from 2 to 3. You may not want to make this assumption.
Use the as.factor function in R to code the variable into three dummy variables relative to the base case (0). You no longer have to assume that the marginal effect of increasing by one level is constant.
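The two codings can be sketched as follows (Python/NumPy as a stand-in for the R as.factor approach; the data are illustrative):

```python
import numpy as np

# Ordinal predictor with levels 0..3.
x = np.array([0, 1, 2, 3, 1, 2, 0, 3])

# Option 1: treat as numeric -- one column, equal spacing assumed.
X_linear = x.reshape(-1, 1)

# Option 2: dummy-code relative to the base level 0
# (what as.factor() does inside an R model formula).
levels = [1, 2, 3]
X_dummy = np.column_stack([(x == k).astype(float) for k in levels])
```

A regression on `X_dummy` estimates a separate shift for each level relative to level 0; a regression on `X_linear` forces those shifts to be equally spaced.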
The more levels you have in your ordinal variable, the more option 1 would be preferred over option 2 - at some point you have more dummy variables than you want to deal with and interpret.
I don't think there is a way to "force" an independent variable to be ordinal.
31,746 | Ordinal predictors in linear multiple regression in SPSS or R | A third option is to use a dummy coding as in (2) but to penalize differences in the coefficients of adjacent categories:
http://cran.r-project.org/web/packages/ordPens/ordPens.pdf
http://cran.r-project.org/web/packages/ordPens/ordPens.pdf | Ordinal predictors in linear multiple regression in SPSS or R
A third option is to use a dummy coding as in (2) but to penalize differences in the coefficients of adjacent categories:
http://cran.r-project.org/web/packages/ordPens/ordPens.pdf | Ordinal predictors in linear multiple regression in SPSS or R
A third option is to use a dummy coding as in (2) but to penalize differences in the coefficients of adjacent categories:
http://cran.r-project.org/web/packages/ordPens/ordPens.pdf |
31,747 | Why does standard error not involve population size? | This formula assumes that the sample is a very small proportion of the population.
If there is a finite population and the sample is a substantial part of it, you can use the finite population correction:
$\text{FPC} = \sqrt{\frac{N-n}{N}}$
where $n$ is the sample size and $N$ is the population size. If $N = n$ then this will become 0, as you suspected. Ordinarily, though, it makes very little difference.
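A minimal sketch of the correction (Python; the function names are illustrative):

```python
import math

def fpc(n, N):
    """Finite population correction factor sqrt((N - n) / N)."""
    return math.sqrt((N - n) / N)

def se_mean(s, n, N=None):
    """Standard error of the sample mean, with optional FPC."""
    se = s / math.sqrt(n)
    return se if N is None else se * fpc(n, N)
```

For a sample of 100 from a population of a million, `fpc(100, 1_000_000)` is about 0.99995 — a negligible adjustment, which is why the textbook formula leaves it out; at `n == N` it is exactly 0.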
31,748 | Problem calculating, interpreting regsubsets and general questions about model selection procedure | To further the idea about using all subsets or best subsets tools for finding a "Best" fitting model, the book "How to Lie with Statistics" by Darrell Huff tells a story about Reader's Digest publishing a comparison of the chemicals in cigarette smoke. The point of their article was to show that there was no real difference between the different brands, but one brand was lowest in some of the chemicals (but by so little that the difference was meaningless) and that brand started a big advertisement campaign based on being the "lowest" or "best" according to Reader's Digest.
All subsets or best subsets regressions are similar, the real message from the graph you show is not "here is the Best" but really that there is no one best model. From a statistical view (using adjusted r-squared) the majority of your model are pretty much the same (the few at the bottom are inferior to those above, but the rest are all similar). Your wanting to find a "Best" model from that table is like the cigarette company saying that their product was the best when the purpose was to show that they were all similar.
Here is something to try, randomly delete one point from the dataset and rerun the analysis, do you get the same "Best" model? or does it change? repeat a few times deleting a different point each time to see how the "Best" model changes. Are you really comfortable claiming a model is "Best" when that small of a change in the data gives a different "Best"? Also look at how much different the coefficients are between the different models, how do you interpret those changes?
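This stability check can be sketched as follows (Python/NumPy as a stand-in for regsubsets, using an exhaustive adjusted-$R^2$ search on simulated data; names are illustrative):

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n, p = 30, 6
X = rng.standard_normal((n, p))
# Only columns 0 and 1 actually matter; the rest are noise.
y = X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n)

def adj_r2(Xs, y):
    """Adjusted R^2 of an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), Xs])
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
    rss = np.sum((y - Xd @ beta) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    k = Xd.shape[1]
    return 1 - (rss / (len(y) - k)) / (tss / (len(y) - 1))

def best_subset(X, y):
    """Subset of columns maximising adjusted R^2 (all-subsets search)."""
    cols = range(X.shape[1])
    subsets = [s for r in range(1, X.shape[1] + 1)
               for s in itertools.combinations(cols, r)]
    return max(subsets, key=lambda s: adj_r2(X[:, s], y))

# Jackknife: drop one observation at a time and re-run the search.
winners = {best_subset(np.delete(X, i, axis=0), np.delete(y, i))
           for i in range(n)}
```

If `winners` contains more than one subset across the 30 reruns, the "Best" model is not stable under even a one-point perturbation of the data — which is exactly the point.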
It is better to understand the question and the science behind the data and use that information to help decide on a "Best" model. Consider 2 models that are very similar; the only difference is that one model includes $x_1$ and the other includes $x_2$ instead. The model with $x_1$ fits slightly better (adj r-squared of 0.49 vs. 0.48), however to measure $x_1$ requires surgery and waiting 2 weeks for lab results, while measuring $x_2$ takes 5 minutes and a sphygmomanometer. Would it really be worth the extra time, expense, and risk to get that extra 0.01 in the adjusted r-squared, or would the better model be the quicker, cheaper, safer model? What makes sense from the science standpoint? In your example above do you really think that increasing spending on the military will improve Olympic performance? or is this a case of that variable acting as a surrogate for other spending variables that would have a more direct effect?
Other things to consider include taking several good models and combining them (Model Averaging), or rather than having each variable be either all in or all out adding some form of penalty (Ridge regression, LASSO, elasticnet,...).
31,749 | Problem calculating, interpreting regsubsets and general questions about model selection procedure | Some questions have been answered so I am only addressing the ones regarding model selection. AIC, BIC, Mallows' Cp and adjusted R$^2$ are all methods to compare and select models that take into account problems of overfitted models by an adjusted measure or a penalty function in the criteria. But in cases where the penalty functions differ it is very possible for two similar criteria to lead to different choices for a final model. The minimum value for different criteria can occur at different models. This has been observed quite often when looking at models chosen by AIC and BIC.
I really don't know what you mean by best model. Each criterion essentially gives a different definition of best. You can call a model best in terms of information, entropy, stochastic complexity, percentage of variance explained (adjusted) and more. If you are dealing with a specific criterion and mean by best capturing the true minimum for, say, AIC over all possible models, then that can only be guaranteed by looking at all models (i.e. all subset selections for the variables). Step-up, step-down and step-wise procedures do not always find the best model in the sense of a specific criterion. With step-wise regression you can even get different answers by starting at different models. I am sure Frank Harrell would have a lot to say about this.
To learn more, there are several good books on model/subset selection available and I have referenced some here on other posts. Also Lacey Gunter's monograph with Springer in their SpringerBrief series will be coming out soon. I was a coauthor with her on that book.
31,750 | Error distribution for linear and logistic regression | 1) If $u$ has a normal distribution, i.e. $N(0,\sigma^2)$, then $Var(Y|X_2)=Var(\beta_1+\beta_2X_2)+Var(u)=0+\sigma^2=\sigma^2$, since $\beta_1+\beta_2X_2$ is not a random variable.
2) In logistic regression, it is assumed that the errors follow a binomial distribution, as mentioned here. It is better to write it as $Var(Y_j|X_j)=m_j\,E[Y_j|X_j]\,(1-E[Y_j|X_j])=m_j\,\pi(X_j)\,(1-\pi(X_j))$, since those probabilities depend on $X_j$, as referenced here or in Applied Logistic Regression.
31,751 | Solving a simple integral equation by random sampling | As cardinal pointed out in his comment, your question can be restated as follows.
By simple algebra, the integral equation can be rewritten as
$$
\int_0^z g(x)\,dx = \frac{1}{2} \, ,
$$
in which $g$ is the probability density function defined as
$$
g(x)=\frac{f(x)}{\int_0^1 f(t)\,dt} \, .
$$
Let $X$ be a random variable with density $g$. By definition, $P\{X\leq z\}=\int_0^z g(x)\,dx$, so your integral equation is equivalent to
$$
P\{X\leq z\}=\frac{1}{2} \, ,
$$
which means that your problem can be stated as:
"Let $X$ be a random variable with density $g$. Find the median of $X$."
To estimate the median of $X$, use any simulation method to draw a sample of values of $X$ and take as your estimate the sample median.
One possibility is to use the Metropolis-Hastings algorithm to get a sample of points with the desired distribution. Due to the expression of the acceptance probability in the Metropolis-Hastings algorithm, we don't need to know the value of the normalization constant $\int_0^1 f(t)\,dt$ of the density $g$. So, we don't have to do this integration.
The code below uses a particularly simple form of the Metropolis-Hastings algorithm known as the Independence Sampler, which uses a proposal whose distribution does not depend on the current value of the chain. I have used independent uniform proposals. For comparison, the script outputs the Monte Carlo minimum and the result found with standard optimization. The sample points are stored in the vector chain, but we discard the first $10000$ points, which form the so-called "burn in" period of the simulation.
BURN_IN = 10000
DRAWS = 100000
f = function(x) exp(sin(x))
chain = numeric(BURN_IN + DRAWS)
x = 1/2
for (i in 1:(BURN_IN + DRAWS)) {
y = runif(1) # proposal
if (runif(1) < min(1, f(y)/f(x))) x = y
chain[i] = x
}
x_min = median(chain[(BURN_IN + 1):(BURN_IN + DRAWS)])  # keep only post-burn-in draws
cat("Metropolis minimum found at", x_min, "\n\n")
# MONTE CARLO ENDS HERE. The integrations below are just to check the results.
A = integrate(f, 0, 1)$value
F = function(x) (abs(integrate(f, 0, x)$value - A/2))
cat("Optimize minimum found at", optimize(F, c(0, 1))$minimum, "\n")
Here are some results:
Metropolis minimum found at 0.6005409
Optimize minimum found at 0.601365
This code is meant just as a starting point for what you really need. Hence, use with care.
31,752 | Solving a simple integral equation by random sampling | The quality of the integral approximation, at least in a case as simple as 1D, is given by (Theorem 2.10 in Niederreiter (1992)):
$$
\Bigl|\frac 1N \sum_{n=1}^N f(x_n) - \int_0^1 f(u) \, {\rm d}u \Bigr| \le \omega (f; D_N^*(x_1, \ldots, x_N) )
$$
where
$$
\omega(f;t) = \sup \{ |f(u)-f(v)| : u, v \in [0,1], |u-v|\le t , t>0\}
$$
is the function's modulus of continuity (related to total variation, and easily expressible for Lipschitz functions), and
$$
D_N^*(x_1,\ldots,x_N) = \sup_u \Bigl| \frac1N \sum_n 1\bigl\{ x_n \in [0,u) \bigr\} - u \Bigr| = \frac1{2N} + \max_n \Bigl|x_n - \frac{2n-1}{2N}\Bigr|
$$
is the (extreme) discrepancy, or the maximum difference between the fraction of hits by the sequence $x_1, \ldots, x_N$ of a semi-open interval $[0,u)$ and its Lebesgue measure $u$. The first expression is the definition, and the second expression is a property of 1D sequences in $[0,1]$ sorted in increasing order (Theorem 2.6 in the same book).
So obviously to minimize the error in the integral approximation, at least in the right-hand side of your equation, you need to take $x_n = (2n-1)/2N$. Screw the random evaluations, they run a risk of having a random gap at an important feature of the function.
A big disadvantage to this approach is that you have to commit to a value of $N$ to produce this uniformly distributed sequence. If you are not happy with the quality of approximation it provides, all you can do is to double the value of $N$ and hit all the midpoints of the previously created intervals.
If you want to have a solution where you can increase the number of points more gradually, you can continue reading that book, and learn about van der Corput sequences and radical inverses. See Low discrepancy sequences on Wikipedia, it provides all the details.
Update: to solve for $z$, define the partial sum
$$
S_k = \frac1N \sum_{n=1}^k f\Bigl( \frac{2n-1}{2N} \Bigr).$$
Find $k$ such that
$$
S_k \le \frac12 S_N < S_{k+1},
$$
and interpolate to find
$$
z_N = \frac{2k-1}{2N} + \frac{S_N/2 - S_k}{N(S_{k+1}-S_k)}.
$$
This interpolation assumes that $f(\cdot)$ is continuous. If additionally $f(\cdot)$ is twice differentiable, then this approximation can be refined by integrating the second-order expansion to incorporate $S_{k-1}$ and $S_{k+2}$, and solving a cubic equation for $z$.
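As a numeric sanity check of the partial-sum interpolation, here is a short Python sketch (using $f(x)=e^{\sin x}$, the example function from the other answer in this thread, and an arbitrary $N=1000$):

```python
import math

def half_area_point(f, N=1000):
    # Midpoint nodes x_n = (2n-1)/(2N), partial sums S_k = (1/N) * sum_{n<=k} f(x_n).
    S = [0.0]
    for n in range(1, N + 1):
        S.append(S[-1] + f((2 * n - 1) / (2 * N)) / N)
    target = S[N] / 2          # S_N / 2, i.e. half of the approximated integral
    k = 0
    while S[k + 1] <= target:  # find k with S_k <= S_N/2 < S_{k+1}
        k += 1
    # Linear interpolation between the midpoints x_k and x_{k+1}.
    return (2 * k - 1) / (2 * N) + (target - S[k]) / (N * (S[k + 1] - S[k]))

z = half_area_point(lambda x: math.exp(math.sin(x)))
print(z)  # close to 0.6013
```

For this $f$ the result agrees with the value $z \approx 0.6014$ found by direct optimization in the other answer.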
31,753 | Understanding the Behrens–Fisher problem | I may have mentioned this on the site once before. I will try to find a link to a post where I discussed this. Around 1977, when I was a graduate student at Stanford, we had a Fisher seminar that I enrolled in. A number of Stanford professors and visitors participated, including Brad Efron and visitors Seymour Geisser and David Hinkley. Jimmie Savage had just at that time published an article with the title "On Rereading R. A. Fisher", in the Annals of Statistics I think. Since you are so interested in Fisher I recommend you find and read this paper.
Motivated by the paper the seminar was designed to reread many of Fisher's famous papers. My assignment was the article on the Behrens-Fisher problem. My feeling is that Fisher was vain and stubborn but never foolish. He had great geometric intuition and at times had difficulty communicating with others. He had a very cordial relationship with Gosset but harsh disagreements with Karl Pearson (maximum likelihood vs method of moments) and with Neyman and Egon Pearson (significance testing via fiducial inference vs the Neyman-Pearson approach to hypothesis testing). Although the fiducial argument is generally considered to be Fisher's only big flaw and has been discredited, the approach is not totally dead and there has been new research in it in recent years.
I think that fiducial inference was Fisher's way to try to be an "objective Bayesian". I am sure he thought long and hard about the statistical foundations. He didn't accept the Bayesian approach but also did not see the idea of basing inference on considering the possible samples that you didn't draw as making sense either. He believed that inference should be based only on the data at hand. This idea is a lot like Bayesian inference in that the Bayesians draw inference based solely on the data (the likelihood) and the parameters (the prior distribution). Fisher in my view was thinking a lot like Jeffreys except that he wanted inference to be based on the likelihood and wanted to dispense with the prior altogether. That is what led to fiducial inference.
A Link to the Savage article
The Biography by Fisher's daughter Joan Fisher Box
R A Fisher An Appreciation, Hinkley and Feinberg editors
A book by Erich Lehmann about Fisher and Neyman and the birth of Classical Statistics
This is a link to an earlier post that I commented on that you also posted. Behrens–Fisher problem
In conclusion I think I need to address your short question. If the statement you quoted, "Fisher approximated the distribution of this by ignoring the random variation of the relative sizes of the standard deviations", is what you are referring to, I think that is totally false. Fisher never ignored variation. I reiterate that I think the fiducial argument was grounded in the idea that the observed data and the likelihood function should be the basis of inference, and not the other samples that you could have gotten from the population distribution. So I would side with you on this one. With respect to Bartlett, as I recall from my study of this so many years ago, they also had heated debates on this and Bartlett made a good case and held his own in the debate.
31,754 | Maximum Likelihood Estimation of Inverse Gamma Distribution in R or RPy | Since you know the density, you can just use fitdistr.
# Sample data
library(LaplacesDemon)
x <- rinvgamma(1000, 1, 2)
library(MASS)
f <- function(x, rho, a, s)  # density of a shifted inverse gamma: shape rho, scale a, shift s
  1/(a*gamma(rho)) * (a / (x-s))^(rho+1) * exp( - a/(x-s) )
fitdistr( x, f, list(rho=1, a=1, s=0) )
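If you can't use R, the same maximum-likelihood fit can be sketched in pure Python via the profile likelihood. This is an illustrative reimplementation, not fitdistr, and it fixes the shift $s$ at $0$; the true parameter values and sample size are made up. Given the shape rho, the MLE of the scale has the closed form a = rho*n / sum(1/x), so only rho needs a one-dimensional search:

```python
import math
import random

random.seed(1)

# Simulated data: if G ~ Gamma(shape=rho, scale=1), then a / G ~ InvGamma(rho, a).
true_rho, true_a = 3.0, 2.0
x = [true_a / random.gammavariate(true_rho, 1.0) for _ in range(5000)]

n = len(x)
sum_inv = sum(1.0 / v for v in x)
sum_log = sum(math.log(v) for v in x)

def profile_loglik(rho):
    a = rho * n / sum_inv  # closed-form MLE of the scale for this rho (s fixed at 0)
    return (n * rho * math.log(a) - n * math.lgamma(rho)
            - (rho + 1) * sum_log - a * sum_inv)

# Crude 1-D grid search over the shape parameter rho in [0.5, 8.0].
rho_hat = max((profile_loglik(r / 100), r / 100) for r in range(50, 801))[1]
a_hat = rho_hat * n / sum_inv
print(rho_hat, a_hat)  # should land near the true values (3, 2)
```

The grid search is crude; in practice you would hand the profile log-likelihood to a proper 1-D optimizer.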
31,755 | How to measure correlation between categorical variable? [duplicate] | There are way too many measures, ranging from chi-square based ones (the phi coefficient) to less commonly used ones (Goodman and Kruskal's lambda).
There is a series of four articles starting here on the issue.
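As a concrete instance of the chi-square-based family, here is a pure-Python sketch of Cramér's V, which reduces to the absolute phi coefficient for a 2x2 table (the counts below are made up):

```python
def cramers_v(table):
    # Pearson chi-square statistic from the observed counts, then Cramér's V.
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = sum(
        (table[i][j] - row_tot[i] * col_tot[j] / n) ** 2 / (row_tot[i] * col_tot[j] / n)
        for i in range(len(table)) for j in range(len(col_tot))
    )
    k = min(len(table), len(col_tot)) - 1  # min(rows, cols) - 1
    return (chi2 / (n * k)) ** 0.5

# Hypothetical 2x2 table of counts; for 2x2 tables, V equals |phi|.
v = cramers_v([[10, 20], [20, 10]])
print(v)  # 1/3 for this table
```

A value of 0 means no association; 1 means a perfect association (e.g. `cramers_v([[30, 0], [0, 30]])` gives 1.0).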
There is a series of four articles starting here on the iss | How to measure correlation between categorical variable? [duplicate]
There are way too many measures starting from chi-square based (phi coefficient) going to less commonly used (Goodman and Kruskal's lambda).
There is a series of four articles starting here on the issue. | How to measure correlation between categorical variable? [duplicate]
There are way too many measures starting from chi-square based (phi coefficient) going to less commonly used (Goodman and Kruskal's lambda).
There is a series of four articles starting here on the iss |
31,756 | Example of estimation vs. calibration | Model estimation is the process of picking the best (according to some metric) kind and structure of model. Estimation may include calibration.
Calibration is the process of finding the coefficients that enable a model (the kind and structure of which is already determined) to most closely (according to some metric) reflect a particular known dataset.
So: estimation will set kind, structure and coefficients. Calibration will tweak coefficients, holding kind and structure constant.
Newton's model of motion is fine for most purposes. By calibrating the gravitational coefficient in it, we can make estimates of the mass of the Earth. But it won't work as a model of relativistic motion - that needs the estimation of a different model: there is no recalibration of Newton's model that works for relativistic motion - no coefficient will work, because the model itself is simply the wrong kind and structure. It omits mechanisms and responses that are absolutely crucial, if the model is to be useful.
Similarly with economic models, Paul Krugman's point is that freshwater economists are saying that their model structures are fine, just the coefficients need tweaking. The problem with that is that if their structures are wrong, no amount of tweaking will make the models useful. Only by going back to basics, and re-estimating the whole model, would they incorporate the crucial mechanisms and responses. He argues that they won't do that, because that would require them to recognise that their existing paradigm is inadequate.
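To make the distinction concrete in code: in the toy sketch below (a falling-object example with made-up, noiseless data), the model structure $d = \frac{g}{2}t^2$ is held fixed and only the coefficient $g$ is calibrated by least squares. Estimation would be the prior step of deciding that this quadratic structure, rather than some other form, is the right model at all:

```python
# Toy "falling object" data generated from d = (g/2) * t^2 with g = 9.81.
g_true = 9.81
data = [(t / 10, g_true / 2 * (t / 10) ** 2) for t in range(1, 11)]

# Calibration: the structure d = (g/2) * t^2 is fixed; least squares over g alone.
# Minimizing sum (d - (g/2) t^2)^2 gives g = 2 * sum(d * t^2) / sum(t^4).
g_hat = 2 * sum(d * t ** 2 for t, d in data) / sum(t ** 4 for t, _ in data)
print(g_hat)  # recovers 9.81 on this noiseless data
```

No amount of tuning `g_hat` would help if the data actually followed, say, a cubic law: that mismatch can only be fixed by re-estimating the model structure, which is exactly the point above.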
31,757 | Example of estimation vs. calibration | As the edit changed the meaning of the question a little:
What Krugman described is the following process:
One wants to model something, like the monetary policy
One creates a model and estimates something out of it
The results are for some reason not satisfactory, for example, they counter some widely accepted theory
Not believing that what was estimated is a correct model, one "calibrates" it (tweaking some variables, assumptions, etc.) until the estimations conform to what one believes is the proper answer
For example, one creates a model to estimate sales of a product in a store on a given day of the year. The forecasts for most of the year look plausible, but the estimate looks wrong for the Christmas season (for example, the sales are at a similar level to November's, but they should be bigger). One then calibrates the model, perhaps changing or adding some variables, so that the forecast for December will be bigger than the one previously obtained.
31,758 | Example of estimation vs. calibration | Calibration is comparing two measurements - one of known magnitude or correctness, and one we want to be as close to the first as possible. For example, if we have data on how much of a given item a shop sold on a given day, and we want to calibrate a model that will predict the sales, we give past data to the model and compare its output to the real value (and possibly alter the model until it accurately predicts the data).
Estimation is approximation of the results, even if we don't have all the data. In the same example, estimation would be asking the model what the sales will be in the future (as we don't yet know all the variables that will occur between now and the date of estimation).
So in short, you calibrate the model until it works as correctly as you want it to, and then you use it for estimating what will happen in the future.
31,759 | Logistic regression in R returning NA values | Singularity means that your predictor variables are linearly dependent, i.e. one of the variables can be expressed as a linear combination of the other variables. Seeing that your predictor variables are dummies, you probably encountered the dummy variable trap problem.
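You can exhibit the singularity directly: with an intercept plus a full set of dummies, the dummy columns sum to the intercept column, so the design matrix is rank-deficient. A pure-Python sketch on made-up data (row reduction to count pivots):

```python
def rank(rows, tol=1e-9):
    # Gaussian elimination on a copy of the matrix; the number of pivots is the rank.
    m = [list(r) for r in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > tol), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Two-level factor, four observations: columns = intercept, dummy_A, dummy_B.
X_trap = [[1, 1, 0], [1, 1, 0], [1, 0, 1], [1, 0, 1]]  # dummy_A + dummy_B = intercept
X_ok   = [[1, 1], [1, 1], [1, 0], [1, 0]]              # one dummy dropped

print(rank(X_trap), len(X_trap[0]))  # rank 2 < 3 columns: singular
print(rank(X_ok), len(X_ok[0]))      # rank 2 == 2 columns: fine
```

Dropping one dummy level per factor (R's default treatment coding does this for you) restores full column rank.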
31,760 | How are classifications merged in an ensemble classifier? | I read a clear example from Introduction to Data Mining by Tan et al.
The example claims that if you combine your classifiers with a voting system - that is, classifying a record as the most-voted class - you obtain better performance. However, this example directly uses the output labels of the classifiers, not the predictions (I think you meant probabilities).
Let's have 25 independent classifiers, each with generalization error $e = 1 - \mbox{accuracy} = 0.35$. In order for the ensemble to misclassify a record, at least half of them have to misclassify it.
Everything can be modeled with random variables, but you just have to compute the probability that at least 13 of them misclassify the record
$$\sum_{i=13}^{25}\binom{25}{i}e^i(1-e)^{(25-i)} = 0.06$$
where each term of the summation means that $i$ classifiers misclassify the record and $25-i$ classify it correctly.
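The same tail probability can be reproduced directly (a small Python sketch of the binomial sum above):

```python
import math

e, n = 0.35, 25
# P(at least ceil(n/2) = 13 of the 25 independent classifiers err) -- the
# majority-vote ensemble's error rate.
p_majority_wrong = sum(
    math.comb(n, i) * e ** i * (1 - e) ** (n - i) for i in range(13, n + 1)
)
print(round(p_majority_wrong, 3))  # about 0.06, versus 0.35 for a single classifier
```

The drop from 0.35 to roughly 0.06 only holds under the independence assumption; correlated classifiers gain much less from voting.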
Using the predictions directly, with an average as the combination method, I think it could be a bit more difficult to show the improvement in ensemble performance. However, focusing only on the predictions, and without caring about the output label of the ensemble, averaging more predictions can be seen as an estimator of the real probability. Therefore, adding classifiers should improve the predictions of the ensemble technique.
31,761 | How are classifications merged in an ensemble classifier? | You are missing the fact that "bad" classifiers do not have 0% accuracy; rather, they are not significantly better than random guessing.
This way, good predictions are always the same and accumulate (since there is only one truth), while bad predictions are random noise that averages out.
31,762 | How are classifications merged in an ensemble classifier? | In classification there are generally two ways to ensemble the predictions.
Let's say it's a binary classification problem and you have 3 models to ensemble, called m1, m2 and m3; the training dataset is called train and the testing dataset test. The models have already been fit on train. Then the Python code is as follows.
The first method is to round the average of the predicted labels (a majority vote):

import numpy as np

pred = np.round((m1.predict(test) + m2.predict(test) + m3.predict(test)) / 3).astype(int)

So the output will be a vector of 0s and 1s.
The second method is to ensemble the predicted probability of the positive class from these models, average it, and then decide the class based on either a hard threshold or some other logic.

# Simple average ensemble; however, you can try a weighted average as well
pred_proba = (m1.predict_proba(test)[:, 1]
              + m2.predict_proba(test)[:, 1]
              + m3.predict_proba(test)[:, 1]) / 3
Then iterate through the entire pred_proba vector and assign class 0 or 1 based on a hard threshold of 0.5:

pred = []  # initialize an empty list for the predictions
for x in pred_proba:
    if x > 0.5:
        pred.append(1)
    else:
        pred.append(0)
So pred is the final ensemble prediction.
31,763 | How is election fraud by ballot stuffing possible? | The trouble is that the "known accurate" sample is probably not a random sample from the population of all ballots, as it is made up of 100% (approximately) of the votes from a small collection of specific polling sites, and we don't know how those specific polling sites were selected. If they were randomly selected, and there were enough of them, then you could compare them with the results of other polling places and have some hope of detecting fraud, although the power of whatever tests you might construct might not be high unless you have many hundreds of polling sites in your known accurate sample. Of course, Russia is very big, so I assume they could have thousands of polling places in their known accurate sample.
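One way to make that comparison concrete is a two-proportion z-test between the randomly selected, closely monitored sites and all other sites; a hypothetical sketch (all counts below are invented for illustration):

```python
from math import erf, sqrt

def two_prop_z(x1, n1, x2, n2):
    """One-sided two-proportion z-test: is share 2 larger than share 1?"""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled share under the null of no difference
    z = (p2 - p1) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    p_one_sided = 0.5 * (1 - erf(z / sqrt(2)))  # P(Z >= z)
    return z, p_one_sided

# 52% for the winner at monitored sites vs 64% at the unmonitored ones
z, p = two_prop_z(5200, 10000, 64000, 100000)
print(z, p)  # a huge z (tiny p) flags the discrepancy
```

With enough monitored sites, even a modest gap between the two shares becomes highly significant, which is why the selection of those sites matters so much.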
Rigging an election sometimes does produce huge statistical anomalies. Often, however, the government has little interest in reporting enough information for people to find that out, and in many countries the press is largely compliant with government wishes and won't really investigate.
If it's done with some care, though, it can be hard to tell. Imagine Chicago in the 1960s, which was a) very large and b) heavily Democratic. If an extra 4-6% Democratic ballots were added across the city, consistently, year after year, who could tell? (Ignoring for the sake of the example the pointlessness of such an effort.)
Here's a link to an interesting look at the 2009 Iranian election that reviews some techniques (good and bad) that can be used even in situations where you have no clean polling place data: Thomas Lotze
31,764 | How is election fraud by ballot stuffing possible? | This is a complex topic. Elections are a complex societal mechanism, so you should not expect there to be any simple "silver bullet" solution to election fraud.
There are many kinds of election fraud. Let me draw one distinction between fraud where the attacker inserts extra ballots not associated with any eligible voter (call this "ballot stuffing"), vs. those where the attacker causes correctly-cast votes to be miscounted. It sounds like you are concerned with the former, but random sampling is generally not a very effective way of dealing with ballot stuffing, for a variety of reasons. The best defenses against ballot stuffing tend to be procedural, not statistical.
You were vague about the details of what you were proposing, but I infer that you are proposing the following: at some point after the polls close, randomly select 5% of the ballots from among all the ballots in the ballot box at that time, manually count them, and see if the candidate with the most votes among your sample is the same as the officially declared candidate (the one who allegedly has the largest number of votes among all of the ballots).
This proposal has a number of serious shortcomings, which means that it likely will not be effective at detecting large-scale "ballot stuffing":
It does nothing to detect any ballot stuffing that may have occurred before the random sampling takes place. If dishonest poll workers stuff extra ballots into the ballot box before the polls open, or during the day, or after the polls close but before the ballots are sampled, you'll never detect that sort of ballot stuffing.
It does no good to take a random sample of a population of ballots that is not representative of the will of the populace. Counting 5% of the stuffed ballots will not give you any more accuracy than counting 100% of the stuffed ballots.
Your proposal is awfully vague about who will perform the sampling and recounting, and when they will do it. If you had in mind that, at every polling station, the poll workers would be responsible for sampling 5% of their ballots and counting them and reporting their counts, then this does nothing to detect misbehavior by dishonest poll workers; if poll workers are dishonest, they can conduct this stage dishonestly or lie about the results of this stage. On the other hand, if you had in mind that the ballots would all be transferred to some central location where election workers perform the sampling, it introduces a different set of problems; it does nothing to detect ballot stuffing that may occur during the day or during transit (which is probably the most common form of ballot stuffing), and it also doesn't work if those workers are dishonest.
Your proposal doesn't say anything about how to provide transparency to the public. An essential requirement for elections is that they must provide transparency. As Dan Wallach has written, the winners almost never complain about the results of the election; an election has to convince the losers, and their supporters. If random sampling and recounting is done in the polling places, it is too hard for concerned members of the public to observe this. If it is done at a central location, at a fixed time, then observation becomes possible -- but we need to preserve the chain of custody for the physical ballots until then, and we need to make sure no ballots have been stuffed (that every ballot comes from an eligible voter).
Finally, the statistical power of this approach is less than state-of-the-art methods for election auditing. With your scheme, you need to sample $O(1/\epsilon^2)$ ballots to detect errors where an $\epsilon$ fraction of the ballots have been miscounted. State-of-the-art schemes only need to sample $O(1/\epsilon)$ ballots.
Of these, the first is probably the most severe for the Russian application you mention.
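The gap between the two sampling rates in the last shortcoming is easy to see numerically. A rough sketch with constants simplified: the ~3/eps audit rule comes from requiring a 95% chance of sampling at least one miscounted ballot, while estimating a vote share to within +-eps by simple random sampling needs on the order of 1/eps^2 ballots.

```python
from math import ceil, log

alpha = 0.05  # 5% chance of missing the miscount entirely
for eps in (0.10, 0.05, 0.01):
    # (1 - eps)^n <= alpha  =>  n >= log(alpha) / log(1 - eps) ~ 3/eps
    n_audit = ceil(log(alpha) / log(1 - eps))  # O(1/eps)
    n_poll = ceil(1 / eps**2)                  # O(1/eps^2), constants omitted
    print(f"eps={eps:.2f}: audit ~{n_audit} ballots, naive sampling ~{n_poll}")
```

As eps shrinks, the O(1/eps^2) scheme becomes drastically more expensive than the O(1/eps) audit.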
All of these problems can be solved, given appropriate design of the election mechanism (e.g., selection of poll workers, publicly observable processes, careful design of the audit mechanism), but it takes care. There has been a tremendous amount of work on this problem. If you are interested, I urge you to read some of the following references:
Arlene Ash, Steve Pierson, and Philip Stark. Thinking outside the urn: Statisticians make their marks on U.S. Ballots. Amstat News.
Mark Lindeman, Mark Halvorson, Pamela Smith, Lynn Garland, Vittorio Addona, Dan McCrea. Principles and Best Practices for Post-Election Audits. See also their website, electionaudits.org.
David Jefferson, Elaine Ginnold, Kathleen Midstokke, Kim Alexander, Philip Stark, Amy Lehmkuhl. Post Election Audit Standards Report--Evaluation of Audit Sampling Models and Options for Strengthening California's Manual Count.
Lawrence Norden, Aaron Burstein, Joseph Lorenzo Hall, Margaret Chen. Post-election audits: Restoring trust in elections.
Philip Stark. The Status and Near Future of Post-Election Auditing.
Philip Stark's papers. He is a statistician and doing some of the best work on election auditing.
Andrew W. Appel. Effective Audit Policy for Voter-Verified Paper Ballots. 2007 Annual Meeting of the American Political Science Association.
As you can see, the statistics community is making important contributions to this topic.
31,765 | SVM parameter selection | Grid search is slow as it spends a lot of time investigating hyper-parameter settings that are nowhere near optimal. A better solution is the Nelder-Mead simplex algorithm, which doesn't require calculation of gradient information and is straightforward to implement (there should be enough information on the Wikipedia page). There may also be some Java code in the Weka toolbox; however, I work in MATLAB and haven't looked at Weka in any great detail.
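As a sketch of how Nelder-Mead tuning might look in practice (using SciPy; the objective here is a synthetic stand-in, where a real implementation would train and cross-validate an SVM at each evaluated point):

```python
from scipy.optimize import minimize

def cv_error(theta):
    """Stand-in for cross-validated SVM error at (C, gamma) = (2**t0, 2**t1).

    Synthetic smooth bowl with its optimum at C = 2**3, gamma = 2**-5.
    """
    log_c, log_g = theta
    return 0.10 + 0.01 * ((log_c - 3) ** 2 + (log_g + 5) ** 2)

# Nelder-Mead needs only function evaluations, no gradients
res = minimize(cv_error, x0=[0.0, 0.0], method="Nelder-Mead")
print("C = %.2f, gamma = %.4f" % (2 ** res.x[0], 2 ** res.x[1]))
```

Searching in log-space (powers of 2 here) is the usual choice, since C and gamma vary over orders of magnitude.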
SMO is an algorithm for finding the model parameters, rather than the hyper-parameters.
31,766 | SVM parameter selection | The Nelder-Mead simplex method can involve as many function evaluations as a simple grid search. Usually the error surface is smooth enough close to the optimal parameter values that a coarse grid search followed by a finer one in a smaller region should suffice.
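The coarse-then-fine idea can be sketched in a few lines (the score function below is a stand-in for cross-validated accuracy at a given hyper-parameter value):

```python
import numpy as np

def coarse_to_fine(score, lo, hi, rounds=2, points=5):
    """Scan a log-spaced grid, then rescan a finer grid around the best point."""
    best = lo
    for _ in range(rounds):
        grid = np.logspace(np.log10(lo), np.log10(hi), points)
        best = max(grid, key=score)
        lo, hi = best / 3, best * 3  # shrink the search window around best
    return best

# Toy score surface with its peak at C = 10
best_c = coarse_to_fine(lambda c: -(np.log10(c) - 1) ** 2, 1e-3, 1e3)
print(best_c)
```

Two rounds of 5 points each cost 10 evaluations here, versus 25 for a single 5x5 grid over two parameters at comparable resolution.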
If you're interested in gradient based optimization of C and gamma, there are methods like optimizing the radius-margin bounds or optimizing the error rate on a validation set. The computation of the gradient of the objective function involves something like one SVM train but a simple gradient descent may involve only a few dozen iterations. (Look at http://olivier.chapelle.cc/ams/ for an article and a Matlab implementation.)
31,767 | SVM parameter selection | Here is an entry in Alex Smola's blog related to your question
Here is a quote:
[...] pick, say 1000 pairs (x,x’) at random from your dataset, compute the distance of all such pairs and take the median, the 0.1 and the 0.9 quantile. Now pick λ to be the inverse of any of these three numbers. With a little bit of crossvalidation you will figure out which one of the three is best. In most cases you won’t need to search any further.
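A small sketch of that heuristic, using random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))  # stand-in for your dataset

# Sample 1000 random pairs and compute their Euclidean distances
i = rng.integers(0, len(X), size=1000)
j = rng.integers(0, len(X), size=1000)
d = np.linalg.norm(X[i] - X[j], axis=1)
d = d[d > 0]  # drop the occasional self-pair

# Candidate kernel scales: inverses of the 0.1 / 0.5 / 0.9 distance
# quantiles, to be compared against each other by cross-validation
candidates = 1.0 / np.quantile(d, [0.1, 0.5, 0.9])
print(candidates)
```

The median-distance version of this is often called the "median heuristic" for RBF kernel widths.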
31,768 | How to deal with identical p-values with the Benjamini-Hochberg method for correcting for multiple testing | Verifying also in the multtest package from Bioconductor, I would suggest to give them the same rank - and very importantly - increment the rank by one for the following p-value(s) rather than using their index+1 in an array! This would have the following result:
Considering your example, multtest's BH would rank $r_1$: 1,
$r_2$: 2,
$r_3$: 2,
$r_4$: 2,
$r_5$: 3
rather than
$r_2$: 2,
$r_3$: 2,
$r_4$: 2,
$r_5$: 5
31,769 | How to deal with identical p-values with the Benjamini-Hochberg method for correcting for multiple testing | A possible ad hoc solution is to give repeated p-values the same rank.
31,770 | How to deal with identical p-values with the Benjamini-Hochberg method for correcting for multiple testing | All answers here are misleading:
considering your p-values are:
p1 = 0.001 p2 = 0.03 p3 = 0.03 p4 = 0.03 p5 = 0.09
Then, BH would rank them: r1 = 1, r2 = 4, r3 = 4, r4 = 4, r5 = 5
Means the ranks for identical p-values will be the max index of them in the sorted list of p-values.
Verified with R using the p.adjust function:
p.adjust(c(0.001, 0.03, 0.03, 0.03, 0.09), method = 'BH')
Yielding the next adjusted p-values:
0.0050 0.0375 0.0375 0.0375 0.0900
Note that for the identical p-values, we got an adjusted p-value of:
0.03 * n / k where n = 5 (as the number of p-values) and k = 4 which is the max index of the identical p-values in the sorted list of p-values...
Accordingly, the adjusted values of alpha are:
alpha * k / n yielding:
q1 = 0.01 q2 = 0.04 q3 = 0.04 q4 = 0.04 q5 = 0.05
Added this part after I was asked:
multtest does the same thing using the same ranks as I mentioned:
using mt.rawp2adjp(c(0.001, 0.03, 0.03, 0.03, 0.09), 'BH') yields the same p-values as the p.adjust function, therefore it ranks identical p-values by the max index of them in the sorted list of p-values. The output of the relevant function from multtest is:
rawp BH
[1,] 0.001 0.0050
[2,] 0.030 0.0375
[3,] 0.030 0.0375
[4,] 0.030 0.0375
[5,] 0.090 0.0900
Where the rawp column stands for the original p-values and BH stands for the adjusted p-values. From the adjusted p-values you can calculate the respective alphas as I did above.
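For readers outside R, the same adjustment can be reproduced in a few lines; a Python sketch mirroring p.adjust(..., method = 'BH'):

```python
import numpy as np

def bh_adjust(p):
    """BH adjusted p-values; ties automatically get their largest index."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    order = np.argsort(p)
    adj = p[order] * n / np.arange(1, n + 1)
    adj = np.minimum.accumulate(adj[::-1])[::-1]  # step-up running minimum
    out = np.empty(n)
    out[order] = np.minimum(adj, 1.0)
    return out

print(bh_adjust([0.001, 0.03, 0.03, 0.03, 0.09]))
# matches R: 0.0050 0.0375 0.0375 0.0375 0.0900
```

The running minimum taken from the largest rank downwards is what makes the tied p-values inherit the adjustment of their largest index, with no special-casing needed.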
31,771 | How to deal with identical p-values with the Benjamini-Hochberg method for correcting for multiple testing | These procedures can be confusing. This nice post by spätzle explains the Benjamini-Hochberg procedure very well.
The Benjamini-Hochberg method is as follows:
Order the p-values $p_{(1)},...,p_{(m)}$ and then respectively the hypotheses $H_{0,(1)},...,H_{0,(m)}$
Mark as $i_0$ the largest $i$ for which $p_{(i)}\le \frac{i}{m}\alpha$
Reject $H_{0,(1)},...,H_{0,(i_0)}$
So if you have several $p_{(i)}$ with the same value, then all that counts is the largest rank. If $p_{(i)}\le \frac{i}{m}\alpha$ holds at rank $i$, then it also holds at every tied rank $j>i$ with $p_{(j)}=p_{(i)}$, since the threshold $\frac{j}{m}\alpha$ only grows with $j$.
See in the image below for a visual explanation of the method. What counts is the highest ranked p-value that is still below the line. In this case it is the 10-th p-value. All the previous hypothesis tests will be rejected (even if they are above the line). So when you have identical p-values, then what counts is the highest rank.
The following ordered set of p-values could be an example:
$p_1$ = 0.01
$p_2$ = 0.03
$p_3$ = 0.03
$p_4$ = 0.03
$p_5$ = 0.09
Assuming a threshold of $\alpha$ = 0.05, what I have to compare my values to are:
$q_1$ = 0.01
$q_2$ = 0.02
$q_3$ = 0.03
$q_4$ = 0.04
$q_5$ = 0.05
And thus I accept the second test and reject the third and fourth even if they have the same original p-value.
You can not have the situation where you accept the second ranked hypothesis while rejecting the third and fourth. You decide on some boundary and all the tests below it are rejected and all the tests above it are accepted.
In the BH procedure the second test for which you got $p_2 > q_2$ will be rejected as well.
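To make the "largest rank wins" rule concrete, here is a small Python sketch of the step-up decision (my own illustration, using the example p-values above):

```python
def bh_reject(pvals, alpha=0.05):
    """Return the indices of hypotheses rejected by Benjamini-Hochberg:
    find the largest rank i with p_(i) <= (i/m)*alpha, reject all up to it."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    largest = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            largest = rank
    return sorted(order[:largest])

# p2 = 0.03 exceeds its own threshold q2 = 0.02, yet it is still rejected,
# because the tied p-value at rank 4 falls below q4 = 0.04:
print(bh_reject([0.01, 0.03, 0.03, 0.03, 0.09]))  # → [0, 1, 2, 3]
```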
31,772 | How to interpret two standard deviations below the mean of a count variable being less than zero? | The short answer is no, it is not an error.
As @whuber notes, there is nothing surprising (at least to a statistician) about the fact that two standard deviations below the mean of a count variable could be a negative value. Thus, to answer your question, perhaps it would be more useful to ponder why you might find the result surprising.
Why you might be surprised
Many introductory statistics textbooks show how you can use the mean, standard deviation, and the normal distribution to make claims like "approximately 2.5% of the sample is expected to fall more than two standard deviations below the mean". You may have generalised this idea to a variable where the assumptions of such a procedure are invalid.
If you did this, you would be saying to yourself: "this is strange, how is it possible for 2.5% of the data to have counts below -0.6".
Estimating percentiles for counts
Your variable is not normally distributed, it is a count variable. It is discrete; it is a non-negative integer. Thus, in order to estimate the percentage that is greater than or equal to a given value, you need an approach suited to counts. A basic approach would involve using the sample data to estimate such percentiles. More sophisticated approaches could involve developing a model of the distribution suited to counts, justified by the data and knowledge of the phenomena, and estimated using the sample data.
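As a concrete (hypothetical) illustration, suppose the counts were roughly Poisson with mean 0.7. The normal rule of thumb puts "two standard deviations below the mean" at a negative value, while the actual 2.5th percentile of the count distribution is 0. A small Python sketch:

```python
import math

def poisson_quantile(q, lam):
    """Smallest k with P(X <= k) >= q for X ~ Poisson(lam)."""
    k = 0
    pmf = math.exp(-lam)
    cdf = pmf
    while cdf < q:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

lam = 0.7                      # hypothetical mean count
sd = math.sqrt(lam)            # for a Poisson variable, variance = mean
print(round(lam - 2 * sd, 2))  # → -0.97: the normal-style lower bound is negative
print(poisson_quantile(0.025, lam))  # → 0: the real 2.5th percentile is a valid count
```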
31,773 | How to generate nD point process? | You don't need to grid. You could just draw a Poisson count, $n$, for the total number of points and then simulate $n$ iid points uniform on the region. For a square, you can draw $x_i$ and $y_i$ as independent uniforms. (For a more complex region, the simplest thing would be to embed it in a square, simulate for the square, and then just keep the points in the target region.)
The same thing can be done in higher dimensions. You draw the count from a Poisson with mean $\lambda V$ where $\lambda$ is the rate for the Poisson process and $V$ is the volume of the region, and then simulate that number of iid uniform points from the region.
And so 1: yes, 2: yes (if the grid regions are the same area), and 3: yes.
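The recipe above can be sketched in a few lines. This Python version (my own, using Knuth's simple Poisson sampler) works in any dimension, and the last line shows the embed-and-discard trick for a non-rectangular region:

```python
import math, random

def poisson_draw(lam):
    """Knuth's method for sampling Poisson(lam); fine for moderate lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def csr_in_box(rate, lows, highs):
    """Homogeneous Poisson process with intensity `rate` in an axis-aligned box."""
    volume = math.prod(hi - lo for lo, hi in zip(lows, highs))
    n = poisson_draw(rate * volume)  # total count ~ Poisson(rate * volume)
    return [tuple(random.uniform(lo, hi) for lo, hi in zip(lows, highs))
            for _ in range(n)]

random.seed(1)
cube_pts = csr_in_box(50, (0, 0, 0), (1, 1, 1))          # 3-D unit cube
disc_pts = [p for p in csr_in_box(50, (-1, -1), (1, 1))  # unit disc by rejection
            if p[0] ** 2 + p[1] ** 2 <= 1]
```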
31,774 | How to generate nD point process? | @Karl gave a good answer but it deserves an explanation.
A homogeneous Poisson point process (or "complete spatial randomness," CSR) is determined by two intuitive properties:
The probability that a point will be located within a small region $dA$ is directly proportional to $dA$ (up to second order in the hypervolume of $dA$). Note that this immediately implies the expected number of points in any finite region of hypervolume $A$ is proportional to $A$.
The points are located independently of each other.
Assuming the points within each cell are introduced independently of those in other cells, using a Poisson distribution of counts in each cell assures overall independence. To check (1), we lose no generality by assuming $dA$ is wholly located within a grid cell (because the probability that it straddles two cells is vanishingly small). That reduces the check to verifying that points are generated with the same intensity within the cells. Using a Poisson distribution of counts with expectation proportional to each cell's hypervolume assures this, as noted in (1).
This argument is independent of dimension. In fact, it would apply to any finite dimensional manifold with a volume form (such as the surface of a sphere in 3D) in which a region has been partitioned into measurable "cells" of arbitrary shape. The reason for using a grid, of course, is that grid cells can be addressed in $O(1)$ computational time and it's simple to generate random points within a rectangular region (merely by generating random coordinates within it). If, as a preliminary matter, an irregular region is preprocessed to identify the cells it contains, this leads to a computationally efficient way to simulate a CSR process. In higher dimensions the preprocessing for an arbitrary or complex region could be messy and time-consuming, but for simply defined manifolds and regions within them (such as spheres or their boundaries) there's no problem.
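To connect this to the gridded construction in the question: giving each cell an independent Poisson count proportional to its area, with points uniform inside the cell, yields the same CSR process (by the superposition property, the total count is Poisson with mean intensity times total area). A Python sketch of the cell-by-cell version (my own illustration):

```python
import math, random

def poisson_draw(lam):
    """Knuth's simple Poisson sampler."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def csr_via_grid(rate, nx, ny, cell_w, cell_h):
    """CSR built cell by cell: independent Poisson(rate * cell_area) counts,
    points uniform within each cell."""
    points = []
    for i in range(nx):
        for j in range(ny):
            for _ in range(poisson_draw(rate * cell_w * cell_h)):
                points.append((random.uniform(i * cell_w, (i + 1) * cell_w),
                               random.uniform(j * cell_h, (j + 1) * cell_h)))
    return points

random.seed(2)
pts = csr_via_grid(rate=100, nx=4, ny=4, cell_w=0.25, cell_h=0.25)
# E[number of points] = 100 * (1.0 * 1.0) = 100 over the unit square
```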
31,775 | How to correctly use the GPML Matlab code for an actual (non-demo) problem? | The GP does a good job for your problem's training data. However, it's not so great for the test data. You've probably already run something like the following yourself:
load('../XYdata_01_01_ab.mat');
for N = 1 : 25
    % normalize using the training-sequence statistics
    m = mean(Y1(N,:));
    s = std(Y1(N,:));
    Y1(N,:) = 1/s * (Y1(N,:) - m);
    Y2(N,:) = 1/s * (Y2(N,:) - m);

    % covariance, likelihood and initial hyper-parameters
    covfunc = @covSEiso;
    ell = 2;
    sf = 1;
    hyp.cov = [log(ell); log(sf)];
    likfunc = @likGauss;
    sn = 1;
    hyp.lik = log(sn);

    % optimise the hyper-parameters, then predict on the training inputs
    % (note: m is reused below for the GP predictive mean)
    hyp = minimize(hyp, @gp, -100, @infExact, [], covfunc, likfunc, X1', Y1(N,:)');
    [m s2] = gp(hyp, @infExact, [], covfunc, likfunc, X1', Y1(N,:)', X1');

    figure;
    subplot(2,1,1); hold on;
    title(['N = ' num2str(N)]);
    f = [m+2*sqrt(s2); flipdim(m-2*sqrt(s2),1)];
    x = [1:length(m)];
    fill([x'; flipdim(x',1)], f, [7 7 7]/8);
    plot(Y1(N,:)', 'b');
    plot(m, 'r');
    mse_train = mse(Y1(N,:)' - m);

    % predict on the test inputs
    [m s2] = gp(hyp, @infExact, [], covfunc, likfunc, X1', Y1(N,:)', X2');
    subplot(2,1,2); hold on;
    f = [m+2*sqrt(s2); flipdim(m-2*sqrt(s2),1)];
    x = [1:length(m)];
    fill([x'; flipdim(x',1)], f, [7 7 7]/8);
    plot(Y2(N,:)', 'b');
    plot(m, 'r');
    mse_test = mse(Y2(N,:)' - m);

    disp(sprintf('N = %d -- train = %5.2f test = %5.2f', N, mse_train, mse_test));
end
By tuning the hyperparameters manually rather than with the minimize function, it is possible to balance the train and test error somewhat, but tuning the method by looking at the test error is not what you're supposed to do. I think what's happening is heavy overfitting to the three subjects that generated the training data. No method will do a good job here out of the box, and how could it? You provide the training data, so the method tries to get as good as possible on the training data without overfitting. And in fact, it doesn't overfit in the classical sense: it doesn't overfit to the data, but it overfits to the three training subjects. E.g., cross-validating with the training set would tell us that there's no overfitting. Still, your test set will be explained poorly.
What you can do is:
Get data from more subjects for training. This way your fourth person will be less likely to look like an "outlier" as it does currently. Also, you have just one sequence of each person, right? Maybe it would help to record the sequence multiple times.
Somehow incorporate prior knowledge about your task that would keep a method from overfitting to specific subjects. In a GP that could be done via the covariance function, but it's probably not that easy to do ...
If I'm not mistaken, the sequences are in fact time-series. Maybe it would make sense to exploit the temporal relations, for instance using recurrent neural networks.
There's most definitely more, but those are the things I can think of right now.
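One way to see subject-level overfitting directly is to cross-validate across subjects rather than across pooled samples: hold each person out in turn, train on the others, and look at the spread of held-out errors. A minimal sketch in Python (the subject names and data layout are hypothetical, not part of GPML):

```python
def leave_one_subject_out(subjects):
    """Yield (training subjects, held-out subject) pairs, one per subject."""
    for held_out in subjects:
        yield [s for s in subjects if s != held_out], held_out

subjects = ["person1", "person2", "person3", "person4"]
splits = list(leave_one_subject_out(subjects))
for train, test in splits:
    # fit the GP on the pooled sequences of `train`,
    # then report the MSE on the sequence of `test`
    print(train, "->", test)
```

A large gap between within-subject and across-subject errors here would confirm that the model captures the training individuals rather than the task.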
31,776 | How to correctly use the GPML Matlab code for an actual (non-demo) problem? | I think the problem may be one of model mis-specification. If your targets are angles wrapped to +-180 degrees, then the "noise process" for your data may be sufficiently non-Gaussian that the Bayesian evidence is not a good way to optimise the hyper-parameters. For instance, consider what happens when "noise" causes the signal to wrap around. In that case it may be wise to perform model selection by minimising the cross-validation error (there is a public domain implementation of the Nelder-Mead simplex method here if you don't have the optimisation toolbox). The cross-validation estimate of performance is not so sensitive to model mis-specification, as it is a direct estimate of test performance, whereas the marginal likelihood of the model is the evidence in support of the model given that the model assumptions are correct. See the discussion starting on page 123 of Rasmussen and Williams' book.
Another approach would be to re-code the outputs so that a Gaussian noise model is more appropriate. One thing you could do is some form of unsupervised dimensionality reduction, as there are non-linear relationships between your targets (there are only a limited number of ways in which a body can move), so there will be a lower-dimensional manifold that your targets live on, and it would be better to regress the co-ordinates of that manifold rather than the angles themselves (there may be fewer targets that way as well).
Also some sort of Procrustes analysis might be a good idea to normalise the differences between subjects before training the model.
You may find some of the work done by Neil Lawrence on human pose recovery of interest. I remember seeing a demo of this at a conference a few years ago and was very impressed.
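The wrap-around problem is easy to demonstrate numerically: for angles clustered around the ±180° cut, the arithmetic mean (which a Gaussian noise model implicitly trusts) lands on the opposite side of the circle, while a circular (resultant-vector) mean does not. A small Python illustration of my own:

```python
import math

def circular_mean_deg(angles):
    """Mean direction of angles in degrees, via the resultant vector."""
    s = sum(math.sin(math.radians(a)) for a in angles)
    c = sum(math.cos(math.radians(a)) for a in angles)
    return math.degrees(math.atan2(s, c))

# Four noisy measurements of a direction at the +/-180 degree wrap point:
obs = [179.0, -179.0, 178.0, -178.0]
print(sum(obs) / len(obs))                 # → 0.0: the arithmetic mean points backwards
print(abs(round(circular_mean_deg(obs))))  # → 180: the true direction (up to sign)
```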
31,777 | How does one appropriately apply cross-validation in the context of selecting learning parameters for support vector machines? | If you learn the hyper-parameters in the full training data and then cross-validate, you will get an optimistically biased performance estimate, because the test data in each fold will already have been used in setting the hyper-parameters, so the hyper-parameters are selected in part because they suit the data in the test set. The optimistic bias introduced in this way can be unexpectedly large. See Cawley and Talbot, "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", JMLR 11(Jul):2079−2107, 2010 (particularly section 5.3). The best thing to do is nested cross-validation. The basic idea is that you cross-validate the entire method used to generate the model, so treat model selection (choosing the hyper-parameters) as simply part of the model fitting procedure (where the parameters are determined) and you can't go too far wrong.
If you use cross-validation on the training set to determine the hyper-parameters and then evaluate the performance of a model trained using those parameters on the whole training set, using a separate test set, that is also fine (provided you have enough data for reliably fitting the model and estimating performance using disjoint partitions).
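A bare-bones sketch of the nested scheme in Python (the toy "model", a shrunken training mean, is purely illustrative): the inner loop chooses the hyper-parameter using the outer training portion only, and the outer loop scores the entire select-then-fit procedure.

```python
import random

def model_mse(h, train, test):
    """Toy model: predict h * (training mean); return MSE on the test part."""
    mean = sum(train) / len(train)
    return sum((y - h * mean) ** 2 for y in test) / len(test)

def kfold_mse(data, k, h):
    folds = [data[i::k] for i in range(k)]
    return sum(
        model_mse(h, [y for j, f in enumerate(folds) if j != i for y in f], test)
        for i, test in enumerate(folds)) / k

def nested_cv_mse(data, grid, k_outer=5, k_inner=5):
    outer = [data[i::k_outer] for i in range(k_outer)]
    scores = []
    for i, test in enumerate(outer):
        train = [y for j, f in enumerate(outer) if j != i for y in f]
        best_h = min(grid, key=lambda h: kfold_mse(train, k_inner, h))  # selection: inner CV only
        scores.append(model_mse(best_h, train, test))                   # assessment: untouched fold
    return sum(scores) / len(scores)

random.seed(0)
data = [random.gauss(1.0, 1.0) for _ in range(100)]
estimate = nested_cv_mse(data, grid=[0.0, 0.5, 1.0])
print(estimate)  # performance of the whole select-then-fit procedure
```

The key point is that each outer test fold never influences which hyper-parameter wins, so the averaged outer score is an honest estimate of the complete procedure.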
31,778 | How does one appropriately apply cross-validation in the context of selecting learning parameters for support vector machines? | I don't think cross-validation is misused in the case of LIBSVM because it is done on the testing data level. All it does is k-fold cross-validation and a search for the best parameters for the RBF kernel. Let me know if you disagree.
31,779 | The estimated logarithm of the hazard ratio is approximately normally distributed | The fact that this is approximately normally distributed relies on the central limit theorem (CLT), so will be a better approximation in large samples. The CLT works better for the log of any ratio (risk ratio, odds ratio, hazard ratio..) than for the ratio itself.
In suitably large samples, I think this is a good approximation to the variance in two situations:
The hazard in each group is constant over time (regardless of the hazard ratio)
The proportional hazards assumption holds and the hazard ratio is close to 1
I think it may become a fairly crude assumption in situations far from these, i.e. if the hazards vary considerably over time and the hazard ratio is far from 1. Whether you can do better depends on what information is available. If you have access to the full data you can fit a proportional hazards model and get the variance of the log hazard ratio from that. If you only have the info in a published paper, various other approximations have been developed by meta-analysts. These two references are taken from the Cochrane Handbook:
M. K. B. Parmar, V. Torri, and L. Stewart (1998). "Extracting summary statistics to perform meta-analyses of the published literature for survival endpoints." Statistics in Medicine 17 (24):2815-2834.
Paula R. Williamson, Catrin Tudur Smith, Jane L. Hutton, and Anthony G. Marson. "Aggregate data meta-analysis with time-to-event outcomes". Statistics in Medicine 21 (22):3337-3351, 2002.
In Parmar et al, the expression you give would follow from using observed numbers in place of expected in their equation (5), or combining equations (6) and (12). Equations (5) and (6) are based on logrank methods. They reference Kalbfleisch & Prentice for equation (12) but I don't have that to hand, so maybe someone who does might like to check it and add to this.
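If the expression being discussed is of the common observed-events form, the large-sample approximation is $\operatorname{Var}(\log \text{HR}) \approx 1/O_1 + 1/O_2$, where $O_1,O_2$ are the observed event counts in the two groups. A small Python sketch with made-up study figures (this is an illustration of that standard approximation, not a reconstruction of the question's exact formula):

```python
import math

def hr_ci(hr, events1, events2, z=1.96):
    """Confidence interval for a hazard ratio, using se(log HR) ~ sqrt(1/O1 + 1/O2)
    and approximate normality of the log hazard ratio (large-sample result)."""
    se = math.sqrt(1.0 / events1 + 1.0 / events2)
    log_hr = math.log(hr)
    return math.exp(log_hr - z * se), math.exp(log_hr + z * se)

# Hypothetical trial: estimated HR = 0.75, with 40 and 50 events per arm
lo, hi = hr_ci(0.75, 40, 50)
print(round(lo, 2), round(hi, 2))  # → 0.49 1.14
```

Note the interval is symmetric on the log scale but asymmetric around the hazard ratio itself, which is exactly the CLT-works-better-on-the-log point made above.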
31,780 | General approaches to model car traffic in a parking garage | The field relevant to this problem is queueing theory; a particular sub-field is birth-death processes. An article that, in my opinion, is helpful for your task is R.C. Larson and K. Sasanuma (2010), "Congestion Pricing: A Parking Queue Model"; following the links in its references will give more ideas on where to proceed.
Note that the R package queueing has recently been released (with a misprint in the title, however). Finally, I think that this link for queueing software could be helpful.
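As a toy illustration of the birth-death/queueing framing (a sketch with invented numbers, not taken from the cited paper): a garage with a fixed number of spaces, Poisson arrivals and exponential parking durations is the classic M/M/c/c loss system, whose "garage full" probability is given by the Erlang B formula.

```python
def erlang_b(offered_load, n_spaces):
    """Blocking probability of an M/M/c/c loss system (Erlang B),
    via the standard numerically stable recursion.
    offered_load = arrival_rate / departure_rate, in erlangs."""
    b = 1.0
    for k in range(1, n_spaces + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Hypothetical numbers: cars arrive at 30/hour and stay 2 hours on average
# (offered load = 60 erlangs); how often would a 70-space garage be full?
p_full = erlang_b(offered_load=60, n_spaces=70)
```

This only covers the stationary loss model; time-varying arrival rates (rush hours) would push you toward the birth-death and simulation machinery discussed in the queueing literature above.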
31,781 | General approaches to model car traffic in a parking garage | Predicting hourly data has become my main interest. This problem arises routinely in call-center forecasting. One needs to be concerned with hourly patterns within the day, different daily patterns across the week, and seasonal patterns across the year (monthly/weekly indicators). In addition there can be, and I have seen, interaction between hourly patterns and daily patterns. Transfer functions (a generalization/superset of regression for time series data) can easily accommodate the structures mentioned. Additionally, events during the year (Christmas, Easter, etc.) may need to be included using lead, contemporaneous and/or lag structures. In time series analysis we need to validate, via intervention detection schemes, that there are no pulses, level/step shifts, seasonal pulses and/or local time trends remaining in the error process suggesting an augmentation to the model. If the residual series suggests autoregressive structure, then one simply adds a suitable ARIMA structure. Care should be taken when selecting a resource to deal with this problem. I recently analyzed and developed forecasts for a similar problem: the number of passengers in the Paris subway system by hour and by day. IMHO this is a problem of constructing a useful equation from the data, which can then be used to simulate possible scenarios, which in turn can be used to evaluate queue length etc.
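A minimal sketch of the first step (entirely synthetic data, not the Paris analysis): capture the hour-of-day and day-of-week patterns with a dummy-variable regression; event lead/lag effects and ARIMA errors would then be layered on top.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hours = 24 * 7 * 8                       # eight weeks of hourly counts
t = np.arange(n_hours)
hour, dow = t % 24, (t // 24) % 7

# Synthetic arrivals: a daily cycle plus quieter weekends plus noise
truth = 50 + 30 * np.sin(2 * np.pi * hour / 24) - 15 * (dow >= 5)
y = truth + rng.normal(0, 5, n_hours)

# Design matrix: intercept + hour-of-day dummies + day-of-week dummies
X = np.column_stack(
    [np.ones(n_hours)]
    + [(hour == h).astype(float) for h in range(1, 24)]
    + [(dow == d).astype(float) for d in range(1, 7)]
)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
```

With real data you would then inspect the residuals for remaining autoregressive structure, pulses and level shifts, as described above.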
31,782 | How can I speed up calculation of the fixed effects in a GLMM? | It should help to specify starting values, though it's hard to know how much. As you're doing simulation and bootstrapping, you should know the 'true' values or the un-bootstrapped estimates or both. Try using those in the start = option of glmer.
You could also consider looking into whether the tolerances for declaring convergence are stricter than you need. I'm not clear how to alter them from the lme4 documentation, though.
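To see why starting values matter, here is a toy illustration (plain gradient descent on a least-squares problem with made-up data, not lme4's optimizer): starting near the solution needs far fewer iterations to reach the same tolerance.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3))
x_true = np.array([1.5, -2.0, 0.5])
b = A @ x_true + rng.normal(0, 0.1, size=100)

def gd_iters(x0, tol=1e-8, max_iter=100_000):
    """Iterations of gradient descent on ||Ax - b||^2 until the gradient norm < tol."""
    step = 1.0 / (2 * np.linalg.eigvalsh(A.T @ A).max())  # safe fixed step size
    x = np.asarray(x0, dtype=float).copy()
    for i in range(max_iter):
        g = 2 * A.T @ (A @ x - b)
        if np.linalg.norm(g) < tol:
            return i
        x -= step * g
    return max_iter

cold_iters = gd_iters(np.zeros(3))   # default start
warm_iters = gd_iters(x_true)        # start at the 'true' simulation values
```

The same logic applies to glmer: each bootstrap refit starts close to where the last one ended.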
31,783 | How can I speed up calculation of the fixed effects in a GLMM? | Two other possibilities to consider before buying a new computer.
Parallel computing - bootstrapping is easy to run in parallel. If your computer is reasonably new, you probably have four cores. Take a look at the multicore library in R.
Cloud computing is also a possibility and reasonably cheap. I have colleagues who have used the Amazon cloud for running R scripts. They found that it was quite cost-effective.
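To sketch the parallel idea in a self-contained way (Python here for portability; in R the analogue would be mclapply from multicore, or its successor in the parallel package): bootstrap replicates are independent, so they map cleanly onto workers.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(2)
data = rng.normal(10, 2, size=500)

def one_boot(seed):
    """One bootstrap replicate: resample with replacement, refit the 'model'
    (here just a mean, standing in for an expensive glmer fit)."""
    r = np.random.default_rng(seed)
    sample = r.choice(data, size=data.size, replace=True)
    return sample.mean()

# Threads shown for simplicity; CPU-bound model fits would instead use
# ProcessPoolExecutor (or joblib) to spread across cores.
with ThreadPoolExecutor(max_workers=4) as pool:
    boot_means = list(pool.map(one_boot, range(1000)))

se_boot = np.std(boot_means)   # bootstrap standard error of the statistic
```

Seeding each replicate separately keeps the results reproducible regardless of worker scheduling.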
31,784 | How can I speed up calculation of the fixed effects in a GLMM? | One possibility is simply a faster computer. But here is one trick which may work.
Generate a simulation of $Y^*$, but only conditional on $Y$, then just do OLS or LMM on the simulated $Y^*$ values.
Suppose your link function is $g(.)$. This says how you get from the probability of $Y=1$ to the $Y^*$ value, and is most likely the logit function $g(z)=\log \Big(\frac{z}{1-z}\Big)$.
So if you assume a Bernoulli sampling distribution for $Y\rightarrow Y\sim Bernoulli(p)$, and then use the Jeffreys prior for the probability, you get a beta posterior for $p\sim Beta(Y_{obs}+\frac{1}{2},1-Y_{obs}+\frac{1}{2})$. Simulating from this should be like lightning, and if it isn't, then you need a faster computer. Further, the samples are independent, so there is no need to check any "convergence" diagnostics such as in MCMC, and you probably don't need as many samples - 100 may work fine for your case. If you have binomial $Y$s, then just replace the $1$ in the above posterior with $n_i$, the number of trials of the binomial for each $Y_i$.
So you have a set of simulated values $p_{sim}$. You then apply the link function to each of these values, to get $Y_{sim}=g(p_{sim})$. Fit an LMM to $Y_{sim}$, which is probably quicker than the GLMM program. You can basically ignore the original binary values (but don't delete them!), and just work with the "simulation matrix" ($N\times S$, where $N$ is the sample size, and $S$ is the number of simulations).
So in your program, I would replace the $glmer()$ function with the $lmer()$ function, and $Y$ with a single simulation. You would then create some sort of loop which applies the $lmer()$ function to each simulation, and then takes the average as the estimate of $b$. Something like
$$a=\dots$$
$$b=0$$
$$do \ s=1,\dots,S$$
$$b_{est}=lmer(Y_s\dots)$$
$$b=b+\frac{1}{s}(b_{est}-b)$$
$$end$$
$$return(a*b)$$
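Made concrete in Python (a sketch only: plain OLS stands in for $lmer()$, so there are no random effects, and all the data are invented), the loop looks like this:

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_sims = 300, 100

# Invented binary data with one fixed-effect predictor
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * x))))

X = np.column_stack([np.ones(n), x])
b = np.zeros(2)
for s in range(1, n_sims + 1):
    # One draw per observation from the Beta(y + 1/2, 1 - y + 1/2) posterior
    p_sim = rng.beta(y + 0.5, 1 - y + 0.5)
    p_sim = p_sim.clip(1e-12, 1 - 1e-12)   # guard against draws rounding to 0 or 1
    y_star = np.log(p_sim / (1 - p_sim))   # apply the logit link
    b_est, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    b += (b_est - b) / s                   # running mean, as in the loop above
```

Here b[1] is the averaged slope; with a real mixed model you would swap the lstsq line for the $lmer()$ fit.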
Let me know if I need to explain anything a bit more clearly.
31,785 | Linear regression effect sizes when using transformed variables | I would suggest that transformations aren't important to get a normal distribution for your errors. Normality isn't a necessary assumption. If you have "enough" data, the central limit theorem kicks in and your standard estimates become asymptotically normal. Alternatively, you can use bootstrapping as a non-parametric means to estimate the standard errors. (Homoskedasticity, a common variance for the observations across units, is required for your standard errors to be right; robust options permit heteroskedasticity).
Instead, transformations help to ensure that a linear model is appropriate. To give a sense of this, let's consider how we can interpret the coefficients in transformed models:
outcome in units, predictor in units: A one unit change in the predictor leads to a beta unit change in the outcome.
outcome in units, predictor in log units: A one percent change in the predictor leads to a beta/100 unit change in the outcome.
outcome in log units, predictor in units: A one unit change in the predictor leads to a beta x 100% change in the outcome.
outcome in log units, predictor in log units: A one percent change in the predictor leads to a beta percent change in the outcome.
If transformations are necessary to have your model make sense (i.e., for linearity to hold), then the estimate from this model should be used for inference. An estimate from a model that you don't believe isn't very helpful. The interpretations above can be quite useful in understanding the estimates from a transformed model and can often be more relevant to the question at hand. For example, economists like the log-log formulation because the interpretation of beta is an elasticity, an important measure in economics.
I'd add that the back transformation doesn't work because the expectation of a function is not the function of the expectation; the log of the expected value of beta is not the expected value of the log of beta. Hence, your estimator is not unbiased. This throws off standard errors, too.
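A quick numeric check of the log-log ("elasticity") interpretation, with made-up data:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(1, 100, size=2000)
true_elasticity = 0.7
# Multiplicative (lognormal) noise, so the log-log model is correctly specified
y = 3.0 * x ** true_elasticity * rng.lognormal(0.0, 0.1, size=2000)

# Fit log(y) ~ log(x); the slope estimates the elasticity
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)

# Interpretation check: the exact effect of a 1% increase in x on y
pct_change_y = (1.01 ** slope - 1) * 100   # close to slope, i.e. about 0.7%
```

So a one percent change in the predictor leads to roughly a beta percent change in the outcome, as stated above.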
31,786 | Linear regression effect sizes when using transformed variables | The question is about marginal effects (of X on Y), I think, not so much about interpreting individual coefficients. As folk have usefully noted, these are only sometimes identifiable with an effect size, e.g. when there are linear and additive relationships.
If that's the focus then the (conceptually, if not practically) simplest way to think about the problem would seem to be this:
To get the marginal effect of X on Y in a linear normal regression model with no interactions, you can just look at the coefficient on X. But that's not quite enough, since it is estimated, not known. In any case, what one really wants for marginal effects is some kind of plot or summary that provides a prediction about Y for a range of values of X, and a measure of uncertainty. Typically one might want the predicted mean Y and a confidence interval, but one might also want predictions for the complete conditional distribution of Y for an X. That distribution is wider than the fitted model's sigma estimate alone would suggest, because it also takes into account uncertainty about the model coefficients.
There are various closed form solutions for simple models like this one. For current purposes we can ignore them and think instead more generally about how to get that marginal effects graph by simulation, in a way that deals with arbitrarily complex models.
Assume you want the effects of varying X on the mean of Y, and you're happy to fix all the other variables at some meaningful values. For each new value of X, take a size B sample from the distribution of model coefficients. An easy way to do so in R is to assume that it is Normal with mean coef(model) and covariance matrix vcov(model). Compute a new expected Y for each set of coefficients and summarize the lot with an interval. Then move on to the next value of X.
It seems to me that this method should be unaffected by any fancy transformations applied to any of the variables, provided you also apply them (or their inverses) in each sampling step. So, if the fitted model has log(X) as a predictor then log your new X before multiplying it by the sampled coefficient. If the fitted model has sqrt(Y) as a dependent variable then square each predicted mean in the sample before summarizing them as an interval.
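A bare-bones numpy version of that recipe (a sketch under simple OLS assumptions with invented data, not the actual CLARIFY software):

```python
import numpy as np

rng = np.random.default_rng(5)
n, B = 200, 1000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, size=n)

# Fit OLS and recover coef(model) and vcov(model) by hand
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
vcov = (resid @ resid / (n - 2)) * np.linalg.inv(X.T @ X)

# Size-B sample from the (approximate) distribution of the coefficients
draws = rng.multivariate_normal(beta_hat, vcov, size=B)

# Predicted mean of Y over a grid of new X values, with a 95% interval
x_grid = np.linspace(-2, 2, 5)
X_grid = np.column_stack([np.ones_like(x_grid), x_grid])
pred = draws @ X_grid.T                      # shape (B, len(x_grid))
lo, hi = np.percentile(pred, [2.5, 97.5], axis=0)
```

If the fitted model used log(X), you would log the grid before the matrix multiply; if it used sqrt(Y), you would square each entry of pred before taking percentiles, exactly as described above.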
In short, more programming but less probability calculation, and clinically comprehensible marginal effects as a result. This 'method' is sometimes referred to as CLARIFY in the political science literature, but is quite general.
31,787 | Linear regression effect sizes when using transformed variables | SHORT ANSWER: Absolutely correct, the back transformation of the beta value is meaningless. However, you can report the non-linearity as something like: "If you weigh 100kg then eating two pieces of cake a day will increase your weight by approximately 2kg in one week. However, if you weigh 200kg your weight would increase 2.5kg. See figure 1 for a depiction of this non-linear relationship" (figure 1 being a fit of the curve over the raw data).
LONG ANSWER:
The meaningfulness of the back transformed value varies but when properly done it usually has some meaning.
If you have a regression of natural log values on two x predictors with a beta of 0.13, and an intercept of 7.0, then the back transformation of 0.13 (1.14) is pretty much meaningless. That is correct. However, the back transformation of 7.13 is going to be a value that can be interpreted with some meaning. You could then subtract out the back transformation of 7.0 and be left with a remainder value that is your effect in a meaningful scale (152.2). If you want to look at any predicted value you would need to first calculate it all out in log values and then back-transform. This would have to be done separately for every predicted value and result in a curve if graphed.
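The arithmetic behind those numbers, spelled out (using the same hypothetical intercept of 7.0 and beta of 0.13):

```python
import math

intercept, beta = 7.0, 0.13

exp_beta = math.exp(beta)   # about 1.14: on its own, pretty much meaningless
# Back-transform whole predicted values, then difference them:
effect = math.exp(intercept + beta) - math.exp(intercept)   # about 152.2, in raw units
```

The point is that the difference of back-transformed predictions, not the back-transformed coefficient, carries the interpretable effect.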
This is often reasonable to do if your transformation has a relatively small effect on your data. Log-transformed reaction times are one kind of value that can be back transformed. When it's done correctly, you'll find that the values come out close to medians computed by simple calculations on the raw data.
Even then though one must be careful with interactions and non-interactions. The relative values vary across the scale. The analysis was sensitive to the log value while the back transformed values may show different patterns that make interactions seem like they shouldn't be there or vice versa. In other words, you can back transform things that make small changes to the data as long as you're careful.
Some changes, like the logistic transform of probability, can have quite massive impacts, especially near the ends of the scale. An example of a place you should never back transform is interaction plots near the high or low end of probability.
31,788 | How to motivate the definition of $R^2$ in `sklearn.metrics.r2_score`? | Squared correlation between the feature and the outcome
That would be the case if you have a single feature and the model is linear regression.
Squared correlation between the outcome and the predictions
Same as above, but it will hold also if there are more features.
Proportion of variance explained
It tells us the proportion of variance explained, but only for linear regression.
Comparison of the square loss incurred by the model to the square loss incurred by a baseline model
Again, for linear regression the formula used by Scikit-learn is equivalent to the others, as we can decompose the total sum of squares as TSS = ESS + RSS and get the equivalent formulation.
So in a sense, all the formulations are the same; they just have varying degrees of generality.
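For a linear model with an intercept, these equivalences are easy to verify numerically on invented data; the sketch below uses only numpy, where the 1 - RSS/TSS line mirrors the formula that sklearn.metrics.r2_score implements:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=500)
y = 2.0 + 3.0 * x + rng.normal(0, 1.5, size=500)

# OLS fit with an intercept
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# sklearn-style definition: 1 - RSS/TSS
r2_score_style = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

# squared correlation between outcome and predictions
r2_corr_pred = np.corrcoef(y, y_hat)[0, 1] ** 2

# single feature: squared correlation between feature and outcome
r2_corr_feat = np.corrcoef(x, y)[0, 1] ** 2
```

All three agree here; only the first generalizes to arbitrary models and test sets.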
As for calculating $R^2$ on the test set, you can check this thread, and if you search through CrossValidated.com and Scikit-learn GitHub issues, you can find many discussions considering Scikit-learn's choice of the test set mean in the denominator as controversial. As you can learn from this discussion, one of the problems with using the train set mean for the test set $R^2$ is the API, where metrics are usually defined as metric(y_true, y_pred) and the same interface can be used regardless of whether it is the training or test set. It would also mean that at evaluation time you would need access to the training data, which may not be possible in some setups where there is a hard split of the data between training time and validation time.
Notice also that $R^2$ comes from statistics, where we are usually interested in in-sample metrics, so the derivations likewise concern the train set $R^2$.
That would be the case if you have a single feature and the model is linear regression.
Squared correlation between the outcome and the pred | How to motivate the definition of $R^2$ in `sklearn.metrics.r2_score`?
Squared correlation between the feature and the outcome
That would be the case if you have a single feature and the model is linear regression.
Squared correlation between the outcome and the predictions
Same as above, but it will hold also if there are more features.
Proportion of variance explained
It tells us the proportion of the variance explained, but only for the linear regression.
Comparison of the square loss incurred by the model to the square loss incurred
Again, for linear regression using the formula from Scikit-learn is equivalent to the as we can decompose the squared error to TSS = ESS + RSS and get the equivalent formulation.
So in a sense, all the formulations are the same, just have varying degrees of generality.
As for calculating $R^2$ on the test set, you can check this thread and if you search through CrossValidated.com and Scikit-learn GitHub issues, you can find many discussions considering the Scikit-learn's choice of test set mean in the denominator as controversial. As you can learn from this discussion, one of the problems with using the train set mean for the test set $R^2$ is the API, where the metrics usually are defined as metric(y_true, y_pred) and the same interface can be used regardless if it is training or test set. It would also mean that during evaluation time you would need to have access to the training data, which may not be possible in some setups where there is a hard split of the data between training time and validation time.
Notice also that $R^2$ comes from statistics, where we are usually interested in in-sample metrics, so the derivations would as well regard the train set $R^2$. | How to motivate the definition of $R^2$ in `sklearn.metrics.r2_score`?
31,789 | How to motivate the definition of $R^2$ in `sklearn.metrics.r2_score`? | Comparison of the square loss incurred by the model to the square loss incurred by a baseline model
Comparison of the loss seems a lot like the pseudo-$R^2$ value e.g.
$$R^2_{pseudo} = 1 - \frac{D_{null} - D_{model}}{D_{null}}$$
But with the deviance (loss) equal to the sum of squared residuals and the null model being the mean, it becomes the same as the regular $R^2$.
In-sample, I am totally on board with the Python function. Out-of-sample, I have a problem. In the $R^2$ formula above, the Python implementation uses the $\bar{y}$ from the given data.
Possibly the problem stems from the use of $R^2$ as a measure for goodness of fit. But the $R^2$ value is not a goodness-of-fit measure. The value doesn't tell directly whether your model is a good fit or not; a perfect fit of the conditional distribution mean does not need to coincide with an $R^2=1$. Instead, it is a descriptive statistic that tells how large the variance in the noise/randomness is relative to the variance in the deterministic part. We see this in an alternative way to compute $R^2$
$$R^2 = \frac{SS_{model}}{SS_{model}+SS_{residuals}}$$
where $SS_{model} = \sum (\hat{y} - \bar{\hat{y}})^2$ and $SS_{residuals} = \sum (y - \hat{y})^2$ and $\bar{\hat{y}}$ is the mean of the modelled values.
This way of computing $R^2$ will be equivalent to the 'other $R^2$' if you have a linear model with an intercept. But it will be slightly different in
other situations, for instance there is no occurrence of cases with negative values as in the question Why is R^2 negative in my multiple linear regression model in python?
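A minimal sketch of the two computations (invented toy data): for OLS with an intercept we have TSS = ESS + RSS, so the loss-comparison form and the variance-decomposition form coincide:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x = rng.normal(size=80)
y = 2.0 + 3.0 * x + rng.normal(size=80)

yhat = LinearRegression().fit(x[:, None], y).predict(x[:, None])

ss_model = np.sum((yhat - yhat.mean()) ** 2)   # SS of the modelled values
ss_resid = np.sum((y - yhat) ** 2)             # residual SS
ss_total = np.sum((y - y.mean()) ** 2)         # total SS around the mean

r2_loss = 1 - ss_resid / ss_total              # loss-comparison definition
r2_decomp = ss_model / (ss_model + ss_resid)   # variance-decomposition form
print(r2_loss, r2_decomp)  # equal for a linear model with an intercept
```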
31,790 | Should stepwise regressions also be avoided for exploratory (hypothesis generating) modelling? | In exploratory analysis you have much more latitude for how you generate the hypotheses of interest, since there is no biasing of tests due to optimisation processes. (Of course, for this to apply you should ensure that when you undertake confirmatory analysis on the hypotheses, you use different data.) Nevertheless, that does not mean that there is no difference in optimality of different kinds of processes that can be used to identify hypotheses of possible interest. So while you can, in theory, "do whatever you want", you probably shouldn't.
Generally speaking, in exploratory analysis you will still want to identify hypotheses that have some evidentiary basis, so that you don't waste time testing lots of false hypotheses in the confirmatory phase. For this reason, it is often useful to have regard to the same types of statistical/evidentiary issues that will arise in confirmatory testing, though for a different reason. The main deficiency of stepwise methods in selection of variables is that it can travel through idiosyncratic paths that miss sets of explanatory variables with high evidence of a relationship to the response variable. This is why comparisons like the all-possible-models method are considered preferable to stepwise methods.
Assuming you have sufficient computational power to do so, I would recommend you conduct exploratory analysis by computing the goodness-of-fit statistics for all possible models and then examining those models that yield high levels of fit relative to the number of model parameters. This method is more likely to identify models with true hypotheses, and unlike the stepwise procedure, it is more systematic and will not miss important models. Since this is exploratory analysis, you should also allow yourself to be guided by exogenous concerns about what hypotheses/models are "interesting" in the context of your field, what are the costs of collecting data, etc., but you can use the all-possible-models method to augment this. This latter method will give more systematic statistical information on your exploratory data than stepwise methods.
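The all-possible-models comparison can be sketched as a brute-force loop over subsets. The AIC-style scoring and the toy data below are my own choices for illustration, not part of the answer:

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(size=n)  # only x0 and x2 matter

def aic(cols):
    """Gaussian AIC of an OLS fit on the chosen columns plus an intercept."""
    Z = np.column_stack([np.ones(n), X[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    k = Z.shape[1] + 1  # coefficients plus the error variance
    return n * np.log(rss / n) + 2 * k

# Score every one of the 2^p candidate models, then rank by fit vs. complexity
results = sorted((aic(cols), cols)
                 for r in range(p + 1)
                 for cols in combinations(range(p), r))
best = results[0][1]
print(best)  # the top model should contain the true predictors 0 and 2
```

This is exhaustive rather than path-dependent, which is the point of preferring it over stepwise search; with many predictors one would need a branch-and-bound or regularised alternative instead of the full loop.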
Finally, you say that hypotheses generated by the stepwise method are "shaky". It is okay for hypotheses generated in the exploratory phase to be shaky, because the whole point is that you are only generating tentative hypotheses for later testing and confirmation. Indeed, arguably all hypotheses generated in the exploratory phase are and ought to be "shaky". The reason to prefer all-possible-models over stepwise methods is that it more systematically identifies hypotheses supported in the exploratory phase, which makes it a bit less likely that you will run down rabbit-holes in the confirmatory phase pursuing false hypotheses.
31,791 | Should stepwise regressions also be avoided for exploratory (hypothesis generating) modelling? | Exploration basically means that you can do whatever you want. If there are, among your too many variables, some which are actually true predictors, you can hope for the drop functions to find them. You can then gather new data to infer whether these are really actual predictors.
However, gathering new data comes with a cost (or you would have done that before). So the actual question is: are you ready to gather more data based on nothing more than a stepwise regression approach?
That is not so much a mathematical/statistical question but depends on how costly gathering new data is and how much a positive result would be worth for your further research/career etc.
So, basically, see if you have any better options than stepwise. If not, perform stepwise to reduce the number of candidate predictors. If the result looks really promising, consider sampling new data on those predictors to do inferential statistics on.
31,792 | Should stepwise regressions also be avoided for exploratory (hypothesis generating) modelling? | I think the assertion that stepwise selection is always unreliable is a bit too strong.
I think the stepwise selection procedure can probably be amended to remove some of its weaknesses. One example might be to use a different sample from the same population in each step, so that you are not inadvertently overfitting the model and are only picking predictors or features that result in a model that generalises well enough for a purpose.
As with anything, when we are performing regression analysis, I do not think the notion that there is a "correct" model, or that there are "correct" or "incorrect" variables to choose is a useful one. I generally only care about a model's predictive performance, and its ability to describe the associations between the predictive and dependent variables. In fact, in many cases, the number of models that may produce an acceptable "fit" to the data and provide good predictive performance is probably very large, and it might be useful to think rather in terms of a set of models that are acceptable, and a set of models that are not, with the goal of a feature selection / model selection exercise being to identify a model that is in the set of acceptable ones.
But just my thought as a practitioner, rather than as a "scientist".
PS - I think the LASSO method, or other methods that use penalties and shrink coefficients, might well be slightly better founded, but I am not convinced they solve all of the issues with step-wise in any case.
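For the LASSO route mentioned in the PS, a hedged Scikit-learn sketch (the toy data are invented; `LassoCV` is the real cross-validated estimator):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(size=n)  # 8 pure-noise features

# The L1 penalty shrinks coefficients and sets many exactly to zero, so
# selection happens jointly under cross-validation rather than greedily.
fit = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(np.abs(fit.coef_) > 1e-6)
print(selected)  # should include the true predictors 0 and 1
```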
31,793 | Intuition for why LDA is a special case of naive Bayes | Here's my intuition:
The LDA classifier assumes that across all classes, the $p$ predictors $\boldsymbol{X}_k$ (for $k=1, \dots,p$) all share some covariance matrix ${\boldsymbol \Sigma}$, but may have different means $\boldsymbol{\mu}_k$. Thus, if you define the alternate set of predictors $\boldsymbol{Z}$ to be $p$ independent normal random variables with variance 1 and means $\boldsymbol{\Sigma}^{-1/2} \boldsymbol{\mu}$, then $\boldsymbol{X} = \boldsymbol{\Sigma}^{1/2} \boldsymbol{Z}$. This is a linear transformation, so a linear classifier on the $\boldsymbol{Z}$ variables would be linear on the $\boldsymbol{X}$ variables, too.
Note that the naive Bayes classifier is linear on its predictors (this is shown on page 159 of your reference), and it clearly applies on the $\boldsymbol{Z}$ predictors since they are independent by definition. So LDA is the same as some naive Bayes classifier. But as mentioned in your reference (also page 159), the same is true of any linear classifier.
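A numerical sketch of the transformation argument (my own construction, not from the reference): since $\boldsymbol{X} = \boldsymbol{\Sigma}^{1/2} \boldsymbol{Z}$ is an invertible linear map, LDA fit on the raw predictors and on the whitened ones produces the same predictions:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.8], [0.8, 1.0]])  # shared covariance matrix
L = np.linalg.cholesky(Sigma)

# Two classes: same covariance, different means (the LDA assumption)
X0 = rng.standard_normal((200, 2)) @ L.T
X1 = rng.standard_normal((200, 2)) @ L.T + np.array([3.0, 3.0])
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200), np.ones(200)]

# Whitened predictors: independent unit-variance coordinates
Z = X @ np.linalg.inv(L).T

pred_X = LinearDiscriminantAnalysis().fit(X, y).predict(X)
pred_Z = LinearDiscriminantAnalysis().fit(Z, y).predict(Z)
print(np.mean(pred_X == pred_Z))  # LDA is invariant to invertible linear maps
```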
31,794 | Intuition for why LDA is a special case of naive Bayes | LDA is a special case of a n̶a̶i̶v̶e̶ Bayes classifier.
It is assuming Gaussian distributions
For different classes, the distributions have the same variance (the same covariance matrix for their distribution with respect to the variables $X$).
Gaussian Naive Bayes classifier is a special case of LDA
If you consider the naive Bayes classifier with the assumption of Gaussian distributions and the same variance for different groups/classes, then you could see this as a special case of LDA. It is like LDA with the restriction that the covariance matrix $\Sigma$ is diagonal.
Other point of view
You might also see the LDA as a pre-treatment step giving you one or more components as a result. Then afterward you apply naive Bayes on the components. So LDA can be seen as a special case of naive Bayes in the sense that it is naive Bayes with a pre-treatment extracting the first LDA components.
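A hedged numerical illustration of the "diagonal $\Sigma$" point above (the dataset is invented for the example): when the classes truly share a diagonal covariance, Gaussian naive Bayes and LDA learn essentially the same linear boundary:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Shared *diagonal* covariance diag(1, 4), different class means
X0 = rng.standard_normal((500, 2)) * np.array([1.0, 2.0])
X1 = rng.standard_normal((500, 2)) * np.array([1.0, 2.0]) + np.array([3.0, 3.0])
X = np.vstack([X0, X1])
y = np.r_[np.zeros(500), np.ones(500)]

pred_lda = LinearDiscriminantAnalysis().fit(X, y).predict(X)
pred_gnb = GaussianNB().fit(X, y).predict(X)
print(np.mean(pred_lda == pred_gnb))  # near 1: same model family here
```

The two estimators still differ slightly because GaussianNB estimates per-class variances while LDA pools a full covariance, so agreement is high rather than exact.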
31,795 | Marginal distribution of uniform distribution over sphere | I want to flesh out John L's idea. Let $d\gt 2$ be the dimension of the space in which we will be working. (When $d=2$ the marginal is the uniform distribution on the circle -- that fully determines it, but it has no density function.) As you go through, note that the same analysis applies mutatis mutandis to finding the distribution of any proper subset of the coordinates, from $1$ through $d-1$ of them.
1. The uniform distribution on the surface of the unit $d-1$ sphere $S^{d-1}\subset\mathbb{R}^d$ is the radial projection of the standard $d$-variate Normal distribution in $\mathbb{R}^d.$ See How to generate uniformly distributed points on the surface of the 3-d unit sphere?.
2. Writing $(Z_1,Z_2,\ldots,Z_d)$ for such a Normal variate and $|Z| = \sqrt{Z_1^2+\cdots + Z_d^2}$ for its norm, $(1)$ means $(X_1,\ldots, X_d) = (Z_1/|Z|,\ldots,Z_d/|Z|)$ has a uniform distribution on $S^{d-1}.$
3. By definition, $U^2=Z_1^2+Z_2^2$ has a $\chi^2(2)$ distribution and $V^2=Z_3^2 + \cdots + Z_d^2$ has a $\chi^2(d-2)$ distribution. (This is one place we must require $d\gt 2.$)
4. Because the $Z_i$ are independent, $(Z_1,Z_2)$ is independent of $(Z_3,\ldots,Z_d),$ whence $U^2$ and $V^2$ are independent.
5. The ratio $\frac{U^2}{U^2+V^2} = X_1^2+X_2^2$ has a Beta$(1,d/2-1)$ distribution.
6. Because $(Z_1,\ldots,Z_d)$ is spherically symmetric, so is $(X_1,\ldots,X_d),$ whence (via projection) $(X_1,X_2)$ is circularly symmetric.
7. $(5)$ and $(6)$ say that in polar coordinates $(R,\Theta)$ with $X_1=R\cos\Theta$ and $X_2=R\sin\Theta,$ $R^2$ has a Beta distribution and $\Theta$ is independently uniformly distributed on (say) the interval $[0,2\pi).$
8. Conditional on $(X_1,X_2),$ the remaining coordinates $(X_3,\ldots,X_d)$ must be uniformly distributed on the slice of the sphere determined by $(X_1,X_2).$ That slice has radius $\sqrt{1-R^2}.$
We can immediately write down expressions for the distribution. Since the density of a Beta distribution is $f(t;\alpha,\beta) = t^{\alpha-1}(1-t)^{\beta-1}/B(\alpha,\beta),$ setting $t=r^2$ gives the density for $R$ as
$$f_R(r;d) = \frac{2}{B\left(1,\frac{d}{2}-1\right)}\,r(1-r^2)^{d/2-2} = (d-2)\,r\,(1-r^2)^{d/2-2}$$
for $0\le r\le 1$ and $d\gt 2.$
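A quick Monte-Carlo check of the Beta claim in $(5)$, mirroring the radial-projection construction in $(1)$ (a sketch only, not the original simulation code):

```python
import numpy as np

d = 12
rng = np.random.default_rng(0)
Z = rng.standard_normal((100_000, d))
X = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # uniform on S^{d-1}

r2 = X[:, 0] ** 2 + X[:, 1] ** 2  # claimed Beta(1, d/2 - 1) = Beta(1, 5)
# Beta(1, 5) has mean 1/6 and variance 5/252
print(r2.mean(), r2.var())
```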
The joint density $f_{R,\Theta;d}$ is the product of $f_R$ and the density of $\Theta,$ given by $1/(2\pi)$ on the interval $[0,2\pi).$ Changing back to rectangular coordinates gives
$$f_{x_1,x_2}(x_1,x_2;d) = \frac{d-2}{2\pi}\,(1-x_1^2-x_2^2)^{d/2-2}\tag{*}$$
for $x_1^2+x_2^2\le 1.$ (See the example at https://stats.stackexchange.com/a/154298/919 for details.)
As an example, I generated a million draws from the uniform distribution on $S^{11}$ (so that $d=12$). Here is a scatterplot matrix of the first five components (showing just the first thousand observations for clarity).
The circular symmetry of each pair is apparent.
Next is a histogram of all million values of $R$ on which the root-Beta density function $f_R$ is plotted, with excellent agreement:
The $(X_i+1)/2$ have Beta$((d-1)/2,(d-1)/2)$ distributions, as shown at Distribution of scalar products of two random unit vectors in $D$ dimensions. This indeed is what one obtains from $(*)$ by integrating out one of the variables. Here is a histogram from the simulation with that Beta density overplotted:
31,796 | Marginal distribution of uniform distribution over sphere | I don't think there is a closed form, but there are some observations below.
Let $Z_1,...,Z_d$ be iid standard normal.
Then, $\left(\frac{Z_1}{\sqrt{Z_1^2+...+Z_d^2}},...,\frac{Z_d}{\sqrt{Z_1^2+...+Z_d^2}} \right)$ is uniform on the $d$-dimensional unit sphere.
We want to find the joint distribution of $(X_1,X_2)=\left(\frac{Z_1}{\sqrt{Z_1^2+...+Z_d^2}},\frac{Z_2}{\sqrt{Z_1^2+...+Z_d^2}} \right)$.
First notice that $\left(\frac{1}{X_1^2},\frac{1}{X_2^2} \right)=\left(\frac{Z_1^2+...+Z_d^2}{Z_1^2},\frac{Z_1^2+...+Z_d^2}{Z_2^2} \right)=\left(1+\frac{Z_2^2+Z_3^2...+Z_d^2}{Z_1^2},1+\frac{Z_1^2+Z_3^2...+Z_d^2}{Z_2^2} \right)$.
Thus, $\left(\frac{1}{d-1}\left( \frac{1}{X_1^2}-1 \right),\frac{1}{d-1}\left( \frac{1}{X_2^2}-1 \right)\right)$ has the same distribution as $\left(\frac{(V_2+V_3)/(d-1)}{V_1},\frac{(V_1+V_3)/(d-1)}{V_2} \right)$ where $V_1,V_2,V_3$ are independent chi-square random variables with degrees of freedom 1, 1, and $d-2$ respectively.
$\frac{(V_2+V_3)/(d-1)}{V_1}$ and $\frac{(V_1+V_3)/(d-1)}{V_2}$ are dependent and each has an F-distribution with $d-1$ (numerator) and 1 (denominator) degrees of freedom. There are some other definitions of the bivariate F-distribution that have closed forms, but I don't think this one does. The density for $X_1$ or $X_2$ is $$f(x)=\frac{((d-2)(1 - x^2))^{d/2}(d-2 + x^2)^{(1 - d)/2}\Gamma((1 + d)/2)}{\sqrt{d-1}\sqrt{\pi}(1 - x^2)^2\Gamma(d/2)}$$
for $x \in (-1,1)$. I don't think it is possible to find the distribution of $X_2$ given $X_1$ in an easy form.
The following R functions generate random numbers and calculate the joint distribution function by numerical integration.
library(cubature)
rX1X2=function(n,d) {
z1=rnorm(n)
z2=rnorm(n)
v3=rchisq(n,d-2)
return(matrix(c(z1,z2)/sqrt(z1^2+z2^2+v3),byrow=F,ncol=2))
}
pX1X2=function(x1,x2,d) {
f1=function(x,d,x1,x2) {
den=sqrt(x[1]^2+x[2]^2+x[3])
return(ifelse((x[1]/den)<x1 & (x[2]/den)<x2,
dnorm(x[1])*dnorm(x[2])*dchisq(x[3],d-2),0))
}
return(adaptIntegrate(f1,lowerLimit=c(-Inf,-Inf,0),upperLimit =c(Inf,Inf,Inf),
d=d,x1=x1,x2=x2,maxEval=10000)$integral)
}
pX1X2(-0.2,0.3,5) #estimated probability that X1<-0.2 and X2<0.3
x=rX1X2(100000,5)
mean(x[,1]<(-0.2) & x[,2]<0.3) #estimated probability from simulation
y=(1/x[,1]^2-1)/4
plot(log(quantile(y,c(1:99)/100)),
log(qf(c(1:99)/100,4,1))) #verify that y has an F(4,1) distribution
Q-Q plot (log-scale) verifying that the transformed variable has an F-distribution.
31,797 | Why is the F-Statistic $\approx$ 1 when the null hypothesis is true? | Consider a linear model $y_i=\beta_0+x_i'\beta+u_i$, with $u_i\sim (0,\sigma^2)$.
The F-statistic is (see e.g. Proof that F-statistic follows F-distribution)
$$ F = \frac{(\text{TSS}-\text{RSS})/p}{\text{RSS}/(n-p-1)},$$
with $\text{TSS}=\sum_i(y_i-\bar{y})^2$ and $\text{RSS}=\sum_i(y_i-\hat{y}_i)^2$, where $p$ is the number of slope parameters.
Under classical assumptions, $\text{RSS}/(n-p-1)$ is an unbiased estimator of $\sigma^2$, i.e.,
$$E[\text{RSS}/(n-p-1)]=\sigma^2.$$
Likewise, it is a well-known result that, under the null $y_i=\beta_0+u_i$, the sample variance $\sum_i(y_i-\bar{y})^2/(n-1)$ is an unbiased estimator of $\sigma^2$, i.e.,
$$E[\text{TSS}/(n-1)]=\sigma^2.$$
(It is always an unbiased estimator of $\sigma^2_y$, the variance of $y$, which however no longer coincides with the error variance under the alternative; that is what gives the test its power.)
Putting together the terms in the numerator,
$$
E[(\text{TSS}-\text{RSS})/p]=\left[(n-1)\sigma^2-(n-p-1)\sigma^2\right]/p=\sigma^2
$$
So if you approximate $E(F)$ (of course, the expectation of a ratio is not generally the ratio of expectations), you get
$$
E(F)\approx\frac{E[(\text{TSS}-\text{RSS})/p]}{E[\text{RSS}/(n-p-1)]}=\frac{\sigma^2}{\sigma^2}=1
$$
Actually, given that the F-statistic follows an F-distribution with $p$ and $d:=n-p-1$ degrees of freedom, we may use known results for the exact expectation of F-distributed random variables, namely that
$$E(F)=\frac{d}{d-2}$$
when $d>2$. So
$$E(F)=\frac{n-p-1}{n-p-1-2}=\frac{n-p-1}{n-p-3},$$
which will of course be close to 1 for cases where the sample size $n$ is large relative to the number of regressors. Hence, the above approximation works very well in this case.
Of course, what we have here is a result for the expected value of the F-statistic when the null is true. As with any expectation, this does not mean that the statistic itself equals 1 in any given sample, but that it will "hover around" 1 if we repeatedly compute F-statistics in situations where the null is true. See e.g. the simulation provided at https://stats.stackexchange.com/a/258476/67799 for an illustration.
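The "hover around 1" behaviour is easy to simulate; the following Python sketch (NumPy assumed) repeatedly fits OLS under a true null and compares the average F-statistic with the exact expectation $(n-p-1)/(n-p-3)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 50, 3, 5000
fstats = np.empty(reps)

for r in range(reps):
    X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])
    y = rng.standard_normal(n)  # the null is true: y does not depend on the regressors
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rss = resid @ resid
    tss = np.sum((y - y.mean())**2)
    fstats[r] = ((tss - rss) / p) / (rss / (n - p - 1))

print(fstats.mean(), (n - p - 1) / (n - p - 3))  # both close to 1
```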
31,798 | Why is the F-Statistic $\approx$ 1 when the null hypothesis is true? | Another way to think about this:
When there is no experimental effect, the only variation you have is within-subjects. So even when you split the participants into two groups, the same variation is endemic within each (no experimental variance, only within-subjects exists). Thus, your variance ratio of between / within is really just within / within, or 1.
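This within/within intuition can be illustrated with a quick simulation (Python with SciPy assumed): split draws from a single population into two arbitrary groups, and the one-way ANOVA F ratio averages out near 1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
reps, m = 4000, 30  # m participants per group, no experimental effect

fvals = np.empty(reps)
for r in range(reps):
    g1 = rng.standard_normal(m)  # both groups draw from the same population,
    g2 = rng.standard_normal(m)  # so only within-subjects variation exists
    fvals[r] = stats.f_oneway(g1, g2).statistic

print(fvals.mean())  # hovers near 1 (exact expectation here: 58/56)
```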
31,799 | Convincing Causal Analysis using a DAG and Backdoor Path Criterion | No, we can never be sure that the DAG is correct. This is one of the fundamental principles of causal inference informed by DAGs. DAGs are a non-parametric abstraction of reality. You will find in much of the DAG literature things like:
In causal diagrams, an arrow represents a "direct effect" of the parent on the child, although this effect is direct only relative to a certain level of abstraction, in that the graph omits any variables that might mediate the effect represented by the arrow.
Greenland and Pearl, 2017
This is completely unavoidable. Take pharmacological research. There are many, many cases of drugs which reach the market, where the researchers do not know the actual biological mechanism that causes their product to work. They may have theories, and these theories can be encapsulated using DAGs. The resulting analysis is conditional on the DAG being correct (at a level of abstraction). Other researchers may have different theories and consequently different DAGs, and that is completely OK.
31,800 | Convincing Causal Analysis using a DAG and Backdoor Path Criterion | We can first think more generally about what a causal diagram really is. Then let's discuss how one might practically use them as an informative prior, and jointly with observational data, to confidently predict causal effects.
A causal diagram is a directed acyclic graph (DAG) representation of the functional relationships between the variables (i.e. nodes) within the distribution. The structure of the graph encodes the conditional dependencies and independencies among the variables. The diagram essentially asserts our assumptions about the world in an easy-to-understand visual format. Provided with a joint distribution p(a,b,c), the same distribution can be written as either:
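The original illustration of these factorizations is missing here; by the chain rule they presumably took a form such as

$$p(a,b,c)=p(a)\,p(b\mid a)\,p(c\mid a,b)=p(c)\,p(b\mid c)\,p(a\mid b,c)=\cdots$$

with each ordering of the chain rule corresponding to a different (fully connected) DAG over $a$, $b$ and $c$.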
So which causal diagram is the correct one for the joint distribution? This demonstrates that the mapping from causal diagrams to observational data is many to one. Multiple hypotheses are plausible, and it is usually impossible to choose definitively between them by looking at the observational data alone.
How can we then use observational data to infer the correct diagram? For a given causal diagram, we mimic the effects of an intervention by conditioning on a variable (i.e. we force it to take a particular value). The action is encapsulated by the do-operator in p(Y|do(X)) and more formally by do-calculus, a tool for causal inference that allows us to disambiguate what needs to be estimated from the observational data. The front- and back-door approaches are but two doors through which we can eliminate all the do's in our quest to climb Mount Intervention.
Suffice it to say, by removing all incoming edges to the node of interest, an intervention modifies the original joint distribution into the post-interventional distribution. A causal query is identifiable if we can remove all do-operators, in which case we can use the observational data to estimate the causal effect. Otherwise the causal query is non-identifiable, and a real-world interventional experiment would be required to determine the causal effect.
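As an illustration of the back-door idea, here is a minimal Python sketch (NumPy assumed; the variables and coefficients are invented for illustration): a binary confounder c drives both treatment x and outcome y, so the naive contrast is biased, while averaging the c-stratified contrasts over p(c) — the back-door adjustment formula — recovers the true effect of 1.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

c = rng.binomial(1, 0.5, n)                      # binary confounder
x = rng.binomial(1, 0.2 + 0.6 * c)               # treatment, pushed up by c
y = 1.0 * x + 2.0 * c + rng.standard_normal(n)   # true causal effect of x is 1

# Naive contrast confounds the effect of x with that of c.
naive = y[x == 1].mean() - y[x == 0].mean()

# Back-door adjustment: weight the c-stratified contrasts by p(c).
adjusted = sum(
    (y[(x == 1) & (c == v)].mean() - y[(x == 0) & (c == v)].mean()) * (c == v).mean()
    for v in (0, 1)
)

print(naive, adjusted)  # naive is biased upward (about 2.2); adjusted is near 1
```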
While a researcher may never be completely persuaded of the soundness and integrity of the causal diagram they've constructed, they do have mechanisms in place to empirically test a partial collection of relationships between the sets of variables. If the dependencies and independencies are not present in the observational data, this might be a signal that the diagram is inaccurate. The researcher can then iteratively test and update the causal diagram to be more in line with the information contained within the observational data (and domain knowledge if applicable).