idx int64 1 56k | question stringlengths 15 155 | answer stringlengths 2 29.2k ⌀ | question_cut stringlengths 15 100 | answer_cut stringlengths 2 200 ⌀ | conversation stringlengths 47 29.3k | conversation_cut stringlengths 47 301 |
|---|---|---|---|---|---|---|
18,901 | Does the "No Free Lunch Theorem" apply to general statistical tests? | You can cite the No Free Lunch Theorem if you want, but you could also just cite the Modus Ponens (also known as the Law of Detachment, the basis of deductive reasoning), which is the root of the No Free Lunch Theorem.
The No Free Lunch Theorem encompasses a more specific idea: the fact that there is no algorithm that can fit all purposes. In other words, the No Free Lunch Theorem is basically saying that there is no algorithmic magic bullet. This is rooted in the Modus Ponens, because for an algorithm or a statistical test to give the correct result, you need to satisfy its premises.
Just like with all mathematical theorems, if you violate the premises, then the statistical test is simply meaningless, and you cannot derive any truth from it. So if you want to explain your data using your test, you must assume that the required premises are met; if they are not (and you know that), then your test is dead wrong.
That's because scientific reasoning is based on deduction: basically, your test/law/theorem is an implication rule, which says that if you have the premise A then you can conclude B: A => B. But if you don't have A, then you can have either B or not B, and the implication holds in both cases; that is one of the basic tenets of logical inference/deduction (the Modus Ponens rule). In other words, if you violate the premises, the result doesn't matter, and you cannot deduce anything.
Remember the binary table of implication:
A B A=>B
F F T
F T T
T F F
T T T
So in your case, to simplify, you have Dependent_Variables => ANOVA_correct. Now, if you use independent variables, then Dependent_Variables is False and the implication is vacuously true: since the Dependent_Variables assumption is violated, it tells you nothing about whether ANOVA_correct holds.
Of course this is simplistic, and in practice your ANOVA test may still return useful results, because there is almost always some degree of dependence between nominally independent variables; but this gives you an idea of why you just can't rely on the test without fulfilling its assumptions.
However, you can also use tests whose premises are not satisfied by the original problem, by reducing your problem: by explicitly relaxing the independence constraint, your result may still be meaningful, although not guaranteed (because your results then apply to the reduced problem, not the full problem, so you cannot transfer every result unless you can prove that the additional constraints of the new problem do not impact your test and thus your results).
In practice, this is often used to model practical data, for example with Naive Bayes: dependent (instead of independent) variables are modelled using a model that assumes independent variables, and surprisingly it often works very well, sometimes better than models accounting for dependencies. You may also be interested in this question about how to use ANOVA when the data doesn't exactly meet all the assumptions.
To summarize: if you intend to work on practical data and your goal is not to prove any scientific result but to make a system that just works (i.e., a web service or whatever practical application), the independence assumption (and maybe other assumptions) can be relaxed; but if you're trying to deduce/prove some general truth, then you should always use tests for which you can mathematically guarantee (or at least safely and provably assume) that all premises are satisfied. | Does the "No Free Lunch Theorem" apply to general statistical tests? | You can cite the No Free Lunch Theorem if you want, but you could also just cite the Modus Ponens (also known as the Law of Detachment, the basis of deductive reasoning), which is the root of the No F | Does the "No Free Lunch Theorem" apply to general statistical tests?
You can cite the No Free Lunch Theorem if you want, but you could also just cite the Modus Ponens (also known as the Law of Detachment, the basis of deductive reasoning), which is the root of the No Free Lunch Theorem.
The No Free Lunch Theorem encompasses a more specific idea: the fact that there is no algorithm that can fit all purposes. In other words, the No Free Lunch Theorem is basically saying that there is no algorithmic magic bullet. This is rooted in the Modus Ponens, because for an algorithm or a statistical test to give the correct result, you need to satisfy its premises.
Just like with all mathematical theorems, if you violate the premises, then the statistical test is simply meaningless, and you cannot derive any truth from it. So if you want to explain your data using your test, you must assume that the required premises are met; if they are not (and you know that), then your test is dead wrong.
That's because scientific reasoning is based on deduction: basically, your test/law/theorem is an implication rule, which says that if you have the premise A then you can conclude B: A => B. But if you don't have A, then you can have either B or not B, and the implication holds in both cases; that is one of the basic tenets of logical inference/deduction (the Modus Ponens rule). In other words, if you violate the premises, the result doesn't matter, and you cannot deduce anything.
Remember the binary table of implication:
A B A=>B
F F T
F T T
T F F
T T T
So in your case, to simplify, you have Dependent_Variables => ANOVA_correct. Now, if you use independent variables, then Dependent_Variables is False and the implication is vacuously true: since the Dependent_Variables assumption is violated, it tells you nothing about whether ANOVA_correct holds.
Of course this is simplistic, and in practice your ANOVA test may still return useful results, because there is almost always some degree of dependence between nominally independent variables; but this gives you an idea of why you just can't rely on the test without fulfilling its assumptions.
However, you can also use tests whose premises are not satisfied by the original problem, by reducing your problem: by explicitly relaxing the independence constraint, your result may still be meaningful, although not guaranteed (because your results then apply to the reduced problem, not the full problem, so you cannot transfer every result unless you can prove that the additional constraints of the new problem do not impact your test and thus your results).
In practice, this is often used to model practical data, for example with Naive Bayes: dependent (instead of independent) variables are modelled using a model that assumes independent variables, and surprisingly it often works very well, sometimes better than models accounting for dependencies. You may also be interested in this question about how to use ANOVA when the data doesn't exactly meet all the assumptions.
To summarize: if you intend to work on practical data and your goal is not to prove any scientific result but to make a system that just works (i.e., a web service or whatever practical application), the independence assumption (and maybe other assumptions) can be relaxed; but if you're trying to deduce/prove some general truth, then you should always use tests for which you can mathematically guarantee (or at least safely and provably assume) that all premises are satisfied. | Does the "No Free Lunch Theorem" apply to general statistical tests? | You can cite the No Free Lunch Theorem if you want, but you could also just cite the Modus Ponens (also known as the Law of Detachment, the basis of deductive reasoning), which is the root of the No F
You can cite the No Free Lunch Theorem if you want, but you could also just cite the Modus Ponens (also known as the Law of Detachment, the basis of deductive reasoning), which is the root of the No F |
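The implication truth table in the answer above can be checked mechanically. A small Python sketch (not part of the original answer), encoding A => B as `(not A) or B`:

```python
# Material implication: A => B is logically equivalent to (not A) or B.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# Reproduce the truth table from the answer (F F T / F T T / T F F / T T T).
for a in (False, True):
    for b in (False, True):
        print(a, b, implies(a, b))

# With a violated premise (A False), the implication is vacuously true
# for both values of B, so nothing about B can be deduced.
assert implies(False, False) and implies(False, True)
```

The two rows with A False are exactly the "violated premise" cases: the implication holds no matter what B is, which is why nothing follows from it.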
18,902 | Is Support Vector Machine sensitive to the correlation between the attributes? | Linear kernel: The effect here is similar to that of multicollinearity in linear regression. Your learned model may not be particularly stable against small variations in the training set, because different weight vectors will have similar outputs. The training set predictions, though, will be fairly stable, and so will test predictions if they come from the same distribution.
RBF kernel: The RBF kernel only looks at distances between data points. Thus, imagine you actually have 11 attributes, but one of them is repeated 10 times (a pretty extreme case). Then that repeated attribute will contribute 10 times as much to the distance as any other attribute, and the learned model will probably be much more impacted by that feature.
One simple way to discount correlations with an RBF kernel is to use the Mahalanobis distance: $d(x, y) = \sqrt{ (x - y)^T S^{-1} (x - y) }$, where $S$ is an estimate of the sample covariance matrix. Equivalently, map all your vectors $x$ to $C x$ and then use the regular RBF kernel, where $C$ is such that $S^{-1} = C^T C$, e.g. the Cholesky decomposition of $S^{-1}$. | Is Support Vector Machine sensitive to the correlation between the attributes? | Linear kernel: The effect here is similar to that of multicollinearity in linear regression. Your learned model may not be particularly stable against small variations in the training set, because dif | Is Support Vector Machine sensitive to the correlation between the attributes?
Linear kernel: The effect here is similar to that of multicollinearity in linear regression. Your learned model may not be particularly stable against small variations in the training set, because different weight vectors will have similar outputs. The training set predictions, though, will be fairly stable, and so will test predictions if they come from the same distribution.
RBF kernel: The RBF kernel only looks at distances between data points. Thus, imagine you actually have 11 attributes, but one of them is repeated 10 times (a pretty extreme case). Then that repeated attribute will contribute 10 times as much to the distance as any other attribute, and the learned model will probably be much more impacted by that feature.
One simple way to discount correlations with an RBF kernel is to use the Mahalanobis distance: $d(x, y) = \sqrt{ (x - y)^T S^{-1} (x - y) }$, where $S$ is an estimate of the sample covariance matrix. Equivalently, map all your vectors $x$ to $C x$ and then use the regular RBF kernel, where $C$ is such that $S^{-1} = C^T C$, e.g. the Cholesky decomposition of $S^{-1}$. | Is Support Vector Machine sensitive to the correlation between the attributes?
Linear kernel: The effect here is similar to that of multicollinearity in linear regression. Your learned model may not be particularly stable against small variations in the training set, because dif |
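The Cholesky trick in the answer above can be verified numerically: mapping every vector through $C$ turns the Mahalanobis distance into an ordinary Euclidean distance, so the regular RBF kernel on the mapped data equals a Mahalanobis-distance RBF kernel on the original data. A NumPy sketch (illustrative; the original answer gives no code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated features: duplicate one column, as in the "repeated attribute"
# example. The duplication makes the sample covariance singular, so a tiny
# ridge term is added to keep it invertible.
X = rng.normal(size=(500, 3))
X = np.hstack([X, X[:, :1]])
S = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])

# Choose C such that S^{-1} = C^T C, here from the Cholesky factor of S^{-1}.
C = np.linalg.cholesky(np.linalg.inv(S)).T

x, y = X[0], X[1]
d_mahal = np.sqrt((x - y) @ np.linalg.inv(S) @ (x - y))
d_eucl = np.linalg.norm(C @ x - C @ y)
print(d_mahal, d_eucl)  # equal up to floating point

# Hence exp(-gamma * d_mahal^2) equals the ordinary RBF kernel on C @ x, C @ y.
gamma = 0.5
k_mahal = np.exp(-gamma * d_mahal ** 2)
k_rbf = np.exp(-gamma * np.sum((C @ x - C @ y) ** 2))
assert np.isclose(k_mahal, k_rbf)
```

In practice you would transform the whole training matrix once (`X @ C.T`) and then train any RBF-kernel SVM implementation on the transformed data.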
18,903 | Getting negative predicted values after linear regression | Linear regression does not respect the bounds of 0. It's linear, always and everywhere. It may not be appropriate for values that need to be close to 0 but are strictly positive.
One way to manage this, particularly in the case of price, is to use the natural log of price. | Getting negative predicted values after linear regression | Linear regression does not respect the bounds of 0. It's linear, always and everywhere. It may not be appropriate for values that need to be close to 0 but are strictly positive.
One way to manage th | Getting negative predicted values after linear regression
Linear regression does not respect the bounds of 0. It's linear, always and everywhere. It may not be appropriate for values that need to be close to 0 but are strictly positive.
One way to manage this, particularly in the case of price, is to use the natural log of price. | Getting negative predicted values after linear regression
Linear regression does not respect the bounds of 0. It's linear, always and everywhere. It may not be appropriate for values that need to be close to 0 but are strictly positive.
One way to manage th |
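A minimal sketch of the log-transform idea in Python/NumPy (hypothetical data; not from the original answer): fit ordinary least squares on log(price), then exponentiate the predictions, which are then guaranteed to be positive.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: price is strictly positive and roughly log-linear in one feature.
x = rng.uniform(0, 10, size=200)
price = np.exp(0.3 * x + rng.normal(scale=0.2, size=200))

# Ordinary least squares on log(price) instead of price itself.
A = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(A, np.log(price), rcond=None)

# Back-transform: exp of any real number is positive, so the predicted
# prices can never be negative, unlike a plain linear fit on price.
pred = np.exp(A @ coef)
assert (pred > 0).all()
```

One caveat worth knowing: predictions back-transformed this way target the conditional median of price rather than its mean, which is often acceptable but worth stating.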
18,904 | F statistic, F-critical value, and P-value | Think about if you have 2 friends who are both arguing over which one lives farther from work/school. You offer to settle the debate and ask them to measure how far they have to travel between home and work. They both report back to you, but one reports in miles and the other reports in kilometers, so you cannot compare the 2 numbers directly. You can convert the miles to kilometers or the kilometers to miles and make the comparison, which conversion you make does not matter, you will come to the same decision either way.
It is similar with test statistics: you cannot compare your alpha value to the F-statistic directly. You need to either convert alpha to a critical value and compare the F-statistic to the critical value, or convert your F-statistic to a p-value and compare that p-value to alpha.
Alpha is chosen ahead of time (computers often default to 0.05 if you don't set it otherwise) and represents your willingness to falsely reject the null hypothesis if it is true (type I error). The F-statistic is computed from the data and represents how much the variability among the means exceeds that expected due to chance. An F-statistic greater than the critical value is equivalent to a p-value less than alpha and both mean that you reject the null hypothesis.
We don't compare the F-statistic to 1 because it can be greater than 1 due only to chance, it is only when it is greater than the critical value that we say it is unlikely to be due to chance and would rather reject the null hypothesis.
In the classes that I teach I have found that the students who are not quite as young as the others and are returning to school after working for a while often ask the best questions and are more interested in what they can actually do with the answers (rather than just worrying if it is on the test), so don't be afraid to ask. | F statistic, F-critical value, and P-value | Think about if you have 2 friends who are both arguing over which one lives farther from work/school. You offer to settle the debate and ask them to measure how far they have to travel between home a | F statistic, F-critical value, and P-value
Think about if you have 2 friends who are both arguing over which one lives farther from work/school. You offer to settle the debate and ask them to measure how far they have to travel between home and work. They both report back to you, but one reports in miles and the other reports in kilometers, so you cannot compare the 2 numbers directly. You can convert the miles to kilometers or the kilometers to miles and make the comparison, which conversion you make does not matter, you will come to the same decision either way.
It is similar with test statistics: you cannot compare your alpha value to the F-statistic directly. You need to either convert alpha to a critical value and compare the F-statistic to the critical value, or convert your F-statistic to a p-value and compare that p-value to alpha.
Alpha is chosen ahead of time (computers often default to 0.05 if you don't set it otherwise) and represents your willingness to falsely reject the null hypothesis if it is true (type I error). The F-statistic is computed from the data and represents how much the variability among the means exceeds that expected due to chance. An F-statistic greater than the critical value is equivalent to a p-value less than alpha and both mean that you reject the null hypothesis.
We don't compare the F-statistic to 1 because it can be greater than 1 due only to chance, it is only when it is greater than the critical value that we say it is unlikely to be due to chance and would rather reject the null hypothesis.
In the classes that I teach I have found that the students who are not quite as young as the others and are returning to school after working for a while often ask the best questions and are more interested in what they can actually do with the answers (rather than just worrying if it is on the test), so don't be afraid to ask. | F statistic, F-critical value, and P-value
Think about if you have 2 friends who are both arguing over which one lives farther from work/school. You offer to settle the debate and ask them to measure how far they have to travel between home a |
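The "unit conversion" above can be made concrete with SciPy (the degrees of freedom and F-statistic below are illustrative, not from the original answer): converting alpha to a critical value and converting the F-statistic to a p-value always lead to the same decision.

```python
from scipy import stats

alpha = 0.05
dfn, dfd = 3, 36   # example numerator/denominator degrees of freedom
F = 4.2            # example F-statistic from an ANOVA

# "Convert alpha to a critical value" (kilometers -> miles) ...
F_crit = stats.f.ppf(1 - alpha, dfn, dfd)

# ... or "convert the F-statistic to a p-value" (miles -> kilometers).
p = stats.f.sf(F, dfn, dfd)

# Either conversion yields the same decision about the null hypothesis.
assert (F > F_crit) == (p < alpha)
print(F_crit, p)
```

Both comparisons reject here, and they must always agree, because `ppf` and `sf` are inverse views of the same F distribution.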
18,905 | F statistic, F-critical value, and P-value | So in short: reject the null hypothesis when your p-value is smaller than your alpha level. Equivalently, you should also reject the null if the critical F-value is smaller than your F-value. The F-value should always be used along with the p-value in deciding whether your results are significant enough to reject the null hypothesis: a large F-value (one exceeding the critical value) means something is significant, and a small p-value says the same thing. The F-statistic compares the joint effect of all the variables together. To put it simply, reject the null hypothesis only if your alpha level is larger than your p-value.
Source:
http://www.statisticshowto.com/f-value-one-way-anova-reject-null-hypotheses/ | F statistic, F-critical value, and P-value | So in short, Reject the null when your p value is smaller than your alpha level. You should also reject the null if your critical f value is smaller than your F Value, you should also reject the null | F statistic, F-critical value, and P-value
So in short: reject the null hypothesis when your p-value is smaller than your alpha level. Equivalently, you should also reject the null if the critical F-value is smaller than your F-value. The F-value should always be used along with the p-value in deciding whether your results are significant enough to reject the null hypothesis: a large F-value (one exceeding the critical value) means something is significant, and a small p-value says the same thing. The F-statistic compares the joint effect of all the variables together. To put it simply, reject the null hypothesis only if your alpha level is larger than your p-value.
Source:
http://www.statisticshowto.com/f-value-one-way-anova-reject-null-hypotheses/ | F statistic, F-critical value, and P-value
So in short, Reject the null when your p value is smaller than your alpha level. You should also reject the null if your critical f value is smaller than your F Value, you should also reject the null |
18,906 | F statistic, F-critical value, and P-value | I had read the post you recommended, however I felt that it had got a problem and I still don't understand. I captured its content and attached as an image bellow.
Could you help to explain it clearly? | F statistic, F-critical value, and P-value | I had read the post you recommended, however I felt that it had got a problem and I still don't understand. I captured its content and attached as an image bellow.
Could you help to explain it clearly | F statistic, F-critical value, and P-value
I have read the post you recommended; however, I felt that it had a problem and I still don't understand. I captured its content and attached it as an image below.
Could you help to explain it clearly? | F statistic, F-critical value, and P-value
I had read the post you recommended, however I felt that it had got a problem and I still don't understand. I captured its content and attached as an image bellow.
Could you help to explain it clearly |
18,907 | For a random matrix, shouldn't a SVD explain nothing at all? What am I doing wrong? | The first PC is explaining that the variables are not centered around zero. Scaling first or centering your random variables around zero will have the result you expect. For example, either of these:
m <- matrix(runif(10000,min=0,max=25), nrow=100,ncol=100)
m <- scale(m, scale=FALSE)
m <- matrix(runif(10000,min=-25,max=25), nrow=100,ncol=100) | For a random matrix, shouldn't a SVD explain nothing at all? What am I doing wrong? | The first PC is explaining that the variables are not centered around zero. Scaling first or centering your random variables around zero will have the result you expect. For example, either of these | For a random matrix, shouldn't a SVD explain nothing at all? What am I doing wrong?
The first PC is explaining that the variables are not centered around zero. Scaling first or centering your random variables around zero will have the result you expect. For example, either of these:
m <- matrix(runif(10000,min=0,max=25), nrow=100,ncol=100)
m <- scale(m, scale=FALSE)
m <- matrix(runif(10000,min=-25,max=25), nrow=100,ncol=100) | For a random matrix, shouldn't a SVD explain nothing at all? What am I doing wrong?
The first PC is explaining that the variables are not centered around zero. Scaling first or centering your random variables around zero will have the result you expect. For example, either of these |
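The same effect is easy to reproduce in Python/NumPy (a sketch mirroring the R example, not part of the original answer): before centering, the first principal component mostly captures the common mean of about 12.5, so it appears to "explain" most of the variance; after centering, no component dominates.

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(0, 25, size=(100, 100))  # analogue of runif(10000, min=0, max=25)

def pc1_share(a):
    # Fraction of total variance carried by the first singular value.
    s = np.linalg.svd(a, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

# Uncentered: the leading singular vector aligns with the constant mean offset.
print(pc1_share(m))                      # large share, well above 50%

# Centered: the spectrum flattens out; PC1 carries only a few percent.
print(pc1_share(m - m.mean(axis=0)))
```

The centered share is a few percent rather than exactly 1% per component because the largest eigenvalue of a random covariance matrix sits at the upper edge of its sampling distribution, not at the average.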
18,908 | For a random matrix, shouldn't a SVD explain nothing at all? What am I doing wrong? | I'll add a more visual answer to your question, through use of a null model comparison. The procedure randomly shuffles the data in each column to preserve the overall variance while covariance between variables (columns) is lost. This is performed several times and the resulting distribution of singular values in the randomized matrix is compared to the original values.
I use prcomp instead of svd for the matrix decomposition, but the results are similar:
set.seed(1)
m <- matrix(runif(10000,min=0,max=25), nrow=100,ncol=100)
S <- svd(scale(m, center = TRUE, scale=FALSE))
P <- prcomp(m, center = TRUE, scale=FALSE)
plot(S$d, P$sdev) # linearly related
The null model comparison is performed on the centered matrix below:
library(sinkr) # https://github.com/marchtaylor/sinkr
# centred data
Pnull <- prcompNull(m, center = TRUE, scale=FALSE, nperm = 100)
Pnull$n.sig
boxplot(Pnull$Lambda[,1:20], ylim=range(Pnull$Lambda[,1:20], Pnull$Lambda.orig[1:20]), outline=FALSE, col=8, border="grey50", log="y", main=paste("m (center=TRUE); n sig. =", Pnull$n.sig))
lines(apply(Pnull$Lambda, 2, FUN=quantile, probs=0.95))
points(Pnull$Lambda.orig[1:20], pch=16)
The following is a boxplot of the permuted matrix, with the 95 % quantile of each singular value shown as the solid line. The original values of the PCA of m are the dots, all of which lie beneath the 95 % line - thus their amplitude is indistinguishable from random noise.
The same procedure can be done on the un-centred version of m with the same result - No significant singular values:
# un-centred data
Pnull <- prcompNull(m, center = FALSE, scale=FALSE, nperm = 100)
Pnull$n.sig
boxplot(Pnull$Lambda[,1:20], ylim=range(Pnull$Lambda[,1:20], Pnull$Lambda.orig[1:20]), outline=FALSE, col=8, border="grey50", log="y", main=paste("m (center=FALSE); n sig. =", Pnull$n.sig))
lines(apply(Pnull$Lambda, 2, FUN=quantile, probs=0.95))
points(Pnull$Lambda.orig[1:20], pch=16)
For comparison, let's look at a dataset with a non-random dataset: iris
# iris dataset example
m <- iris[,1:4]
Pnull <- prcompNull(m, center = TRUE, scale=FALSE, nperm = 100)
Pnull$n.sig
boxplot(Pnull$Lambda, ylim=range(Pnull$Lambda, Pnull$Lambda.orig), outline=FALSE, col=8, border="grey50", log="y", main=paste("m (center=TRUE); n sig. =", Pnull$n.sig))
lines(apply(Pnull$Lambda, 2, FUN=quantile, probs=0.95))
points(Pnull$Lambda.orig[1:20], pch=16)
Here, the 1st singular value is significant, and explains over 92 % of the total variance:
P <- prcomp(m, center = TRUE)
P$sdev^2 / sum(P$sdev^2)
# [1] 0.924618723 0.053066483 0.017102610 0.005212184 | For a random matrix, shouldn't a SVD explain nothing at all? What am I doing wrong? | I'll add a more visual answer to your question, through use of a null model comparison. The procedure randomly shuffles the data in each column to preserve the overall variance while covariance betwee | For a random matrix, shouldn't a SVD explain nothing at all? What am I doing wrong?
I'll add a more visual answer to your question, through use of a null model comparison. The procedure randomly shuffles the data in each column to preserve the overall variance while covariance between variables (columns) is lost. This is performed several times and the resulting distribution of singular values in the randomized matrix is compared to the original values.
I use prcomp instead of svd for the matrix decomposition, but the results are similar:
set.seed(1)
m <- matrix(runif(10000,min=0,max=25), nrow=100,ncol=100)
S <- svd(scale(m, center = TRUE, scale=FALSE))
P <- prcomp(m, center = TRUE, scale=FALSE)
plot(S$d, P$sdev) # linearly related
The null model comparison is performed on the centered matrix below:
library(sinkr) # https://github.com/marchtaylor/sinkr
# centred data
Pnull <- prcompNull(m, center = TRUE, scale=FALSE, nperm = 100)
Pnull$n.sig
boxplot(Pnull$Lambda[,1:20], ylim=range(Pnull$Lambda[,1:20], Pnull$Lambda.orig[1:20]), outline=FALSE, col=8, border="grey50", log="y", main=paste("m (center=TRUE); n sig. =", Pnull$n.sig))
lines(apply(Pnull$Lambda, 2, FUN=quantile, probs=0.95))
points(Pnull$Lambda.orig[1:20], pch=16)
The following is a boxplot of the permuted matrix, with the 95 % quantile of each singular value shown as the solid line. The original values of the PCA of m are the dots, all of which lie beneath the 95 % line - thus their amplitude is indistinguishable from random noise.
The same procedure can be done on the un-centred version of m with the same result - No significant singular values:
# un-centred data
Pnull <- prcompNull(m, center = FALSE, scale=FALSE, nperm = 100)
Pnull$n.sig
boxplot(Pnull$Lambda[,1:20], ylim=range(Pnull$Lambda[,1:20], Pnull$Lambda.orig[1:20]), outline=FALSE, col=8, border="grey50", log="y", main=paste("m (center=FALSE); n sig. =", Pnull$n.sig))
lines(apply(Pnull$Lambda, 2, FUN=quantile, probs=0.95))
points(Pnull$Lambda.orig[1:20], pch=16)
For comparison, let's look at a dataset with a non-random dataset: iris
# iris dataset example
m <- iris[,1:4]
Pnull <- prcompNull(m, center = TRUE, scale=FALSE, nperm = 100)
Pnull$n.sig
boxplot(Pnull$Lambda, ylim=range(Pnull$Lambda, Pnull$Lambda.orig), outline=FALSE, col=8, border="grey50", log="y", main=paste("m (center=TRUE); n sig. =", Pnull$n.sig))
lines(apply(Pnull$Lambda, 2, FUN=quantile, probs=0.95))
points(Pnull$Lambda.orig[1:20], pch=16)
Here, the 1st singular value is significant, and explains over 92 % of the total variance:
P <- prcomp(m, center = TRUE)
P$sdev^2 / sum(P$sdev^2)
# [1] 0.924618723 0.053066483 0.017102610 0.005212184 | For a random matrix, shouldn't a SVD explain nothing at all? What am I doing wrong?
I'll add a more visual answer to your question, through use of a null model comparison. The procedure randomly shuffles the data in each column to preserve the overall variance while covariance betwee |
18,909 | Cost function for validating Poisson regression models | Assuming nothing special in your particular case, I think there is a good argument for either using the default (Mean Square Error) or use the mean of the error of the logs, or even the chi-squared error.
The purpose of the cost function is to express how "upset" you are with wrong predictions, specifically what "wrongness" bothers you most. This is particularly important for binary responses, but can matter in any situation.
Mean Square Error (of responses)
$C = \frac{1}{n}\sum_i (Y_i-\hat Y_i)^2$
Using the MSE you are equally sensitive to errors from above and below and equally sensitive for large and small predictions. This is a pretty standard thing to do, and so I don't think would be frowned on in most situations.
Mean Square Error (of log responses)
$C = \frac{1}{n}\sum_i (\ln Y_i-\ln \hat Y_i)^2$
Because you are working with count data, it could be argued that you are neither symmetric nor size-indifferent. Being out by 10 counts for a prediction of 10 is very different from being out by 10 for a prediction of 1000. This is a somewhat "canonical" cost function, because you have matched the cost up to the link function. This ensures that the costs match the variance distribution being assumed in the model.
Chi-Squared Error
$C = \frac{1}{n}\sum_i \frac{(Y_i-\hat Y_i)^2}{\hat Y_i}$
A third way would be to use the chi-squared error. This could be particularly appealing if you are comparing your GLM to other count-based models - particularly if there are factors in your GLM. Like the log-response error, this will scale with size, but it is symmetric around the predicted count. You are now evaluating goodness of fit based on percentage error.
On The Discreteness
The question cites the documentation example where they have a binary response variable, so use a different cost function. The issue for a binary response is that the GLM will forecast a real number between 0 and 1, even though the response is always exactly 0 or 1. It is perfectly valid to say that the closer that number is to the correct response the better the forecast, but often people don't want this. The reasoning being that one often must act either as though it is 0 or 1, and so will take anything less than 0.5 as a forecast for 0. In that case, it makes sense simply to count the number of "wrong" forecasts. The argument here is that for a True/False question you can only ever be right or wrong - there is no gradation of wrongness.
In your case you have count data. Here it is far more common to accept predictions that are not on the same support as the response. A prediction of 2.4 children per family, for example, or 9.7 deaths per year. Usually one would not try to do anything about this because it is not about being "right" or "wrong", just as close as you can get. If you really must have a prediction that is an integer though, perhaps because you have a very very low count rate, then there is no reason you can't round the prediction first and count the "whole number" error. In this case, the three expressions above still apply, but you simply need to round $\hat Y$ first. | Cost function for validating Poisson regression models | Assuming nothing special in your particular case, I think there is a good argument for either using the default (Mean Square Error) or use the mean of the error of the logs, or even the chi-squared er | Cost function for validating Poisson regression models
Assuming nothing special in your particular case, I think there is a good argument for either using the default (Mean Square Error) or use the mean of the error of the logs, or even the chi-squared error.
The purpose of the cost function is to express how "upset" you are with wrong predictions, specifically what "wrongness" bothers you most. This is particularly important for binary responses, but can matter in any situation.
Mean Square Error (of responses)
$C = \frac{1}{n}\sum_i (Y_i-\hat Y_i)^2$
Using the MSE you are equally sensitive to errors from above and below and equally sensitive for large and small predictions. This is a pretty standard thing to do, and so I don't think would be frowned on in most situations.
Mean Square Error (of log responses)
$C = \frac{1}{n}\sum_i (\ln Y_i-\ln \hat Y_i)^2$
Because you are working with count data, it could be argued that you are neither symmetric nor size-indifferent. Being out by 10 counts for a prediction of 10 is very different from being out by 10 for a prediction of 1000. This is a somewhat "canonical" cost function, because you have matched the cost up to the link function. This ensures that the costs match the variance distribution being assumed in the model.
Chi-Squared Error
$C = \frac{1}{n}\sum_i \frac{(Y_i-\hat Y_i)^2}{\hat Y_i}$
A third way would be to use the chi-squared error. This could be particularly appealing if you are comparing your GLM to other count-based models - particularly if there are factors in your GLM. Like the log-response error, this will scale with size, but it is symmetric around the predicted count. You are now evaluating goodness of fit based on percentage error.
On The Discreteness
The question cites the documentation example where they have a binary response variable, so use a different cost function. The issue for a binary response is that the GLM will forecast a real number between 0 and 1, even though the response is always exactly 0 or 1. It is perfectly valid to say that the closer that number is to the correct response the better the forecast, but often people don't want this. The reasoning being that one often must act either as though it is 0 or 1, and so will take anything less than 0.5 as a forecast for 0. In that case, it makes sense simply to count the number of "wrong" forecasts. The argument here is that for a True/False question you can only ever be right or wrong - there is no gradation of wrongness.
In your case you have count data. Here it is far more common to accept predictions that are not on the same support as the response. A prediction of 2.4 children per family for example, or 9.7 deaths per year. Usually one would not try to do anything about this because it is not about being "right" or "wrong", just as close as you can get. If you really must have a prediction that is an integer though, perhaps because you have a very very low count rate, then there is no reason you can't round the prediction first and count the "whole number" error. In this case, the three expressions above still apply, but you simply need to round $\hat Y$ first.
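A sketch of the two discrete options discussed above, assuming NumPy (the 0.5 threshold and the function names are illustrative, not part of any library API):

```python
import numpy as np

def misclassification_rate(y, p_hat, threshold=0.5):
    """Binary case: count forecasts on the wrong side of the threshold."""
    return float(np.mean((p_hat >= threshold) != (y == 1)))

def rounded_mse(y, y_hat):
    """Count case: round the prediction to an integer first, then score."""
    return float(np.mean((y - np.round(y_hat)) ** 2))
```

The same rounding-first idea carries over to the log-MSE and chi-squared costs by substituting the rounded prediction into those expressions.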
18,910 | The operation of chance in a deterministic world | Interesting thought (+1).
In cases 1) and 2), the problem is the same: we do not have complete information. And probability is a measure of the lack of information.
1) The puny causes may be purely deterministic, but which particular causes operate is impossible to know by a deterministic process. Think of molecules in a gas. The laws of mechanics apply, so what is random here? The information hidden to us: where is which molecule with what speed. So the CLT applies, not because there is randomness in the system, but because there is randomness in our representation of the system.
2) There is a time component in HMM that is not necessarily present in this case. My interpretation is the same as before, the system may be non random, but our access to its state has some randomness.
EDIT: I don't know if Poincare was thinking of a different statistical approach for these two cases. In case 1) we know the variables, but we cannot measure them because there are too many and they are too small. In case 2) we don't know the variables. Both ways, we end up making assumptions and modeling the observables as best we can, and quite often we assume Normality in case 2).
But still, if there was one difference, I think it would be emergence. If all systems were determined by sums of puny causes then all random variables of the physical world would be Gaussian. Clearly, this is not the case. Why? Because scale matters. Why? Because new properties emerge from interactions at smaller scale, and these new properties need not be Gaussian. Actually, we have no statistical theory for emergence (as far as I know) but maybe one day we will. Then it will be justified to have different statistical approaches for cases 1) and 2).
18,911 | The operation of chance in a deterministic world | I think you are reading too much into the statement. It all seems to lie under the premise that the world is deterministic and that humans model it probabilistically because it is easier to approximate what is going on that way than to go through all the details of the physics and any other mathematical equations that describe it. I think that there has been a long-standing debate about determinism versus random effects, particularly between physicists and statisticians. I was particularly struck by the following preceding sentences to what you bolded: "Even a coin flip can be predicted from the starting conditions and the laws of physics, and a skilled magician can exploit those laws to throw heads every time." When I was a graduate student at Stanford in the late 1970s, Persi Diaconis, a statistician and a magician, and Joe Keller, a physicist, actually tried to apply the laws of physics to a coin flip to determine what the outcome would be based on the initial conditions regarding whether or not heads is face up and exactly how the force of the finger flip strikes the coin. I think they may have worked it out. But to think a magician, even with the magical training and statistical knowledge of a Persi Diaconis, could flip the coin and have it come up heads every time is preposterous. I believe they found that it is impossible to replicate the initial conditions, and I think chaos theory applies. Small perturbations in the initial condition have large effects on the flight of the coin and make the outcome unpredictable. As a statistician I would say that even if the world is deterministic, stochastic models do a better job of predicting outcomes than complex deterministic laws. When the physics is simple, deterministic laws can and should be used.
For example, Newton's gravitational law works well at determining the speed that an object has when it hits the ground after being dropped from 10 feet above the ground, and using the equation d=gt$^2$/2 you can solve for the time it takes to complete the fall very accurately as well, since the gravitational constant g has been determined to a high level of accuracy and the equation applies almost exactly.
18,912 | The operation of chance in a deterministic world | earlier version had an incorrect $2^{-N}$ term in the first equation, and was missing a $2^N$ in the third equation. Thanks to Cardinal for noting this.
This is not a full answer, but too long for a comment. Just to give some mathematical intuition about point 1), we have the following limit in large $N$ (using Stirling's approximation, so $a_n \sim b_n$ means $\lim_{n\to\infty}\frac{a_n}{b_n}=1$):
$${N\choose Nf}\sim\sqrt{\frac{1}{2\pi Nf(1-f)}}\exp\left(-NH(f)\right)$$
Where $H(f)=f\log\left(f\right)+(1-f)\log\left(1-f\right)$ is the (negative of the) entropy function. We also have a second-order Taylor series for $H(f)$ (about the mode of $\frac{1}{2}$) of:
$$H(f)\approx-\log\left(2\right)+2(f-\frac{1}{2})^2$$
So we also have:
$${N\choose Nf}\sim 2^N\sqrt{\frac{1}{2\pi Nf(1-f)}}\exp\left(-\frac{2}{N}(Nf-\frac{N}{2})^2\right)$$
The meaning of these limits is that any procedure which consists of counting the possible ways in which something could happen (such as causal-effect analysis) leads to the normal distribution. This does not depend on $f$ being random or deterministic. What the central limit theorem says is that the majority of ways in which a given set of events could happen is well approximated by a normal distribution.
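A quick numerical check of the first limit (the choice $N=1000$, $f=0.45$ is arbitrary; for moderate $N$ the ratio of the asymptotic form to the exact binomial coefficient is already very close to 1):

```python
import math

N, f = 1000, 0.45
k = round(N * f)  # equals N*f exactly for this choice

# H(f) = f log f + (1-f) log(1-f), as defined above
H = f * math.log(f) + (1 - f) * math.log(1 - f)

approx = math.sqrt(1 / (2 * math.pi * N * f * (1 - f))) * math.exp(-N * H)
exact = math.comb(N, k)  # exact binomial coefficient as a big integer
ratio = approx / exact   # should be close to 1
```

The next-order Stirling correction is of size $O(1/N)$, so for $N = 1000$ the agreement is already at roughly the fourth decimal place.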
18,913 | The operation of chance in a deterministic world | The quote from Pinker's book and the idea of a deterministic world completely ignore Quantum Mechanics and the Heisenberg Uncertainty Principle. Imagine putting a small amount of something radioactive near a detector and arranging the amounts and distances so that there will be a 50% chance of detecting a decay during a pre-determined time interval. Now connect the detector to a relay that will do something highly significant if a decay is detected and operate the device once and only once.
You have now created a situation where the future is inherently unpredictable. (This example is drawn from one described by whoever taught sophomore or junior year physics at MIT back in the middle 1960's.)
18,914 | Best distance measure to use to compare vectors of angles | You can calculate the covariance matrix for each set and then calculate the Hausdorff distance between the two sets using the Mahalanobis distance.
The Mahalanobis distance is a useful way of determining similarity of an unknown sample set to a known one. It differs from Euclidean distance in that it takes into account the correlations of the data set and is scale-invariant.
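A sketch of that combination, assuming plain NumPy (the helper names are illustrative, and how you estimate the covariance to invert is left to you):

```python
import numpy as np

def mahalanobis(x, y, cov_inv):
    """Mahalanobis distance between two vectors, given an inverse covariance."""
    d = x - y
    return float(np.sqrt(d @ cov_inv @ d))

def hausdorff(A, B, dist):
    """Hausdorff distance between two point sets under a given metric."""
    d_ab = max(min(dist(a, b) for b in B) for a in A)
    d_ba = max(min(dist(b, a) for a in A) for b in B)
    return max(d_ab, d_ba)

# With the identity inverse covariance, mahalanobis reduces to the
# ordinary Euclidean distance.
```

In practice you would pass `np.linalg.inv` of an estimated (e.g. pooled) covariance matrix as `cov_inv`.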
18,915 | Best distance measure to use to compare vectors of angles | What are you trying to do with the nearest neighbor information?
I would answer that question, and then compare the different distance measures in light of that.
For example, say you are trying to classify poses based on the joint configuration, and would like joint vectors from the same pose to be close together. A straightforward way to evaluate the suitability of different distance metrics is to use each of them in a KNN classifier, and compare the out-of-sample accuracies of each of the resulting models.
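A minimal from-scratch version of that evaluation (a 1-nearest-neighbour classifier scored by leave-one-out accuracy, so no particular ML library is assumed; the two candidate metrics and the toy "poses" are illustrative):

```python
import numpy as np

def loo_1nn_accuracy(X, y, dist):
    """Leave-one-out accuracy of a 1-NN classifier under a given metric."""
    n = len(X)
    correct = 0
    for i in range(n):
        d = [dist(X[i], X[j]) if j != i else np.inf for j in range(n)]
        correct += int(y[int(np.argmin(d))] == y[i])
    return correct / n

euclidean = lambda a, b: float(np.linalg.norm(a - b))
manhattan = lambda a, b: float(np.sum(np.abs(a - b)))

# Two tight clusters of toy joint-angle vectors, one per pose label.
X = np.array([[0.0, 0.0], [0.1, 0.2], [2.0, 2.0], [2.1, 1.9]])
y = np.array([0, 0, 1, 1])
```

The metric whose accuracy holds up best out-of-sample is the one to prefer; with real data you would use a proper train/test split or cross-validation rather than leave-one-out on four points.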
18,916 | Best distance measure to use to compare vectors of angles | This sounds like it is similar to a certain application of Information Retrieval (IR). A few years ago I attended a talk about gait recognition that sounds similar to what you are doing. In Information Retrieval, "documents" (in your case: a person's angle data) are compared to some query (which in your case could be "is there a person with angle data (.., ..)"). Then the documents are listed in the order of the one that matches the closest down to the one that matches the least. That, in turn, means that one central component of IR is putting a document in some kind of vector space (in your case: angle space) and comparing it to one specific query or example document, or measuring their distance. (See below.) If you have a sound definition of the distance between two individual vectors, all you have to do is come up with a measure for the distance of two data sets. (Traditionally in IR the distance in the vector space model is calculated either by the cosine measure or Euclidean distance, but I don't remember how they did it in that case.)
In IR there is also a mechanism called "relevance feedback" that, conceptually, works with the distance of two sets of documents. That mechanism normally uses a measure of distance that sums up all individual distances between all pairs of documents (or in your case: person vectors). Maybe that is of use to you.
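A sketch of those two ingredients, assuming NumPy: the cosine measure as a vector-space distance, plus a set-to-set distance that averages all pairwise distances in the spirit of relevance feedback (both function names are illustrative):

```python
import numpy as np

def cosine_distance(a, b):
    """1 minus the cosine similarity of two vectors."""
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def set_distance(A, B, dist=cosine_distance):
    """Average pairwise distance between two sets of vectors."""
    return sum(dist(a, b) for a in A for b in B) / (len(A) * len(B))
```

Swapping in a Euclidean or Mahalanobis `dist` gives the other variants mentioned here.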
The following page has some papers that seem relevant to your issue: http://www.mpi-inf.mpg.de/~mmueller/index_publications.html
Especially this one http://www.mpi-inf.mpg.de/~mmueller/publications/2006_DemuthRoederMuellerEberhardt_MocapRetrievalSystem_ECIR.pdf seems interesting.
The talk of Müller that I attended mentions similarity measures from Kovar and Gleicher called "point cloud" (see http://portal.acm.org/citation.cfm?id=1186562.1015760&coll=DL&dl=ACM) and one called "quaternions".
Hope it helps.
18,917 | Best distance measure to use to compare vectors of angles | This problem is called Distance Metric Learning. A large family of distance metrics (the Mahalanobis-type metrics) can be represented as $\sqrt{(x-y)^tA(x-y)}$ where $A$ is positive semi-definite. Methods in this sub-area learn the optimal $A$ for your data. In fact, if the optimal $A$ happens to be an identity matrix, it is okay to use Euclidean distances. If it is the inverse covariance, it would be optimal to use the Mahalanobis distance, and so on and so forth. Hence, a distance metric learning method must be used to learn the optimal $A$, and thereby the right distance metric.
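A sketch of that parametrised family, assuming NumPy (in practice $A$ would come from a metric-learning method; here it is just supplied by hand to show how the choice of $A$ changes the distance):

```python
import numpy as np

def learned_metric(x, y, A):
    """Distance sqrt((x - y)^T A (x - y)); A must be positive semi-definite."""
    d = x - y
    return float(np.sqrt(d @ A @ d))

# A = identity recovers the Euclidean distance;
# A = inverse covariance recovers the Mahalanobis distance.
```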
18,918 | Best distance measure to use to compare vectors of angles | One problem with using the angles as a proxy for shape is that small perturbations in the angles can lead to large perturbations in the shape. Further, different angle configurations could result in the same (or similar) shape.
18,919 | Random numbers and the multicore package | I'm not sure how the foreach works (from the doMC package, I guess), but in multicore if you did something like mclapply the mc.set.seed parameter defaults to TRUE which gives each process a different seed (e.g. mclapply(1:1000, rnorm)). I assume your code is translated into something similar, i.e. it boils down to calls to parallel which has the same convention.
But also see page 16 of the slides by Charlie Geyer, which recommends the rlecuyer package for parallel independent streams with theoretical guarantees. Geyer's page also has sample code in R for the different setups.
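The same idea, one independent and reproducible stream per worker, can be sketched in Python using NumPy's seed spawning, as a rough analogue of what mc.set.seed/rlecuyer provide in R (the seed 1234 and the four workers are arbitrary):

```python
import numpy as np

# Spawn four statistically independent child seeds from one root seed,
# and give each "worker" its own generator.
root = np.random.SeedSequence(1234)
workers = [np.random.default_rng(child) for child in root.spawn(4)]
draws = [rng.standard_normal(5) for rng in workers]
```

Because spawning is deterministic given the root seed, rerunning with the same root reproduces every worker's stream, which is exactly the property you want for parallel simulations.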
18,920 | Random numbers and the multicore package | You might want to look at page 5 of this document and of this document. By default, under R, each core sets its own seed (I seem to recall using high-precision time).
NB: if you use foreach() from Revolution Computing under Windows then I suspect something sensible will not happen. Windows is not POSIX compliant, and this should pose problems when each core needs a different high-precision starting time to set its seed (unfortunately I don't have Windows handy so I can't check this empirically).
NB: if you use foreach() from Rev | Random numbers and the multicore package
You might want to look at page 5 of this document and of this document. By default, under R, each core sets is own seed (i seem to recall using high precision time).
NB: if you use foreach() from Revolution-computing under windows then i suspect something sensible will not happen. Windows is not POSIX compliant, and this should pose problems when each core needs a different high prec. starting time to set it's seed (unfortunately i don't have windows handy so i can't check this empirically). | Random numbers and the multicore package
You might want to look at page 5 of this document and of this document. By default, under R, each core sets is own seed (i seem to recall using high precision time).
NB: if you use foreach() from Rev |
18,921 | Statistical similarity of time series | The Euclidean distance is a common metric in machine learning. The following slides provide a good overview of this area along with references:
Making Time-series Classification More Accurate Using Learned Constraints
Introduction to Machine Learning Research on Time Series
Also see the references on Keogh's benchmarks page for time series classification:
UCR Time Series Classification/Clustering Page
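As a concrete baseline, the Euclidean distance between two equal-length series, applied after z-normalisation, which is the usual convention in this literature (the normalisation step is a common practice rather than part of the definition):

```python
import numpy as np

def znorm_euclidean(a, b):
    """Euclidean distance between z-normalised series of equal length."""
    za = (a - a.mean()) / a.std()
    zb = (b - b.mean()) / b.std()
    return float(np.linalg.norm(za - zb))
```

Two series that differ only by an affine rescaling come out at distance zero, which is usually what you want when comparing shapes.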
18,922 | Statistical similarity of time series | If you have a specific model you wish to compare against: I would recommend Least-squares as a metric to minimize and score possible parameter values against a specific dataset. All you basically have to do is plug in your parameter estimates, use those to generate predicted values, and compute the average squared deviation from the true values.
However, you might consider turning your question around slightly: "Which model would best fit my data?" In that case I would suggest making an assumption of a normally distributed error term, which one could argue is akin to the least-squares assumption. Then, depending on your choice of model, you could make an assumption about how you think the other model parameters are distributed (assigning a Bayesian prior) and then use something like the MCMC package from R to sample from the distribution of the parameters. Then you could look at posterior means & variances to get an idea of which model has the best fit.
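A sketch of the least-squares scoring procedure described above (the linear toy model and the parameter values are purely illustrative):

```python
import numpy as np

def least_squares_score(predict, params, x, y):
    """Average squared deviation of model predictions from observations."""
    return float(np.mean((y - predict(x, *params)) ** 2))

# A hypothetical candidate model y = a*x + b and some toy data it fits exactly.
linear = lambda x, a, b: a * x + b
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
```

Lower scores mean better parameter estimates; comparing the minimised score across candidate models gives a crude model-comparison criterion.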
18,923 | Statistical similarity of time series | Your "simplistic first thought" of qualitatively representing just the directional movement is similar in spirit to Keogh's SAX algorithm for comparing time series. I'd recommend you take a look at it: Eamonn Keogh & Jessica Lin: SAX.
From your edit, it sounds like you're now thinking about tackling the problem differently, but you might find that SAX provides a piece of the puzzle.
18,924 | Statistical similarity of time series | While I'm a bit late to the party, if you are thinking about anything sinusoidal, wavelet transforms are a good tool to have in your pocket also. In theory, you can use wavelet transforms to decompose a sequence into various "parts" (e.g., waves of different shapes/frequencies, non-wave components such as trends, etc.). A closely related transform that is used a ton is the Fourier transform, but there's a lot of work in this area. I'd love to be able to recommend a current package, but I haven't done signal analysis work in quite a while. I recall some Matlab packages supporting functionality in this vein, however.
Another direction to go if you're only trying to find trends in cyclic data is something like the Mann-Kendall trend test. It's used a lot for things like detecting changes in weather or water quality, which have strong seasonal influences. It doesn't have the bells and whistles of some more advanced approaches, but since it's a veteran statistical test it is fairly easy to interpret and report.
18,925 | Expected ratio of x'Ax and x'AAx on a unit sphere? | For symmetric matrices $A, B$, the quantities $\mathcal R_A(x) = \frac{x^T A x}{x^T x}$ and $\mathcal R_{A, B}(x) = \frac{x^T Ax}{x^TBx}$ are known as the (generalized) Rayleigh quotient. A question about this was already asked here: Distribution of the Rayleigh quotient, and here: Expected value of Rayleigh quotient
The accepted answer refers to the 1992 book Quadratic Forms in Random Variables by Mathai and Provost.
There, on page 144, we are referred to the 1956 paper Quadratic Forms in Normally Distributed Random Variables by Gurland, where the distribution and the expectation of the generalized Rayleigh Quotient are discussed. Among other things, the author shows:
$$ \mathbf E\left[\frac{x^T Ax}{x^TBx}\right] = \sum_{j=0}^{n-1} \sum_{k=0}^{\infty} \frac{(-1)^{j+1}}{2^{j+2} v^{j+k+1}}c_{j} g_{k} B\left(j+k+1, \frac{3 n}{2}-j-1\right) $$
Here, $B(x, y)$ is the Beta Function and $v$, $c_j$ and $g_k$ are coefficients related to eigenvalues/characteristic polynomials of $A$ and $B$.
There are many other references giving different series/integral expansions/representations for the moments of $\mathcal R_{A, B}(x)$:
The Exact Moments of a Ratio of Quadratic Forms in Normal Variables (1986)
On the expectation of a ratio of quadratic forms in normal variables (1989)
On the moments of ratios of quadratic forms in normal random variables (2013)
None of which indicate that there is a general simple "closed form" for $\mathbf E[\mathcal R_{A, B}(x)]$.
Some simplification steps we can do in any case, given the eigenvalue decomposition $B=U\Lambda U^T$:
$$
\mathbf E_{x\sim\mathcal N(0,\mathbb I)}\left[\frac{x^T Ax}{x^TBx}\right]
= \mathbf E_{y\sim\mathcal N(0,\mathbb I)}\left[\frac{y^TU^T AUy}{y^T \Lambda y}\right]
= \mathbf E_{z\sim\mathcal N(0,\Lambda)}\left[\frac{z^T\Lambda ^{-1/2}U^T AU\Lambda ^{-1/2}z}{z^T z}\right]
$$
Letting $C=\Lambda ^{-1/2}U^T AU\Lambda ^{-1/2}$ (here $y=U^Tx$, $z=\Lambda^{1/2}y$, and $\Lambda$ is assumed invertible) and using linearity we have:
$$
\mathbf E_{z\sim\mathcal N(0,\Lambda)}\left[\frac{z^TCz}{z^T z}\right]
= \mathbf E_{z\sim\mathcal N(0,\Lambda)}\left[\left\langle C, \;\tfrac{zz^T}{z^T z}\right\rangle\right]
= \left\langle C, \; \mathbf E_{z\sim\mathcal N(0,\Lambda)}\left[\tfrac{zz^T}{z^T z}\right]\right\rangle
$$
Here, both $\mathbf E_{z\sim\mathcal N(0,\Lambda)}[zz^T] = \Lambda$ and $\mathbf E_{z\sim\mathcal N(0,\Lambda)}[z^Tz] = \operatorname{tr}(\Lambda)$ are trivial; however, numerical simulation suggests that $\mathbf E_{z\sim\mathcal N(0,\Lambda)}\left[\tfrac{zz^T}{z^T z}\right]$ is a diagonal matrix whose diagonal has some non-trivial, non-linear relationship w.r.t. $\Lambda$.
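A quick Monte Carlo check of that last claim (a sketch; the choice of $\Lambda$ is arbitrary). Note that $\mathbf E[zz^T/z^Tz]$ always has trace exactly 1, since every sample of $zz^T/z^Tz$ does, while its diagonal is visibly not the "linear" guess $\operatorname{diag}(\Lambda)/\operatorname{tr}(\Lambda)$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([1.0, 2.0, 10.0])                   # arbitrary example eigenvalues
z = rng.normal(size=(200_000, 3)) * np.sqrt(lam)   # z ~ N(0, diag(lam))

# Monte Carlo estimate of E[ z z^T / (z^T z) ]
norms = np.sum(z**2, axis=1, keepdims=True)
M = (z[:, :, None] * z[:, None, :] / norms[:, :, None]).mean(axis=0)

off_diag = M - np.diag(np.diag(M))   # off-diagonal part, close to 0
trace = np.trace(M)                  # equals 1 up to float error
linear_guess = lam / lam.sum()       # what a linear relationship would predict
```

The estimated diagonal of `M` differs noticeably from `linear_guess`, consistent with the non-linear dependence on $\Lambda$ noted above.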
18,926 | Expected ratio of x'Ax and x'AAx on a unit sphere? | $A\in\mathbb{R}^{n\times n}$ is symmetric positive definite, so there exists an orthonormal basis $U=u_1,...,u_n$ and scalars $\lambda_1,...,\lambda_n$ s.t. $A=UDU^T$, with $D=\begin{pmatrix}
\lambda_1 & & 0 \\
& \ddots & \\
0 & & \lambda_n \\
\end{pmatrix}$. This is the spectral decomposition.
With these, we can decompose $$x^TAx=x^TUDU^Tx$$ and more importantly
$$x^TA^2x=x^TUDU^TUDU^Tx.$$ As $U$ is orthogonal, $U^TU=I$, and thus we get the denominator as $x^TA^2x=x^TUD^2U^Tx$.
Now we denote $w=U^Tx$. As a transformation, we get $w\sim N(U^T\mu_x, U^T\Sigma_xU)$. Plug in $x\sim N(0,I_n)$ and we get $w\sim N(0,I_n)$, again using $U^TU=I$.
The numerator is $w^TDw=\sum_{i=1}^{n}{\lambda_iw_i^2}$. Denote $g=\sum_{i=1}^{n}{\lambda_i}$; this is simply a $\chi^2_{ng}$ variable ($w^Tw\sim\chi^2_n$, we sum $g$ of those). Similarly, the denominator is a $\chi^2_{nk}$ variable, where $k=\sum_{i=1}^{n}{\lambda_i^2}$.
Overall, $\frac{x^TAx}{x^TA^2x}=\frac{w^TDw}{w^TD^2w}$ is a ratio of $\chi^2_{ng}$ variable and a $\chi^2_{nk}$ variable, so it has a beta prime distribution with parameters $\left(\alpha=\frac{ng}{2},\beta=\frac{nk}{2}\right)$. Assuming $nk>2$, the mean is
$$ \frac{\alpha}{\beta-1} = \frac{\frac{ng}{2}}{\frac{nk}{2}-1}=\frac{ng}{nk-2} $$
That's the first moment of $\frac{x^TAx}{x^TA^2x}$. Hope this helps.
18,927 | Time series seasonality test | Before you test for seasonality you should reflect on which type of seasonality you have. Note that there are many different types of seasonality:
Additive vs. Multiplicative seasonality
Single vs. Multiple seasonalities
Seasonality with an even vs. uneven number of periods. Each year has twelve months, but 52.1429 weeks.
Trend vs. Seasonality: A seasonal pattern always appears in the same period, but a trend may appear a little bit later or earlier and not exactly every 5 years. One example of a trend is the business cycle.
One of the most common methods to detect seasonality is to decompose the time series into several components.
In R you can do this with the decompose() command from the preinstalled stats package or with the stl() command from the forecast package.
The following code is taken from A little book of R for time series
births <- scan("http://robjhyndman.com/tsdldata/data/nybirths.dat")
birthstimeseries <- ts(births, frequency = 12, start = c(1946,1))
birthstimeseriescomponents <- decompose(birthstimeseries)
plot(birthstimeseriescomponents)
You can check the single components with
birthstimeseriescomponents$seasonal
birthstimeseriescomponents$random
birthstimeseriescomponents$trend
Another method is to include seasonal dummies and to check whether they have significant p-values when you compute the regression. If the single months have significant coefficients, your monthly time series is seasonal.
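A minimal sketch of the seasonal-dummy idea (Python here for illustration, with simulated monthly data that has a December bump; in practice you would use a regression routine that reports p-values, such as lm() in R):

```python
import numpy as np

rng = np.random.default_rng(42)
n_years = 8
month = np.tile(np.arange(12), n_years)    # month index 0..11, repeated per year
seasonal = np.array([3.0 if m == 11 else 0.0 for m in range(12)])  # December bump
y = seasonal[month] + rng.normal(scale=0.5, size=12 * n_years)

# Dummy-code the months (no intercept, one dummy per month) and fit OLS.
X = (month[:, None] == np.arange(12)[None, :]).astype(float)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The December coefficient stands out against the near-zero others,
# which is the pattern a formal test would flag as seasonal.
```

With an intercept included instead, you would drop one dummy and read off the contrasts against the reference month.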
Another method to detect seasonality is either to plot the data itself or to plot the ACF (autocorrelation function). In our case you can easily notice that there is seasonality.
And last but not least, there are some "formal" hypothesis tests in order to detect seasonality, such as the Student t-test and the Wilcoxon signed-rank test.
18,928 | Time series seasonality test | In R you have the seastests package that includes several seasonality tests and the function isSeasonal() that conveniently combines several seasonality tests to indicate whether your time series is seasonal.
18,929 | Time series seasonality test | My thoughts are to check the amplitude of the:
ACF autocorrelation function
PACF partial autocorrelation function
Fourier Coefficients
(Fourier Coefficients are related to the ACF via the Wiener-Khinchin theorem.)
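The ACF amplitude check can be sketched directly (Python for illustration; the series is simulated with period 12): a large positive spike at the seasonal lag, paired with a strong negative value at the half-period lag, is the signature of a cycle.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function for lags 1..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / var for k in range(1, max_lag + 1)])

rng = np.random.default_rng(1)
t = np.arange(240)
series = np.sin(2 * np.pi * t / 12) + 0.3 * rng.normal(size=t.size)

r = acf(series, 24)
# r[11] is the lag-12 autocorrelation (large, positive for a 12-period cycle);
# r[5] is the lag-6 autocorrelation (strongly negative: anti-phase).
```

The PACF and the Fourier coefficients would show the same period through a spike at lag 12 and at frequency 1/12, respectively.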
18,930 | Is the mean (Bayesian) posterior estimate of $\theta$ a (Frequentist) unbiased estimator of $\theta$? | This is a meaningful question whose answer is well-known: when using a proper prior $\pi$ on $\theta$, the posterior mean $\delta^\pi(x) = \mathbb{E}^\pi[\theta|x]$ cannot be unbiased. Otherwise the integrated Bayes risk would be zero:
\begin{align*}
r(\pi; \delta^\pi) &= \overbrace{\mathbb{E}^\pi\{\underbrace{\mathbb{E}^X[(\delta^\pi(X)-\theta)^2|\theta]}_{\text{exp. under likelihood}}\}}^{\text{expectation under prior}}\\
&= \mathbb{E}^\pi\{\mathbb{E}^X[\delta^\pi(X)^2+\theta^2-2\delta^\pi(X)\theta|\theta]\}\\
&= \mathbb{E}^\pi\{\mathbb{E}^X[\delta^\pi(X)^2+\theta^2|\theta]\}-
\mathbb{E}^\pi\{\theta \mathbb{E}^X[\delta^\pi(X)|\theta]\}-\overbrace{\mathbb{E}^X\{\mathbb{E}^\pi[\theta|X]\delta^\pi(X)\}}^{\text{exp. under marginal}}\\
&= \mathbb{E}^\pi[\theta^2]+\underbrace{\mathbb{E}^X[\delta^\pi(X)^2]}_{\text{exp. under marginal}}
-\mathbb{E}^\pi[\theta^2]-\mathbb{E}^X[\delta^\pi(X)^2]\\
& = 0
\end{align*}
[Notations: $\mathbb{E}^X$ means that $X$ is the random variable to be integrated in this expectation, either under the likelihood (conditional on $\theta$) or the marginal (integrating out $\theta$), while $\mathbb{E}^\pi$ considers $\theta$ to be the random variable to be integrated. Note that $\mathbb{E}^X[\delta^\pi(X)]$ is an integral wrt the marginal, while $\mathbb{E}^X[\delta^\pi(X)|\theta]$ is an integral wrt the sampling distribution.]
The argument does not extend to improper priors like the flat prior (which is not uniform!) since the integrated Bayes risk is infinite. Hence, some generalised Bayes estimators may turn out to be unbiased, as for instance the MLE in the Normal mean problem which is also a Bayes posterior expectation under the flat prior. (But there is no general property of unbiasedness for improper priors!)
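A concrete illustration of the proper-prior case (my own example, not from the answer): with a Beta$(a,b)$ prior and $X\sim\mathrm{Bin}(n,p)$, the posterior mean is $\delta(X)=(X+a)/(n+a+b)$, and by linearity its frequentist expectation $(np+a)/(n+a+b)$ differs from $p$ at every $p\neq a/(a+b)$.

```python
# Exact computation (no simulation): frequentist expectation of the
# Beta-Binomial posterior mean, using E[X] = n*p and linearity.
def posterior_mean_expectation(n, a, b, p):
    """E_p[ (X + a) / (n + a + b) ] for X ~ Binomial(n, p)."""
    return (n * p + a) / (n + a + b)

n, a, b = 10, 1.0, 1.0   # uniform Beta(1,1) prior, hypothetical numbers
e_low, e_mid, e_high = [posterior_mean_expectation(n, a, b, p)
                        for p in (0.1, 0.5, 0.9)]
# The estimator is pulled toward the prior mean 0.5: biased upward at p=0.1,
# unbiased only at p=0.5, biased downward at p=0.9.
```

The shrinkage toward the prior mean is exactly the bias that the zero-Bayes-risk argument above rules out being zero everywhere.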
A side property of interest is that $\delta^\pi(x) =
\mathbb{E}^\pi[\theta|x]$ is sufficient in a Bayesian sense, as
$$\mathbb{E}^\pi\{\theta\,|\,\mathbb{E}^\pi[\theta|x]\}=\mathbb{E}^\pi[\theta|x]$$
Conditioning upon $\mathbb{E}^\pi[\theta|x]$ is the same as conditioning on $x$ for estimating $\theta$.
18,931 | Generating Data from Arbitrary Distribution | This is known as inverse transform sampling. The idea is well encapsulated in the following picture from Wikipedia:
Note that the image of the cumulative distribution function (CDF) $F_X$ is the interval $[0,1]$ on the $y$ axis. (Purists will discuss whether the endpoints should be included or not.) Also note that the CDF is of course monotone.
In inverse transform sampling, we sample uniformly from this image, i.e., $U[0,1]$. These are the dots on the $y$ axis. We then go right from these dots to the graph of $F_X$, then down to the $x$ axis. This is where the "inverse" comes in: because we start from the $y$ axis and end up on the $x$ axis.
The result on the $x$ axis is distributed according to $F_X$.
Where $F_X$ is steep (i.e., the density $f_X$ is large), $y$ values that are close together yield $x$ values that are close together. We get a high density of $x$ values.
Where $F_X$ is flat (i.e., $f_X$ is small), $y$ values that are close together yield $x$ values that are farther apart. We get a low density of $x$ values.
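As a minimal sketch (Python for illustration): for the exponential distribution, $F_X(x)=1-e^{-\lambda x}$ inverts to $F_X^{-1}(u)=-\ln(1-u)/\lambda$, so feeding uniform draws through the inverse CDF reproduces the target distribution.

```python
import numpy as np

def sample_exponential(lam, size, rng):
    """Inverse transform sampling: push U(0,1) draws through F^{-1}."""
    u = rng.uniform(size=size)        # the dots on the y axis
    return -np.log(1.0 - u) / lam     # corresponding points on the x axis

rng = np.random.default_rng(0)
x = sample_exponential(lam=2.0, size=100_000, rng=rng)
# The sample mean should be close to the true mean 1/lambda = 0.5.
```

For distributions whose CDF has no closed-form inverse, the same idea works with a numerical root-finder or a tabulated CDF.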
18,932 | Choosing filter size, strides etc. in a CNN? | As an introductory text to all the issues you name, I would recommend the deep learning book. It provides a broad overview of the field. It explains the role each of those parameters play.
In my opinion it is very helpful to read about some of the most popular architectures (ResNet, Inception, AlexNet) and extract the key ideas leading to the design decisions, after reading the aforementioned book.
In the syllabus of the lectures you refer to, it is explained in great detail how the convolution layer adds a big number of parameters (weights, biases) and neurons. This layer, once trained, is able to extract meaningful patterns from the image. For lower layers those filters look like edge extractors. For higher layers, those primitive shapes are combined to describe more complex forms. Those filters involve a high number of parameters, and a big issue in the design of deep networks is how to describe complex forms and still be able to reduce the number of parameters.
Since neighboring pixels are strongly correlated (especially in the lowest layers), it makes sense to reduce the size of the output by subsampling (pooling) the filter response. The further apart two pixels are from each other, the less correlated they are. Therefore, loosely speaking, a big stride in the pooling layer leads to high information loss. A stride of 2 and a kernel size of 2x2 for the pooling layer is a common choice.
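The spatial bookkeeping behind these choices is the standard output-size formula, out = floor((W − F + 2P) / S) + 1, for input width W, filter size F, padding P, and stride S (a sketch in Python; the layer sizes below are just examples):

```python
def conv_output_size(w, f, p, s):
    """Spatial output size of a conv/pool layer: floor((W - F + 2P) / S) + 1."""
    return (w - f + 2 * p) // s + 1

# Example: a 7x7 conv with stride 2 and padding 3 on a 224x224 image -> 112x112,
# then a 2x2 pool with stride 2 -> 56x56 (each stride-2 step halves the map).
after_conv = conv_output_size(224, 7, 3, 2)
after_pool = conv_output_size(after_conv, 2, 0, 2)
```

Working this formula backwards is a quick way to check that a proposed stack of filter/stride/padding choices still leaves a sensible feature-map size at the top of the network.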
A more sophisticated approach is the Inception network (Going deeper with convolutions) where the idea is to increase sparsity but still be able to achieve a higher accuracy, by trading the number of parameters in a convolutional layer vs an inception module for deeper networks.
A nice paper that provides hints on current architectures and the role of some of the design dimensions in a structured, systematic way is SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. It builds on ideas introduced in the models previously mentioned.
18,933 | Choosing filter size, strides etc. in a CNN? | If you prioritize better learning over training time, I would suggest the following for kernel and stride sizes:
Regarding filter size, I think it depends on your image characteristics. For example, if a large number of pixels is necessary for the network to recognize the object, you may use bigger filters; on the other hand, if objects are somewhat small or local features, consider applying smaller filters relative to your input image size.
For the stride size, for me, a small stride would be better at capturing the finer details of the input image.
For me, the benefit of pooling is that it extracts the sharpest features of an image. In general, the sharpest features look like the best lower level representation of an image. | Choosing filter size, strides etc. in a CNN? | If you consider better learning over learning time, I want to suggest these kernel and stride sizes;
18,934 | Motivating sigmoid output units in neural networks starting with unnormalized log probabilities linear in $z=w^Th+b$ and $\phi(z)$ | There are two possible outcomes for $y \in \{0, 1\}$. This is very important, because this property changes the meaning of the multiplication. There are two possible cases:
\begin{align}
\log\tilde P(y=1) &= z \\
\log\tilde P(y=0) &= 0 \\
\end{align}
In addition, it is important to notice that the unnormalized log probability for $y=0$ is constant. This property derives from the main assumption. Applying any deterministic function to a constant value will produce a constant output. This property will simplify the final formula when we do the normalization over all possible probabilities, because we only need to know the unnormalized probability for $y=1$; for $y=0$ it is always constant. And since the output from the network is an unnormalized log probability, we require only one output, because the other one is assumed to be constant.
Next, we apply exponentiation to the unnormalized log probability in order to obtain the unnormalized probability.
\begin{align}
\tilde P(y=1) &= e ^ z \\
\tilde P(y=0) &= e ^ 0 = 1
\end{align}
Next we just normalize the probabilities by dividing each unnormalized probability by the sum of all possible unnormalized probabilities.
\begin{align}
P(y=1) = \frac{e ^ z}{1 + e ^ z} \\
P(y=0) = \frac{1}{1 + e ^ z}
\end{align}
We are interested only in $P(y=1)$, because that's what the probability from the sigmoid function means. The obtained function doesn't look like a sigmoid at first glance, but they are equal, and it's easy to show.
\begin{align}
P(y=1) = \frac{e ^ z}{1 + e ^ z} = \frac{1}{\frac{e ^ z + 1}{e ^ z}} = \frac{1}{1 + \frac{1}{e ^ z}} = \frac{1}{1 + e ^ {-z}}
\end{align}
The last statement can be confusing at first, but it is just a way to show that the final probability function is a sigmoid. The $(2y−1)$ factor converts $0$ to $-1$ and $1$ to $1$ (or we can say that it leaves it unchanged).
$$
P(y) = \sigma((2y - 1)z) = \begin{cases}
\sigma(z) = \frac{1}{1 + e ^ {-z}} = \frac{e ^ z}{1 + e ^ z} & \text{when } y = 1 \\
\sigma(-z) = \frac{1}{1 + e ^ {-(-z)}} = \frac{1}{1 + e ^ z} & \text{when } y = 0 \\
\end{cases}
$$
As we can see, it is just a way to show the relation between $\sigma$ and $P(y)$.
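To double-check the algebra numerically, here is a small Python sketch (my own illustration, not part of the original answer) confirming that normalizing the unnormalized probabilities $e^z$ and $e^0=1$ reproduces $\sigma((2y-1)z)$:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

z = 1.7  # an arbitrary logit value
p1 = math.exp(z) / (1.0 + math.exp(z))  # normalized P(y=1)
p0 = 1.0 / (1.0 + math.exp(z))          # normalized P(y=0)

assert abs(p1 - sigmoid((2 * 1 - 1) * z)) < 1e-12  # y = 1 case
assert abs(p0 - sigmoid((2 * 0 - 1) * z)) < 1e-12  # y = 0 case
assert abs(p0 + p1 - 1.0) < 1e-12                  # probabilities sum to 1
```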
18,935 | Motivating sigmoid output units in neural networks starting with unnormalized log probabilities linear in $z=w^Th+b$ and $\phi(z)$ | I also find this fragment of the book challenging to follow, and the above answer by itdxer deserves quite some time to understand as well for someone who is not properly fluent in probability and mathematical thinking. I managed it, however, by reading the answer backwards, so start with the sigmoid of $z$
\begin{align}
P(y=1) = \frac{e ^ z}{1 + e ^ z} = \frac{1}{1 + e ^ {-z}}
\end{align}
and try to follow it back to
\begin{align}
\log\tilde P(y) &= yz
\end{align}
Then it makes sense why they started the explanation with yz - it's by design, same as the final
\begin{align}
\sigma((2y-1)z)
\end{align}
by construction allows us to get $-1$ for $y=0$ and $1$ for $y=1$, which are the only possible values of $y$ under the Bernoulli distribution.
18,936 | Motivating sigmoid output units in neural networks starting with unnormalized log probabilities linear in $z=w^Th+b$ and $\phi(z)$ | Here's a more formal phrasing that will appeal to those with a measure-theoretic background.
Let $Y$ be a Bernoulli r.v. and let $P_Y$ denote the pushforward measure, i.e. for $y\in \{0,1\}$, $P_Y(y)=P(Y=y)$, and let $\tilde P_Y$ denote its unnormalized counterpart.
We have the following chain of implications:
$$\begin{aligned}
\log \tilde P_Y(y)=yz &\implies \tilde P_Y(y) = \exp(yz)\\
&\implies P_Y(y) = \frac{e^{yz}}{e^{0\cdot z}+e^{1\cdot z}}=\frac{e^{yz}}{1+e^{ z}}\\
&\implies P_Y(y) =y\frac{e^{z}}{1+e^{ z}} + (1-y)\frac{1}{1+e^{ z}}\\
&\implies P_Y(y) =y\sigma(z) + (1-y)\sigma(-z)\\
&\implies P_Y(y) = \sigma((2y-1)z)
\end{aligned}$$
The last equality is a smart way of mapping $\{0,1\}$ to $\{-1,1\}$.
18,937 | Motivating sigmoid output units in neural networks starting with unnormalized log probabilities linear in $z=w^Th+b$ and $\phi(z)$ | I will try to expand on the answer of @itdxer. From the comment section of the answer, it seems that the doubt centres around the justification of the line $log(\tilde{P}(y))=yz$, which becomes $z$ for $y=1$ and $0$ for $y=0$. What might be the justification for taking such a form? I will try to provide some insight into that.
At $y=1$, as per the above formula $-log(\tilde{P}(y))=-z$.
At $y=0$, $-log(\tilde{P}(y))=0$.
We will consider both the cases where $y=1$ and $y=0$.
Case I - Original value of $y$ is 1
If we take negative-log as the cost function and concentrate on $\tilde{P}(y=1)$, gradient descent on the cost function has the tendency to push $z$ towards the right side. This is what we want.
However, if originally $y=0$, we require $z$ to be pushed towards the left side. The negative-log cost, being $0$, fails to deliver that.
Thus the above formula $log(\tilde{P}(y))=yz$ along with negative-log cost has the capability to learn cases with $y=1$ but fails to do the same for $y=0$ cases.
Case II - Original value of $y$ is 0
We could instead have started off with the formula
$-log(\tilde{P}(y))=(1-y)z$
$-log(\tilde{P}(y))=z$ for $y=0$
$-log(\tilde{P}(y))=0$ for $y=1$
Here the negative-log cost of $\tilde{P}(y=0)$ has the capability to push $z$ towards the left side, which is what we want when $y=0$. But, the negative-log cost of $\tilde{P}(y=1)$, being 0, fails to push $z$ towards the right.
Therefore, each formula can deliver on one set of $y$ values but fails on the other. The final formula for $-log(\tilde P(y))$ should be such that it can select the preferred scenario based on the original value of $y$.
I am providing below a plot, which will clarify the point.
Thus we see that for cases with $y=1$, $\sigma(z)$ is the choice for the output unit; for $y=0$ cases, $\sigma(-z)$ is the preferred choice for output unit.
$\sigma((2y-1)z)$ happens to be the unified formula that can make this interchange possible based on the value of $y$. Besides, we can see that if we start with either of the formulas
$-log(\tilde{P}(y))=-yz$ or
$-log(\tilde{P}(y))=(1-y)z$
we can end up with $P(y)=\sigma((2y-1)z)$, progressing as done in the book or in the answer of @itdxer. Although not proved here, I think that this happy ending happens due to the identity $\sigma(-z)=1-\sigma(z)$.
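That identity is easy to verify numerically; the Python check below is my own sketch, not part of the original answer:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# sigma(-z) = 1 - sigma(z) holds for any z:
for z in (-3.0, -0.5, 0.0, 1.2, 4.0):
    assert abs(sigmoid(-z) - (1.0 - sigmoid(z))) < 1e-12

# which is exactly what lets sigma((2y-1)z) cover both cases:
z = 0.8
assert abs(sigmoid((2 * 1 - 1) * z) - sigmoid(z)) < 1e-12        # y = 1
assert abs(sigmoid((2 * 0 - 1) * z) - (1 - sigmoid(z))) < 1e-12  # y = 0
```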
Finally, I would like to mention that, personally, I feel motivating the equation $P(y)=\sigma((2y-1)z)$ would have been more appealing by first showing the plots I gave above, then explaining that each of $\sigma(z)$ or $\sigma(-z)$ would be suitable for only one type of case. Hence, we require some transformation capable of making this switch based on the value of $y$.
18,938 | Why is it wrong to stop an A/B test before optimal sample size is reached? | A/B tests that simply test repeatedly on the same data with a fixed type-1 error ($\alpha$) level are fundamentally flawed. There are at least two reasons why this is so. First, the repeated tests are correlated but are treated as if they were independent. Second, the fixed $\alpha$ does not account for the multiple tests being conducted, leading to type-1 error inflation.
To see the first, assume that upon each new observation you conduct a new test. Clearly any two subsequent p-values will be correlated, because $n-1$ cases have not changed between the two tests. Consequently we see a trend in @Bernhard's plot demonstrating this correlation of the p-values.
To see the second, we note that even when tests are independent, the probability of having a p-value below $\alpha$ increases with the number of tests $t$: $$P(A) = 1-(1-\alpha)^t,$$ where $A$ is the event of a falsely rejected null hypothesis. So the probability of having at least one positive test result goes to $1$ as you repeatedly A/B test. If you then simply stop after the first positive result, you will have only demonstrated the correctness of this formula. Put differently: even if the null hypothesis is true, you will ultimately reject it. The A/B test is thus the ultimate way of finding effects where there are none.
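Plugging a few numbers into this formula makes the inflation concrete (a Python sketch of my own, assuming independent tests at $\alpha = 0.05$):

```python
alpha = 0.05
for t in (1, 10, 50, 100):
    # probability of at least one false rejection among t independent tests
    p_false_reject = 1 - (1 - alpha) ** t
    print(t, round(p_false_reject, 3))
# t = 1   -> 0.05
# t = 10  -> 0.401
# t = 50  -> 0.923
# t = 100 -> 0.994
```

So after 100 looks at the data, a false rejection is almost guaranteed even though each individual test is run at the 5% level.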
Since in this situation both correlatedness and multiple testing hold at the same time, the p-value of test $t+1$ depends on the p-value of test $t$. So if you finally reach a $p< \alpha$, you are likely to stay in this region for a while. You can also see this in @Bernhard's plot in the regions of 2500 to 3500 and 4000 to 5000.
Multiple testing per se is legitimate, but testing against a fixed $\alpha$ is not. There are many procedures that deal with both the multiple testing procedure and correlated tests. One family of test corrections is called family-wise error rate control. What they do is to assure $$P(A) \le \alpha.$$
The arguably most famous adjustment (due to its simplicity) is Bonferroni. Here we set $$\alpha_{adj} = \alpha/t,$$ for which it can easily be shown that $P(A) \approx \alpha$ if the number of independent tests is large. If tests are correlated it is likely to be conservative, $P(A) < \alpha$. So the easiest adjustment you could make is dividing your alpha level of $0.05$ by the number of tests you have already made.
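A quick numerical check (my own Python sketch) that dividing $\alpha$ by the number of tests keeps the family-wise error rate below $\alpha$ for independent tests:

```python
alpha, t = 0.05, 100
alpha_adj = alpha / t                 # Bonferroni-adjusted per-test level
fwer = 1 - (1 - alpha_adj) ** t       # FWER for t independent tests
print(round(fwer, 4))                 # just below alpha, approximately 0.0488
assert fwer <= alpha
```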
If we apply Bonferroni to @Bernhard's simulation, and zoom in to the $(0,0.1)$ interval on the y-axis, we find the plot below. For clarity I assumed we do not test after each coin flip (trial) but only every hundredth. The black dashed line is the standard $\alpha = 0.05$ cut off and the red dashed line is the Bonferroni adjustment.
As we can see, the adjustment is very effective and demonstrates how radically we have to change the p-value to control the family-wise error rate. Specifically, we now do not find any significant test anymore, as it should be, because @Bernhard's null hypothesis is true.
Having done this we note that Bonferroni is very conservative in this situation due to the correlated tests. There are superior tests that will be more useful in this situation in the sense of having $P(A) \approx \alpha$, such as the permutation test. Also there is much more to say about testing than simply referring to Bonferroni (e.g. look up false discovery rate and related Bayesian techniques). Nevertheless this answers your questions with a minimum amount of math.
Here is the code:
set.seed(1)
n <- 10000
toss <- sample(1:2, n, TRUE)            # fair coin: 1 = heads, 2 = tails
p.values <- numeric(n)
# test for fairness after every toss, starting from the 5th
for (i in 5:n){
  p.values[i] <- binom.test(table(toss[1:i]))$p.value
}
p.values <- p.values[-(1:6)]            # drop the first unfilled entries
# plot every 100th p-value, zoomed in to the (0, 0.1) interval
plot(p.values[seq(1, length(p.values), 100)], type="l", ylim=c(0,0.1), ylab='p-values')
abline(h=0.05, lty="dashed")            # standard alpha = 0.05 cutoff
abline(v=0)
abline(h=0)
curve(0.05/x, add=TRUE, col="red", lty="dashed")  # Bonferroni-adjusted cutoff
18,939 | Why is it wrong to stop an A/B test before optimal sample size is reached? | If the null hypothesis is true, then people often expect the p-value to be very high. This is not true. If the null hypothesis is true, then p is a uniformly distributed random variable. This means that from time to time it will be below 0.05 just randomly. If you look at a lot of different subsamples, sometimes the p-value will be below 0.05.
To make that easier to grasp, here is a small simulation in R:
This will toss a coin 10,000 times, and we know it is a fair coin:
set.seed(1)
n=10000
toss <- sample(1:2, n, TRUE)
Starting from the 5th toss, this will perform a binomial test for fairness after every toss and save the p values:
p.values <- numeric(n)
for (i in 5:n){
p.values[i] <- binom.test(table(toss[1:i]))$p.value
}
And this will plot the p-values one after the other:
plot(p.values, type="l")
abline(h=0.05)
As you can see, the p-value dips below 0.05 a couple of times, only to recover and finally end up far above p=0.05. If we stopped the trial any time p was "significant", we would come to the wrong conclusion. One might argue: "We have a sample of roughly over 4000 i.i.d. trials and p was below .05. Surely we can stop sampling any further." The more frequently you check the p-value, the more likely you are to check at a random dip. In this case we generated the data under the $H_0$ and know that $H_0$ is true.
(Just to be perfectly open, I've tried more than one seed for the number generator before it was as clear as this example, but that is fair for educational purposes. If you have R installed and running, you can easily play with the numbers.)
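The same point can be made without R; the Python sketch below (my own, using a normal-approximation test rather than binom.test) repeats a fair-coin experiment many times and checks that a single test at level 0.05 falsely rejects in roughly 5% of the runs, as the uniform distribution of p under the null implies:

```python
import math
import random

random.seed(0)

def two_sided_p(heads, n):
    # Two-sided p-value for the fair-coin null, normal approximation
    z = (heads - n / 2) / math.sqrt(n / 4)
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))

n_flips, n_runs = 500, 2000
false_rejections = 0
for _ in range(n_runs):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if two_sided_p(heads, n_flips) < 0.05:
        false_rejections += 1

print(false_rejections / n_runs)  # close to 0.05
```

A single look at the data rejects about 5% of the time; the danger described above comes only from looking repeatedly and stopping at the first dip.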
18,940 | Why are neural networks easily fooled? | The sort of models you are referring to are called 'generative' models, as opposed to discriminative ones, and they do not really scale up to high-dimensional data.
Part of the success of NNs in language tasks is the move from a generative model (HMM) to a 'more' discriminative model (e.g. MEMM uses logistic regression, which allows contextual data to be used effectively: https://en.wikipedia.org/wiki/Hidden_Markov_model#Extensions)
I would argue that the reason they are fooled is a more general problem. It is the current dominance of 'shallow' ML-driven AI over more sophisticated methods.
[in many of the papers it is mentioned that other ML models are also easily fooled - http://www.kdnuggets.com/2015/07/deep-learning-adversarial-examples-misconceptions.html - Ian Goodfellow]
the most effective 'language model' for many tasks is 'bag of words'. No one would claim that this represents a meaningful model of human language.
it's not hard to imagine that these sorts of models are also easily fooled.
similarly computer vision tasks such as object recognition were revolutionised by 'visual bag of words' which blew the more computationally intensive methods away (which couldn't be applied to massive data sets).
CNNs are, I would argue, a better 'visual bag of words' - as you show in your images, the mistakes are made at the pixel level / low-level features; despite all the hyperbole, there is no high-level representation in the hidden layers (everyone makes mistakes; the point is that a person would make 'mistakes' due to higher-level features and would e.g. recognise a cartoon of a cat, which I don't believe an NN would).
An example of a more sophisticated computer vision model (which performs worse than NNs) is e.g. the 'deformable parts' model.
Part of the successes of NN in language tasks | Why are neural networks easily fooled?
The sort of models you are referring to are called 'generative' models as opposed to discriminatory, and do not really scale up to high dimensional data.
Part of the successes of NN in language tasks is the move from a generative model (HMM) do a 'more' discriminatory model (eg MEMM uses logistic regression which allows contextual data to be used effectively https://en.wikipedia.org/wiki/Hidden_Markov_model#Extensions)
I would argue that the reason they are fooled is a more general problem. It is the current dominance of 'shallow' ML-driven AI over more sophisticated methods.
[in many of the papers it is mentioned that other ML models are also easily fooled - http://www.kdnuggets.com/2015/07/deep-learning-adversarial-examples-misconceptions.html - Ian Goodfellow]
the most effective 'language model' for many tasks is 'bag of words'. No one would claim that this represents a meaningful model of human language.
its not hard to imagine that these sort of models are also easily fooled.
similarly computer vision tasks such as object recognition were revolutionised by 'visual bag of words' which blew the more computationally intensive methods away (which couldn't be applied to massive data sets).
CNN are I would argue a better 'visual bag of words' - as you show in your images, the mistakes are made at the pixel level/low level features; despite all the hyperbole there is no high level representation in the hidden layers- (everyone makes mistakes, the point is that a person would make 'mistakes' due to higher level features and would eg recognise a cartoon of a cat, which I don't believe an NN would).
An example of a more sophisticated model of computer vision (which perform worse than NN) is eg 'deformable parts' model. | Why are neural networks easily fooled?
The sort of models you are referring to are called 'generative' models as opposed to discriminatory, and do not really scale up to high dimensional data.
Part of the successes of NN in language tasks |
18,941 | Why are neural networks easily fooled? | As far as I know, most neural networks don't use an a priori probability distribution over the input images. However, you could interpret the selection of the training set as such a probability distribution. In that view, these artificially generated images are unlikely to be picked as images in the test set. One way to measure the 'joint probability' would be to randomly generate images and then label them. The problem is that the vast, VAST majority would have no label, so getting a reasonable number of labelled examples would take far too much time.
18,942 | Are there circumstances in which BIC is useful and AIC is not? | According to Wikipedia, the AIC can be written as follows:
$$
2k - 2 \ln(\mathcal L)
$$
As the BIC penalizes complex models more heavily, there are situations in which the AIC will hint that you should select a model that is too complex, while the BIC is still useful. The BIC can be written as follows:
$$
-2 \ln(\mathcal L) + k \ln(n)
$$
So the difference is that the BIC's penalty for the number of parameters grows with the sample size $n$. If you do not want the penalty to depend on the sample size, the AIC may be the better choice.
A quick explanation by Rob Hyndman can be found here: Is there any reason to prefer the AIC or BIC over the other? He writes:
AIC is best for prediction as it is asymptotically equivalent to cross-validation.
BIC is best for explanation as it allows consistent estimation of the underlying data generating process.
Edit:
One example can be found in time series analysis. In VAR models, the AIC (as well as its corrected version, the AICc) often selects too many lags. Therefore one should primarily look at the BIC when choosing the number of lags of a VAR model. For further information you can read chapter 9.2 of Forecasting: Principles and Practice by Rob J. Hyndman and George Athanasopoulos.
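To make the two criteria concrete, here is a small sketch (my illustration, not part of the original answer) that computes both for a Gaussian linear model fitted by least squares; `k` counts the intercept, the slope and the error variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a linear model and fit it by ordinary least squares.
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=1.5, size=n)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
sigma2 = np.mean(resid ** 2)                 # ML estimate of the error variance

# Gaussian log-likelihood evaluated at the ML estimates.
loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

k = 3                                        # intercept, slope, sigma^2
aic = 2 * k - 2 * loglik
bic = -2 * loglik + k * np.log(n)
print(aic, bic)   # BIC > AIC here, since k*log(200) > 2*k
```

Note how the two formulas differ only in the penalty term: $2k$ versus $k\ln(n)$, which is larger whenever $n > e^2 \approx 7.4$.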
18,943 | Are there circumstances in which BIC is useful and AIC is not? | It is not meaningful to ask the question whether AIC is better than BIC. Even though these two different model selection criteria look superficially similar they were each designed to solve fundamentally different problems. So you should choose the model selection criterion which is appropriate for the problem you have.
AIC is a formula that estimates the expected value of twice the negative log likelihood of test data under a correctly specified probability model whose parameters were obtained by fitting the model to training data. That is, AIC estimates the expected cross-validation error under a negative log likelihood loss. That is,
$AIC \approx E\{-2 \log \prod_{i=1}^n p(x_i | \hat{\theta}_n)\}$
Where $x_1, \ldots, x_n$ are test data, $\hat{\theta}_n$ is estimated using training data, and $E\{ \}$ denotes the expectation operator with respect to the iid data generating process which generated both the training and test data.
BIC, on the other hand, is not designed to estimate cross-validation error. BIC estimates twice the negative logarithm of the likelihood of the observed data given the model. This likelihood is also called the marginal likelihood; it is computed by integrating the likelihood function weighted by a parameter prior $p(\theta)$ over the parameter space.
That is,
$ BIC \approx -2 \log \int [\prod_{i=1}^n p( x_i | \theta) ] p(\theta)d\theta$.
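The claim that AIC estimates the expected negative log likelihood of test data can be checked by simulation. This sketch (my addition, not part of the answer) uses the simplest case, $N(\theta, 1)$ data with only the mean unknown, so $k = 1$; averaged over many training/test pairs, the two quantities agree closely:

```python
import numpy as np

rng = np.random.default_rng(1)

def neg2_loglik(x, mu):
    """-2 log-likelihood of N(mu, 1) data."""
    return len(x) * np.log(2 * np.pi) + np.sum((x - mu) ** 2)

n, reps, theta, k = 100, 2000, 0.3, 1  # k = 1 free parameter (the mean)

aic_vals, test_dev = [], []
for _ in range(reps):
    train = rng.normal(theta, 1.0, n)
    test = rng.normal(theta, 1.0, n)
    mu_hat = train.mean()                       # MLE on the training data
    aic_vals.append(2 * k + neg2_loglik(train, mu_hat))
    test_dev.append(neg2_loglik(test, mu_hat))  # out-of-sample deviance

print(np.mean(aic_vals), np.mean(test_dev))     # nearly identical
```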
18,944 | Are there circumstances in which BIC is useful and AIC is not? | Q: Are there circumstances in which BIC is useful and AIC is not?
A: Yes. BIC and AIC have fundamentally different goals. BIC estimates the probability that a model minimizes the loss function (specifically, the Kullback-Leibler divergence); a BIC difference of .1 between A and B implies that model A is roughly 10% more likely to be the best model -- assuming you start with close to no information and have a large sample size. AIC, by contrast, measures how good a model is at making predictions -- a difference of .1 means (very roughly) that model A will be about 10% better at making new predictions than model B.
This means that BIC can be better if you want to know the probability that a model is true. AIC can't give you that; if you try using AIC in this way, you get inconsistent estimates (i.e. AIC will not always select the true model).
On the other hand, AIC will be better at minimizing the expected loss.
AIC and BIC have two fundamentally different goals (BIC tries to maximize the chances of picking the best model, while AIC tries to maximize the expected quality of the model you select).
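BIC differences are often converted into approximate posterior model probabilities via so-called BIC (Schwarz) weights, $w_i \propto \exp(-\Delta_i/2)$. The snippet below is my illustration of that large-sample, equal-prior approximation, not something from the answer itself:

```python
import math

def bic_weights(bics):
    """Approximate posterior model probabilities from BIC values
    (equal prior probabilities, large-sample approximation)."""
    best = min(bics)
    w = [math.exp(-(b - best) / 2) for b in bics]
    total = sum(w)
    return [wi / total for wi in w]

probs = bic_weights([100.0, 102.0, 110.0])  # hypothetical BIC values
print(probs)  # the lowest-BIC model gets the highest probability
```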
18,945 | Why is the arithmetic mean smaller than the distribution mean in a log-normal distribution? | The two estimators you are comparing are the method of moments estimator (1.) and the MLE (2.), see here. Both are consistent (so for large $N$, they are in a certain sense likely to be close to the true value $\exp[\mu+1/2\sigma^2]$).
For the MM estimator, this is a direct consequence of the Law of large numbers, which says that
$\bar X\to_pE(X_i)$. For the MLE, the continuous mapping theorem implies that
$$
\exp[\hat\mu+1/2\hat\sigma^2]\to_p\exp[\mu+1/2\sigma^2],$$
as $\hat\mu\to_p\mu$ and $\hat\sigma^2\to_p\sigma^2$.
The MLE is, however, not unbiased.
In fact, Jensen's inequality tells us that, for small $N$, the MLE should be expected to be biased upwards (see also the simulation below): $\hat\mu$ and $\hat\sigma^2$ are well known to be unbiased estimators of the parameters of a normal distribution, $\mu$ and $\sigma^2$ (I use hats to indicate estimators; in the case of $\hat\sigma^2$ this holds almost exactly, with a negligible bias for $N=100$, as the unbiased estimator divides by $N-1$).
Hence, $E(\hat\mu+1/2\hat\sigma^2)\approx\mu+1/2\sigma^2$. Since the exponential is a convex function, this implies that
$$E[\exp(\hat\mu+1/2\hat\sigma^2)]>\exp[E(\hat\mu+1/2\hat\sigma^2)]\approx \exp[\mu+1/2\sigma^2]$$
Try increasing $N=100$ to a larger number, which should center both distributions around the true value.
See this Monte Carlo illustration for $N=1000$ in R:
Created with:
N <- 1000
reps <- 10000
mu <- 3
sigma <- 1.5
mm <- mle <- rep(NA,reps)
for (i in 1:reps){
  X <- rlnorm(N, meanlog = mu, sdlog = sigma)
  mm[i] <- mean(X)
  normmean <- mean(log(X))
  normvar <- (N-1)/N*var(log(X))
  mle[i] <- exp(normmean+normvar/2)
}
plot(density(mm),col="green",lwd=2)
truemean <- exp(mu+1/2*sigma^2)
abline(v=truemean,lty=2)
lines(density(mle),col="red",lwd=2,lty=2)
> truemean
[1] 61.86781
> mean(mm)
[1] 61.97504
> mean(mle)
[1] 61.98256
We note that while both distributions are now (more or less) centered around the true value $\exp(\mu+\sigma^2/2)$, the MLE, as is often the case, is more efficient.
One can indeed show explicitly that this must be so by comparing the asymptotic variances. This very nice CV answer tells us that the asymptotic variance of the MLE is
$$V_t = (\sigma^2 + \sigma^4/2)\cdot \exp\left\{2(\mu + \frac 12\sigma^2)\right\},$$
while that of the MM estimator, by a direct application of the CLT applied to samples averages is that of the variance of the log-normal distribution,
$$
\exp\left\{2(\mu + \frac 12\sigma^2)\right\}(\exp\{\sigma^2\}-1)
$$
The second is larger than the first because
$$
\exp\{\sigma^2\}>1+\sigma^2 + \sigma^4/2,
$$
as $\exp(x)=\sum_{i=0}^\infty x^i/i!$ and $\sigma^2>0$.
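As a quick numerical sanity check of this inequality (my addition), one can plug the simulation's values $\mu = 3$, $\sigma = 1.5$ into the two asymptotic variance formulas:

```python
import math

def avar_mle(mu, s2):
    # (sigma^2 + sigma^4/2) * exp{2(mu + sigma^2/2)}
    return (s2 + s2 ** 2 / 2) * math.exp(2 * (mu + s2 / 2))

def avar_mm(mu, s2):
    # exp{2(mu + sigma^2/2)} * (exp{sigma^2} - 1)
    return math.exp(2 * (mu + s2 / 2)) * (math.exp(s2) - 1)

mu, s2 = 3.0, 1.5 ** 2
print(avar_mm(mu, s2) / avar_mle(mu, s2))  # ratio > 1: MM is less efficient
```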
To see that the MLE is indeed biased for small $N$, I repeat the simulation for N <- c(50,100,200,500,1000,2000,3000,5000) and 50,000 replications and obtain a simulated bias as follows:
We see that the MLE is indeed seriously biased for small $N$. I am a little surprised about the somewhat erratic behavior of the bias of the MM estimator as a function of $N$. The simulated bias for small $N=50$ for MM is likely caused by outliers that affect the non-logged MM estimator more heavily than the MLE. In one simulation run, the largest estimates turned out to be
> tail(sort(mm))
[1] 336.7619 356.6176 369.3869 385.8879 413.1249 784.6867
> tail(sort(mle))
[1] 187.7215 205.1379 216.0167 222.8078 229.6142 259.8727
18,946 | Principal Component Analysis Vs Feature Selection | Feature selection
we consider a subset of attributes which has the greatest impact
towards our targeted classification.
This understanding is perfectly correct.
PCA
we generate a smaller amount of artificial set of attributes that will
account for our target.
This is partially correct. We are not accounting for the target in PCA. In layman's terms, we make some assumptions about the data and its distribution, and represent the high-dimensional data in a much smaller dimension (say 3) that holds most of the information content of the original data. Thus, PCA transforms your attributes into an artificial set that retains most of the information.
Comparison
Which one is better? Does it depend on the particular study someone
is doing?
Yes, it depends on the particular study. If the assumptions made in the PCA transformation hold, then by doing PCA you will have the same information in a small number of attributes. If the assumptions fail badly, then doing PCA may ruin your classification.
Combination
Does it make any sense?
It perfectly makes sense.
By feature selection, you reduce the number of dimensions by throwing out irrelevant information.
By PCA, you reduce the number of dimensions by transforming the attributes into an artificial set while retaining the same information.
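A minimal numeric sketch of the two routes (my illustration; the data are synthetic): feature selection keeps a subset of the original columns, while PCA builds artificial ones from all columns:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 samples, 5 features; only the first two carry most of the variance.
X = rng.normal(size=(200, 5))
X[:, 2:] *= 0.05                       # near-constant, low-information columns

# Feature selection: keep a subset of the ORIGINAL attributes.
selected = X[:, :2]

# PCA: project onto the top-2 principal directions (ARTIFICIAL attributes).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_scores = Xc @ Vt[:2].T

explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(round(explained, 3))             # close to 1: little information lost
```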
18,947 | How does the back-propagation work in a siamese neural network? | Both networks share the same architecture and are constrained to have the same weights, as the publication describes in section 4 [1].
Their goal is to learn features that maximize the cosine similarity between their output vectors when signatures are genuine, and minimize it when they are forged (this is the backprop goal as well, but the actual loss function is not presented).
The cosine similarity $\cos(A,B) = {A \cdot B \over \|A\| \|B\|}$ of two vectors $A, B$, is a measure of similarity that gives you the cosine of the angle between them (therefore, its output is not binary). If your concern is how you can backprop to a function that outputs either true or false, think of the case of binary classification.
You shouldn't change the output layer; it consists of trained neurons with linear values and it's a higher-level abstraction of your input. The whole network should be trained together. Both outputs $O_1$ and $O_2$ are passed through a $\cos(O_1,O_2)$ function that outputs their cosine similarity (close to $1$ if they are similar, and low if they are not). Given that, and that we have two sets of input tuples $X_{Forged}, X_{Genuine}$, an example of the simplest possible loss function you could have to train against could be:
$$\mathcal{L}=\sum_{(x_A,x_B) \in X_{Forged}} cos(x_A,x_B) - \sum_{(x_C,x_D) \in X_{Genuine}} cos(x_C,x_D)$$
After you have trained your network, you just input the two signatures you get the two outputs pass them to the $cos(O_1,O_2)$ function, and check their similarity.
Finally, to keep the network weights identical there are several ways to do that (and they are used in Recurrent Neural Networks too); a common approach is to average the gradients of the two networks before performing the Gradient Descent update step.
[1] http://papers.nips.cc/paper/769-signature-verification-using-a-siamese-time-delay-neural-network.pdf
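A toy numeric sketch of the loss $\mathcal{L}$ above (my illustration; the vectors stand in for the twin networks' outputs $O_1, O_2$):

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity of two output vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up output pairs: genuine pairs should be similar, forged ones not.
genuine_pairs = [(np.array([1.0, 2.0]), np.array([1.1, 1.9]))]
forged_pairs = [(np.array([1.0, 2.0]), np.array([-2.0, 1.0]))]

# The simple loss from the text: forged similarity minus genuine similarity.
loss = (sum(cos_sim(a, b) for a, b in forged_pairs)
        - sum(cos_sim(c, d) for c, d in genuine_pairs))
print(loss)  # negative: genuine pairs already more similar than forged ones
```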
18,948 | Caret varImp for randomForest model | As I understood, you have only 3 variables. By default, the varImp function returns scaled results in the range 0-100. Var3 has the lowest importance value and its scaled importance is zero. Try calling varImp(rf, scale = FALSE).
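The default 0-100 scaling is just a min-max rescaling of the raw scores, which is why the least important variable always shows exactly 0. A small illustration (the raw numbers are hypothetical, and this mimics rather than reproduces caret's internal code):

```python
def scale_importance(raw):
    """Min-max scale raw importance scores to the 0-100 range."""
    lo, hi = min(raw), max(raw)
    return [100 * (v - lo) / (hi - lo) for v in raw]

raw = [18.4, 7.1, 2.3]        # hypothetical raw scores for Var1..Var3
print(scale_importance(raw))  # the smallest maps to 0, the largest to 100
```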
18,949 | Caret varImp for randomForest model | Adding to @DrDom's answer, in order to provide further intuition:
The importance score that varImp(rf, scale = FALSE) gives is simply calculated by the following:
rf$finalModel$importance[,1]/rf$finalModel$importanceSD
This is the feature's mean %IncMSE divided by its standard deviation.
18,950 | What is the difference between a distribution and a process (Poisson)? | Poisson distribution = a specific discrete probability distribution, i.e. a probability distribution characterized by a probability mass function. Specifically, in the case of Poisson, it is defined as $P(k \text{ events in interval}) = \frac{\lambda^k e^{-\lambda}}{k!}$, $\lambda \in \mathbb{R^+}, k \in \mathbb{N}$.
Poisson process = a stochastic process, i.e. a collection of random variables representing the evolution of some system of random values over time. In other words, it is a family of real random variables $(X_t)_{t\in T}$ defined on a probability space $(\Omega,\Sigma,P)\ ,$ where the set T is interpreted as ''time''. Specifically, a Poisson process can be defined in different ways, not necessarily using the Poisson distribution:
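The probability mass function above can be evaluated directly; a short check (my addition) that it defines a valid distribution with mean $\lambda$:

```python
import math

def poisson_pmf(k, lam):
    """P(k events in an interval) for a Poisson(lambda) random variable."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 3.0
probs = [poisson_pmf(k, lam) for k in range(60)]  # tail beyond 59 is negligible
print(sum(probs))                                 # ~ 1
print(sum(k * p for k, p in enumerate(probs)))    # ~ lambda
```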
18,951 | What is the difference between a distribution and a process (Poisson)? | A random process is a sequence of random variables. That means, when talking about a process, e.g. Poisson process, an element of occurrences as a sequence in time is involved, while when we talk about random variables and their distribution, e.g. Poisson distribution, there is no such element involved, and we only have a random variable X with its associated distribution.
Example:
Random variable X: the number of phone calls to a receptionist per hour follows the Poisson distribution (and with the distribution known we can have the probability of receiving a certain number of phone calls in a given time interval);
Random process {X1, X2, ....}, where Xi: the time when the ith phone call was received, is a Poisson process.
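The distinction can be illustrated with a small simulation (an illustrative Python sketch; the rate and trial counts are arbitrary choices, not part of the original answer). A homogeneous Poisson process is built from exponential inter-arrival gaps, and the number of events landing in a unit interval is then checked against the mean-equals-variance property of the Poisson distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 4.0          # rate of the process (assumed value)
n_trials = 20000

# Poisson process: cumulative sums of Exponential(1/lam) inter-arrival times.
# The number of arrival times falling in [0, 1) is then Poisson(lam) distributed.
counts = np.empty(n_trials, dtype=int)
for i in range(n_trials):
    gaps = rng.exponential(1.0 / lam, size=50)   # 50 gaps is plenty for lam=4 on [0,1)
    arrivals = np.cumsum(gaps)                   # event times of the process
    counts[i] = np.sum(arrivals < 1.0)

# For a Poisson distribution, mean and variance both equal lam.
print(counts.mean(), counts.var())
```

With rate $\lambda = 4$, both the sample mean and the sample variance of the interval counts come out close to 4, as the Poisson distribution requires.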
18,952 | How does one show that there is no unbiased estimator of $\lambda^{-1}$ for a Poisson distribution with mean $\lambda$? | Assume that $g(X_0, \ldots, X_n)$ is an unbiased estimator of $1/\lambda$, that is,
$$\sum_{(x_0, \ldots, x_n) \in \mathbb{N}_0^{n+1}} g(x_0, \ldots, x_n) \frac{\lambda^{\sum_{i=0}^n x_i}}{\prod_{i=0}^n x_i!} e^{-(n + 1) \lambda} = \frac{1}{\lambda}, \quad \forall \lambda > 0.$$
Then multiplying by $\lambda e^{(n + 1) \lambda}$ and invoking the MacLaurin series of $e^{(n + 1) \lambda}$ we can write the equality as
$$ \sum_{(x_0, \ldots, x_n) \in \mathbb{N}_0^{n+1}} \frac{g(x_0, \ldots, x_n)}{\prod_{i=0}^n x_i!} \lambda^{1 + \sum_{i=0}^n x_i} = 1 + (n + 1)\lambda + \frac{(n + 1)^2 \lambda^2}{2} + \ldots , \quad \forall \lambda > 0, $$
where we have an equality of two power series of which one has a constant term (the right-hand side) and the other doesn't: a contradiction. Thus no unbiased estimator exists.
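The nonexistence result can be complemented by a quick numerical illustration (a sketch with assumed values of $\lambda$ and $n$; the plug-in estimator $1/\bar{x}$ is my choice for demonstration): the natural estimator is biased upward, by Jensen's inequality, so no amount of averaging makes it hit $1/\lambda$ exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 2.0, 5, 200_000

samples = rng.poisson(lam, size=(reps, n + 1))   # X_0, ..., X_n as in the proof
xbar = samples.mean(axis=1)
ok = xbar > 0                 # 1/xbar is undefined when every count is zero
est = 1.0 / xbar[ok]

# Jensen's inequality: E[1/xbar] > 1/E[xbar] = 1/lambda = 0.5
print(est.mean())
```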
18,953 | Distance between two Gaussian mixtures to evaluate cluster solutions | Suppose we have two Gaussian mixtures in $\mathbb R^d$:$\DeclareMathOperator{\N}{\mathcal N} \newcommand{\ud}{\mathrm{d}} \DeclareMathOperator{\E}{\mathbb E} \DeclareMathOperator{\MMD}{\mathrm{MMD}}$
$$
P = \sum_{i=1}^{n} \alpha_i P_i = \sum_{i=1}^n \alpha_i \N(\mu_i, \Sigma_i)
\qquad
Q = \sum_{j=1}^m \beta_j Q_j = \sum_{j=1}^m \beta_j \N(m_j, S_j)
.$$
Call their densities $p(\cdot)$ and $q(\cdot)$, respectively,
and denote the densities of their components $P_i$, $Q_j$ by $p_i(x) = \N(x; \mu_i, \Sigma_i)$, $q_j(x) = \N(x; m_j, S_j)$.
The following distances are available in closed form:
$L_2$ distance, as suggested in a comment by user39665. This is:
\begin{align}
L_2(P, Q)^2
&= \int (p(x) - q(x))^2 \,\ud x
\\&= \int \left( \sum_{i} \alpha_i p_i(x) - \sum_j \beta_j q_j(x) \right)^2 \ud x
\\&= \sum_{i,i'} \alpha_i \alpha_{i'} \int p_i(x) p_{i'}(x) \ud x
+ \sum_{j,j'} \beta_j \beta_{j'} \int q_j(x) q_{j'}(x) \ud x
\\&\qquad - 2 \sum_{i,j} \alpha_i \beta_j \int p_i(x) q_j(x) \ud x
.\end{align}
Note that, as seen for example in section 8.1.8 of the matrix cookbook:
\begin{align}
\int \N(x; \mu, \Sigma) \N(x; \mu', \Sigma') \,\ud x
&= \N(\mu; \mu', \Sigma + \Sigma')
\end{align}
so this can be evaluated easily in $O(m n)$ time.
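As a sketch of that $O(mn)$ evaluation (illustrative Python, restricted to one dimension to keep the Gaussian product identity short; the mixture parameters below are made up):

```python
import numpy as np

def norm_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def l2_mixture_distance(P, Q):
    """Closed-form L2 distance between two 1-D Gaussian mixtures.
    P and Q are (weights, means, variances) triples; uses
    integral N(x; mu, v) N(x; mu', v') dx = N(mu; mu', v + v')."""
    def cross(A, B):
        (wa, ma, va), (wb, mb, vb) = A, B
        return sum(a * b * norm_pdf(m1, m2, v1 + v2)
                   for a, m1, v1 in zip(wa, ma, va)
                   for b, m2, v2 in zip(wb, mb, vb))
    val = cross(P, P) + cross(Q, Q) - 2.0 * cross(P, Q)
    return np.sqrt(max(val, 0.0))   # guard tiny negative rounding error

P = ([0.3, 0.7], [0.0, 2.0], [1.0, 0.5])
Q = ([0.5, 0.5], [1.0, 3.0], [1.0, 1.0])
d_same = l2_mixture_distance(P, P)
d_diff = l2_mixture_distance(P, Q)
print(d_same, d_diff)   # d_same is 0 up to rounding
```

Comparing a mixture against itself and getting (numerically) zero is a cheap sanity check on the three cross terms.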
The maximum mean discrepancy (MMD) with a Gaussian RBF kernel. This is a cool distance, not yet super-well-known in the statistics community, which takes a bit of math to define.
Letting $$k(x, y) := \exp\left( - \frac{1}{2 \sigma^2} \lVert x - y \rVert^2 \right),$$
define the Hilbert space $\mathcal{H}$ as the reproducing kernel Hilbert space corresponding to $k$: $k(x, y) = \langle \varphi(x), \varphi(y) \rangle_{\mathcal H}$.
Define the mean map kernel as
$$
K(P, Q)
= \E_{X \sim P, Y \sim Q} k(X, Y)
= \langle \E_{X \sim P} \varphi(X), \E_{Y \sim Q} \varphi(Y) \rangle
.$$
The MMD is then
\begin{align}
\MMD(P, Q)
&= \lVert \E_{X \sim P}[\varphi(X)] - \E_{Y \sim Q}[\varphi(Y)] \rVert
\\&= \sqrt{K(P, P) + K(Q, Q) - 2 K(P, Q)}
\\&= \sup_{f : \lVert f \rVert_{\mathcal H} \le 1} \E_{X \sim P} f(X) - \E_{Y \sim Q} f(Y)
.\end{align}
For our mixtures $P$ and $Q$,
note that
$$
K(P, Q) = \sum_{i, j} \alpha_i \beta_j K(P_i, Q_j)
$$
and similarly for $K(P, P)$ and $K(Q, Q)$.
It turns out, using similar tricks as for $L_2$, that $K(\N(\mu, \Sigma), \N(\mu', \Sigma'))$ is
$$
(2 \pi \sigma^2)^{d/2} \N(\mu; \mu', \Sigma + \Sigma' + \sigma^2 I)
.$$
As $\sigma \to 0$, clearly this converges to a multiple of the $L_2$ distance. You'd normally want to use a different $\sigma$, though, one on the scale of the data variation.
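A minimal sketch of the resulting closed form in one dimension (illustrative; the component parameters and the bandwidth $\sigma^2 = 1$ are assumptions, not values from the original answer):

```python
import numpy as np

def gauss_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def mean_map_kernel(P, Q, sigma2):
    """K(P, Q) for 1-D Gaussian mixtures and an RBF kernel with bandwidth sigma2,
    using K(N(mu, v), N(mu', v')) = (2 pi sigma2)^{1/2} N(mu; mu', v + v' + sigma2)."""
    (wa, ma, va), (wb, mb, vb) = P, Q
    return sum(a * b * np.sqrt(2.0 * np.pi * sigma2)
               * gauss_pdf(m1, m2, v1 + v2 + sigma2)
               for a, m1, v1 in zip(wa, ma, va)
               for b, m2, v2 in zip(wb, mb, vb))

def mmd(P, Q, sigma2=1.0):
    val = mean_map_kernel(P, P, sigma2) + mean_map_kernel(Q, Q, sigma2) \
          - 2.0 * mean_map_kernel(P, Q, sigma2)
    return np.sqrt(max(val, 0.0))

P = ([0.5, 0.5], [0.0, 3.0], [1.0, 1.0])
Q = ([1.0], [1.5], [2.0])
print(mmd(P, P), mmd(P, Q))   # first value is 0 up to rounding
```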
Closed forms are also available for polynomial kernels $k$ in the MMD; see
Muandet, Fukumizu, Dinuzzo, and Schölkopf (2012). Learning from Distributions via Support Measure Machines. In Advances in Neural Information Processing Systems (official version). arXiv:1202.6504.
For a lot of nice properties of this distance, see
Sriperumbudur, Gretton, Fukumizu, Schölkopf, and Lanckriet (2010). Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11, 1517–1561. arXiv:0907.5309.
Quadratic Jensen-Rényi divergence. The Rényi-$\alpha$ entropy is defined as
$$
H_\alpha(p) = \frac{1}{1-\alpha} \log\left( \int p(x)^\alpha \,\ud x \right)
.$$
Its limit as $\alpha \to 1$ is the Shannon entropy. The Jensen-Rényi divergence is
$$
\mathrm{JR}_\alpha(p, q) = H_\alpha\left( \frac{p + q}{2} \right) - \frac{H_\alpha(p) + H_\alpha(q)}{2}
$$
where $\frac{p + q}{2}$ denotes an equal mixture between $p$ and $q$.
It turns out that, when $\alpha = 2$ and when $P$ and $Q$ are Gaussian mixtures (as here), you can compute a closed form for $\mathrm{JR}_2$. This was done by
Wang, Syeda-Mahmood, Vemuri, Beymer, and Rangarajan (2009). Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration. Med Image Comput Comput Assist Interv., 12(1), 648–655. (free pubmed version)
18,954 | Distance between two Gaussian mixtures to evaluate cluster solutions | Here is a generalization of the Mahalanobis D to GMMs using the Fisher Kernel method and other techniques:
Tipping, Michael E. "Deriving cluster analytic distance functions from Gaussian mixture models." (1999): 815-820.
https://pdfs.semanticscholar.org/08d2/0f55442aeb79edfaaaafa7ad54c513ee1dcb.pdf
See also: Is there a multi-Gaussian version of the Mahalanobis distance?
18,955 | Distance between two Gaussian mixtures to evaluate cluster solutions | If your clusters are actually not Gaussian mixtures but arbitrarily shaped, your results may actually be much better when you produce much more clusters, then merge some again afterwards.
In many cases, one just chooses k to be arbitrarily high, e.g. 1000 for a large data set; in particular when you aren't really interested in the models, but just want to reduce the complexity of the data set via vector quantization.
18,956 | Question about sample autocovariance function | $\widehat{\gamma}$ is used to create covariance matrices: given "times" $t_1, t_2, \ldots, t_k$, it estimates that the covariance of the random vector $X_{t_1}, X_{t_2}, \ldots, X_{t_k}$ (obtained from the random field at those times) is the matrix $\left(\widehat{\gamma}(t_i - t_j), 1 \le i, j \le k\right)$. For many problems, such as prediction, it is crucial that all such matrices be nonsingular. As putative covariance matrices, obviously they cannot have any negative eigenvalues, whence they must all be positive-definite.
The simplest situation in which the distinction between the two formulas
$$\widehat{\gamma}(h) = n^{-1}\sum_{t=1}^{n-h}(x_{t+h}-\bar{x})(x_t-\bar{x})$$
and
$$\widehat{\gamma}_0(h) = (n-h)^{-1}\sum_{t=1}^{n-h}(x_{t+h}-\bar{x})(x_t-\bar{x})$$
appears is when $x$ has length $2$; say, $x = (0,1)$. For $t_1=t$ and $t_2 = t+1$ it's simple to compute
$$\widehat{\gamma}_0 = \left(
\begin{array}{cc}
\frac{1}{4} & -\frac{1}{4} \\
-\frac{1}{4} & \frac{1}{4}
\end{array}
\right),$$
which is singular, whereas
$$\widehat{\gamma} = \left(
\begin{array}{cc}
\frac{1}{4} & -\frac{1}{8} \\
-\frac{1}{8} & \frac{1}{4}
\end{array}
\right)$$
which has eigenvalues $3/8$ and $1/8$, whence it is positive-definite.
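Both computations can be reproduced in a few lines (an illustrative Python sketch of the two divisor conventions):

```python
import numpy as np

def acov(x, h, unbiased=False):
    """Sample autocovariance at lag h, with divisor n (default) or n - h."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    s = np.sum((x[h:] - xbar) * (x[: n - h] - xbar))
    return s / (n - h) if unbiased else s / n

x = [0.0, 1.0]
# 2x2 putative covariance matrices built from lags 0 and 1
G = np.array([[acov(x, 0), acov(x, 1)],
              [acov(x, 1), acov(x, 0)]])
G0 = np.array([[acov(x, 0, True), acov(x, 1, True)],
               [acov(x, 1, True), acov(x, 0, True)]])
print(np.linalg.eigvalsh(G))    # [0.125, 0.375]: positive-definite
print(np.linalg.eigvalsh(G0))   # smallest eigenvalue 0: singular
```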
A similar phenomenon happens for $x = (0,1,0,1)$, where $\widehat{\gamma}$ is positive-definite but $\widehat{\gamma}_0$--when applied to the times $t_i = (1,2,3,4)$, say--degenerates into a matrix of rank $1$ (its entries alternate between $1/4$ and $-1/4$).
(There is a pattern here: problems arise for any $x$ of the form $(a,b,a,b,\ldots,a,b)$.)
In most applications the series of observations $x_t$ is so long that for most $h$ of interest--which are much less than $n$--the difference between $n^{-1}$ and $(n-h)^{-1}$ is of no consequence. So in practice the distinction is no big deal and theoretically the need for positive-definiteness strongly overrides any possible desire for unbiased estimates.
18,957 | How to prove there is no finite-dimensional feature space for Gaussian RBF kernel? | The Moore-Aronszajn theorem guarantees that a symmetric positive definite kernel is associated to a unique reproducing kernel Hilbert space. (Note that while the RKHS is unique, the mapping itself is not.)
Therefore, your question can be answered by exhibiting an infinite-dimensional RKHS corresponding to the Gaussian kernel (or RBF). You can find an in-depth study of this in "An explicit description of the reproducing kernel Hilbert spaces of Gaussian RBF kernels", Steinwart et al.
18,958 | How to prove there is no finite-dimensional feature space for Gaussian RBF kernel? | Assume that the Gaussian RBF kernel $k(x, y)$ is defined on a domain $X \times X$ where $X$ contains an infinite number of vectors. One can prove (Gaussian Kernels, Why are they full rank?) that for any set of distinct vectors $x_1, ..., x_m \in X$ the matrix $(k(x_i, x_j))_{m \times m}$ is not singular, which means that the vectors $\mathrm\Phi(x_1), ..., \mathrm\Phi(x_m)$ are linearly independent. Thus, a feature space $H$ for the kernel $k$ cannot have a finite number of dimensions.
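The non-singularity of the Gram matrix for distinct points is easy to probe numerically (an illustrative sketch; the 1-D point grids and unit bandwidth are arbitrary choices):

```python
import numpy as np

def rbf_gram(x, sigma2=1.0):
    """Gram matrix of the Gaussian RBF kernel for a vector of 1-D points."""
    x = np.asarray(x, dtype=float)
    return np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * sigma2))

for m in (5, 10, 20):
    x = np.linspace(-5.0, 5.0, m)        # m distinct points
    K = rbf_gram(x)
    print(m, np.linalg.matrix_rank(K))   # full rank m for every m
```

Since the rank equals $m$ no matter how large $m$ gets, the feature vectors cannot all live in a fixed finite-dimensional space.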
18,959 | Is there an alternative to the Kolmogorov-Smirnov test for tied data with correction? | Instead of using the KS test you could simply use a permutation or resampling procedure as implemented in the oneway_test function of the coin package. Have a look at the accepted answer to this question.
Update: My package afex contains the function compare.2.vectors implementing a permutation and other tests for two vectors. You can get it from CRAN:
install.packages("afex")
For two vectors x and y it (currently) returns something like:
> compare.2.vectors(x,y)
$parametric
test test.statistic test.value test.df p
1 t t -1.861 18.00 0.07919
2 Welch t -1.861 17.78 0.07939
$nonparametric
test test.statistic test.value test.df p
1 stats::Wilcoxon W 25.500 NA 0.06933
2 permutation Z -1.751 NA 0.08154
3 coin::Wilcoxon Z -1.854 NA 0.06487
4 median Z 1.744 NA 0.17867
Any comments regarding this function are highly welcomed.
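For readers outside R, the basic permutation idea behind such tests can be sketched as follows (an illustrative Python version of a two-sample permutation test; it is not the coin or afex implementation, and the data are made up):

```python
import numpy as np

def perm_test_mean_diff(x, y, n_perm=10000, seed=0):
    """Two-sample permutation test: p-value for the observed |mean(x) - mean(y)|
    under random relabelings of the pooled data."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n = len(x)
    obs = abs(np.mean(x) - np.mean(y))
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:n].mean() - pooled[n:].mean()) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one smoothing keeps p > 0

x = np.array([1.2, 0.8, 1.5, 0.9, 1.1])
y = np.array([2.4, 2.1, 2.6, 1.9, 2.3])
print(perm_test_mean_diff(x, y))
```

Because no distributional assumption is used, ties in the data cause no special difficulty, which is the attraction over the KS test here.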
18,960 | Is bootstrapping a valid method to assess the uncertainty of the median estimate? | The median can be bootstrapped and estimation of the median is a good application of the bootstrap. Staudte and Sheather (1990, pp. 83-85), described here, derive the exact calculation of the bootstrap estimate of the standard error of the estimate of the median that was originally derived in a paper by Maritz and Jarrett in 1978. Details of this can be found on pages 48-50 of my book on the bootstrap here on amazon.com.
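The simple Monte-Carlo version of the bootstrap standard error of the median can be sketched as follows (illustrative; this resampling approach approximates, rather than reproduces, the exact Maritz-Jarrett calculation, and the simulated data are made up):

```python
import numpy as np

def bootstrap_median_se(x, n_boot=5000, seed=0):
    """Monte-Carlo bootstrap estimate of the standard error of the sample median."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    meds = np.array([np.median(rng.choice(x, size=len(x), replace=True))
                     for _ in range(n_boot)])
    return meds.std(ddof=1)

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, size=101)
se = bootstrap_median_se(x)
print(se)
```

For a standard normal sample of size 101 the asymptotic standard error of the median is $1/(2 f(0) \sqrt{n}) = \sqrt{\pi/2}/\sqrt{101} \approx 0.125$, and the bootstrap estimate lands in that neighborhood.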
18,961 | Finding local extrema of a density function using splines | What you want to do is called peak detection in chemometrics. There are various methods you can use for that. I demonstrate only a very simple approach here.
require(graphics)
#some data
d <- density(faithful$eruptions, bw = "sj")
#make it a time series
ts_y<-ts(d$y)
#calculate turning points (extrema)
require(pastecs)
tp<-turnpoints(ts_y)
#plot
plot(d)
points(d$x[tp$tppos],d$y[tp$tppos],col="red")
18,962 | Combining classifiers by flipping a coin | (Edited)
The lecture slides are right.
Method A has an "optimal point" that gives true and false positive rates of (TPA, FPA in the graph), respectively. This point would correspond to a threshold or, more generally[*], an optimal decision boundary for A. The same goes for B. (But the thresholds and the boundaries are not related.)
It's seen that classifier A performs well under the preference "minimize false positives"
(conservative strategy)
and classifier B when we want to "maximize true positives"
(eager strategy).
The answer to your first question is basically yes, except that the probability of the coin is (in some sense)
arbitrary. The final classifier would be:
If $x$ belongs to the "optimal acceptance region for A" (conservative), use classifier A (i.e., accept it)
If $x$ belongs to the "optimal rejection region for B" (eager), use classifier B (i.e., reject it)
Elsewhere, flip a coin with probability $p$ and use classifier A or B.
(Corrected: actually, the lectures are completely right, we can just flip the coin in any case. See diagrams)
You can use any fixed $p$ in the range (0,1), it depends on whether you want to be more or less conservative, i.e., if you want to be more near to one of the points or in the middle.
[*] You should be general here: if you think in terms of a single scalar threshold, all this makes little sense; a one-dimensional feature with a threshold-based classifier does not give you enough degrees of freedom to have different classifiers such as A and B that perform along different curves as the free parameter (decision boundary = threshold) varies. In other words: A and B are called "methods" or "systems", not "classifiers", because A is a whole family of classifiers, parametrized by some scalar parameter that determines a decision boundary, not just a single scalar.
I added some diagrams to make it more clear:
Suppose a two-dimensional feature: the diagram displays some samples, the green points being the "good" ones and the red the "bad" ones. Suppose that method A has a tunable parameter $t$ (threshold, offset, bias); higher values of $t$ make the classifier more eager to accept ('Yes'). The orange lines correspond to the decision boundary of this method for different values of $t$. It's seen that this method (actually a family of classifiers) performs particularly well for $t_A=2$, in the sense that it has very few false positives for a moderate amount of true positives. By contrast, method B (blue), which has its own tunable parameter $t$ (unrelated to that of A), performs particularly well ($t_B=4$) in the region of high acceptance: the filled blue line attains a high true positive ratio.
In this scenario, then, one can say that the filled orange line is the "optimal A classifier" (inside its family), and the same for B. But one cannot tell whether the orange line is better than the blue line: one performs better when we assign high cost to false positives, the other when false negatives are much more costly.
Now, it might happen that these two classifiers are too extreme for our needs, and we'd like both types of error to carry similar weights. Instead of using classifier A (orange dot) or B (blue dot), we'd prefer to attain a performance in between them. As the course says, one can attain that result by just flipping a coin and choosing one of the classifiers at random.
Just by simply flipping a coin, how can we gain information?
We don't gain information. Our new randomized classifier is not simply "better" than A or B; its performance is a sort of average of A and B with respect to the costs assigned to each type of error. That may or may not be beneficial to us, depending on our costs.
AFAIK, the correct way (as suggested by the book) is the following ... Is this correct?
Not really. The correct way is simply: flip a coin with probability $p$, choose a classifier (the optimal A or the optimal B), and classify using that classifier.
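To see numerically that flipping a coin interpolates between the two operating points, here is a small simulation (the operating points and the coin bias $p$ are made-up values, not anything from the course):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
y = rng.integers(0, 2, n)            # true labels

# Hypothetical operating points: A is conservative, B is eager.
tpr_a, fpr_a = 0.60, 0.05
tpr_b, fpr_b = 0.95, 0.40

def simulate(tpr, fpr):
    """Simulate predictions of a classifier with the given TPR/FPR."""
    u = rng.random(n)
    return np.where(y == 1, u < tpr, u < fpr).astype(int)

p = 0.5                              # coin bias: probability of using A
use_a = rng.random(n) < p
pred = np.where(use_a, simulate(tpr_a, fpr_a), simulate(tpr_b, fpr_b))

tpr = pred[y == 1].mean()            # ≈ p*tpr_a + (1-p)*tpr_b = 0.775
fpr = pred[y == 0].mean()            # ≈ p*fpr_a + (1-p)*fpr_b = 0.225
print(tpr, fpr)
```

The resulting (FPR, TPR) point lies (up to sampling noise) on the straight segment joining A's and B's points, which is exactly the "average of A and B" described above.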
18,963 | Combining classifiers by flipping a coin | I agree with your reasoning. If you use coin flipping to pick one of the classifiers when you are between points A and B, your point on the curve will always be below the better classifier and above the poorer one, and cannot possibly be above both! There must be something wrong with the diagram. At the point where the two ROC curves cross, the random-selection algorithm will have the same performance as the two algorithms; it will not be above them the way the diagram depicts.
18,964 | Generating values from a multivariate Gaussian distribution | If $X \sim \mathcal{N}(0,I)$ is a column vector of standard normal RV's, then if you set $Y = L X$, the covariance of $Y$ is $L L^T$.
I think the problem you're having may arise from the fact that matlab's mvnrnd function returns row vectors as samples, even if you specify the mean as a column vector. e.g.,
> size(mvnrnd(ones(10,1), eye(10)))
> ans =
> 1 10
And note that transforming a row vector gives you the opposite formula: if $X$ is a row vector, then $Z = X L^T$ is also a row vector, so $Z^T = L X^T$ is a column vector, and the covariance of $Z^T$ can be written $E[Z^T Z] = LL^T$.
Based on what you wrote though, the Wikipedia formula is correct: if $\Phi^{-1}(U)$ were a row vector returned by matlab, you can't left-multiply it by $L^T$. (But right-multiplying by $L^T$ would give you a sample with the same covariance $LL^T$.)
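The two conventions are easy to check numerically; here is a NumPy sketch with a made-up covariance matrix (matlab isn't needed to see the point):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target covariance and its Cholesky factor L, so that Sigma = L @ L.T.
Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])
L = np.linalg.cholesky(Sigma)

n = 200_000
X = rng.standard_normal((2, n))      # columns are iid N(0, I) vectors
Y = L @ X                            # column convention: Y = L X
print(np.cov(Y))                     # ≈ Sigma

Z = X.T @ L.T                        # row convention: right-multiply by L^T
print(np.cov(Z, rowvar=False))       # ≈ Sigma as well
```

Both empirical covariances come out close to `Sigma`, confirming that left-multiplying columns by $L$ and right-multiplying rows by $L^T$ are the same transformation.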
18,965 | Supervised learning with "rare" events, when rarity is due to the large number of counter-factual events | If I understand correctly, you have a two-class classification problem, where the positive class (matches) is rare. Many classifiers struggle with such a class imbalance, and it is common practice to sub-sample the majority class in order to obtain better performance, so the answer to the first question is "yes". However, if you sub-sample too much, you will end up with a classifier that over-predicts the minority positive class, so the best thing to do is to choose the sub-sampling ratio to maximise performance, perhaps by minimising the cross-validation error, where the test data has not been sub-sampled so you get a good indication of operational performance.
If you have a probabilistic classifier, one that gives an estimate of the probability of class membership, you can go one better and post-process the output to compensate for the difference between class frequencies in the training set and in operation. I suspect that for some classifiers, the optimal approach is to optimise both the sub-sampling ratio and the correction to the output by optimising the cross-validation error.
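That post-processing can be made concrete with the standard odds correction for a change in class prevalence (an illustration of the idea only; the answer doesn't prescribe a specific formula):

```python
def correct_probability(p, pi_train, pi_op):
    """Adjust a predicted P(positive) for a change in class prevalence
    between training and operation (standard odds correction; an
    illustration, not a formula from the original answer)."""
    r = (pi_op / (1 - pi_op)) / (pi_train / (1 - pi_train))
    odds = p / (1 - p) * r
    return odds / (1 + odds)

# A classifier trained on a balanced 50/50 subsample, deployed where
# positives really occur 2% of the time:
print(correct_probability(0.5, 0.5, 0.02))   # ≈ 0.02
```

A score of 0.5 on the balanced training distribution corresponds to exactly the prior 0.02 in operation, as it should when the classifier carries no information beyond the prior.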
Rather than sub-sampling, for some classifiers (e.g. SVMs) you can give different weights to positive and negative patterns. I prefer this to sub-sampling as it means there is no variability in the results due to the particular sub-sample used. Where this is not possible, use bootstrapping to make a bagged classifier, where a different sub-sample of the majority class is used in each iteration.
The one other thing I would say is that commonly, where there is a large class imbalance, false negative errors and false positive errors are not equally bad, and it is a good idea to build this into the classifier design (which can be accomplished by sub-sampling or by weighting patterns belonging to each class).
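As a hedged illustration of weighting the two classes instead of sub-sampling, here is scikit-learn's `class_weight` option on a made-up imbalanced problem:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Made-up imbalanced problem: roughly 2% positives.
X, y = make_classification(n_samples=20_000, weights=[0.98, 0.02],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(max_iter=1000,
                              class_weight="balanced").fit(X_tr, y_tr)

# Weighting the rare class trades false positives for a better
# detection rate of the rare positives:
print("recall (unweighted):", recall_score(y_te, plain.predict(X_te)))
print("recall (balanced):  ", recall_score(y_te, weighted.predict(X_te)))
```

The same `class_weight` mechanism exists for SVMs (`sklearn.svm.SVC`), matching the preference stated above for weighting over sub-sampling.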
18,966 | Supervised learning with "rare" events, when rarity is due to the large number of counter-factual events | Concerning (1): you need to keep both positive and negative observations if you want meaningful results.
(2) There is no wiser method of subsampling than the uniform distribution if you don't have any a priori knowledge of your data.
18,967 | Can I change the proposal distribution in random-walk MH MCMC without affecting Markovianity? | I think that this paper from Heikki Haario et al. will give you the answer you need. The Markovianity of the chain is affected by the adaptation of the proposal density, because then a new proposed value depends not only on the previous one but on the whole chain. But it seems that the sequence still has the good properties if great care is taken.
18,968 | Can I change the proposal distribution in random-walk MH MCMC without affecting Markovianity? | You can improve the acceptance rate using delayed rejection as described in Tierney and Mira (1999). It is based on a second proposal function and a second acceptance probability, which guarantees the Markov chain is still reversible with the same invariant distribution. You have to be cautious, since "it is easy to construct adaptive methods that might seem to work but in fact sample from the wrong distribution".
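A minimal one-dimensional sketch of the delayed-rejection idea with symmetric Gaussian proposals (my own illustration of the two-stage acceptance rule, not the exact construction from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def target(x):                        # unnormalized standard normal target
    return np.exp(-0.5 * x * x)

def q1_pdf(to, frm, s):               # density of the stage-1 proposal
    return np.exp(-0.5 * ((to - frm) / s) ** 2) / (s * np.sqrt(2 * np.pi))

s1, s2 = 5.0, 1.0                     # bold first try, timid second try
n_iter = 50_000
x = 0.0
chain = np.empty(n_iter)

for i in range(n_iter):
    y1 = x + s1 * rng.standard_normal()
    a1 = min(1.0, target(y1) / target(x))
    if rng.random() < a1:
        x = y1
    else:
        # Second stage: the acceptance ratio reweights by the probability
        # that y1 would also have been rejected from y2 (the symmetric
        # stage-2 proposal densities cancel from the ratio).
        y2 = x + s2 * rng.standard_normal()
        a1_rev = min(1.0, target(y1) / target(y2))
        num = target(y2) * q1_pdf(y1, y2, s1) * (1.0 - a1_rev)
        den = target(x) * q1_pdf(y1, x, s1) * (1.0 - a1)
        if den > 0.0 and rng.random() < min(1.0, num / den):
            x = y2
    chain[i] = x

print(chain.mean(), chain.std())      # ≈ 0 and ≈ 1 for the N(0,1) target
```

The second-stage correction factor is what keeps the chain reversible with the same invariant distribution; dropping it would be exactly the kind of "seems to work" adaptation the quote warns about.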
18,969 | Can I change the proposal distribution in random-walk MH MCMC without affecting Markovianity? | The approaches suggested by users wok and robertsy cover the most commonly cited examples of what you're looking for that I know of. Just to expand on those answers, Haario and Mira wrote a paper in 2006 that combines the two approaches, an approach they call DRAM (delayed rejection adaptive Metropolis).
Andrieu has a nice treatment of various different adaptive MCMC approaches (pdf) which covers Haario 2001 but also discusses various alternatives that have been proposed in recent years.
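As a minimal illustration of the adaptive flavour these papers study: a 1-D random-walk Metropolis sampler whose proposal scale is tuned with a vanishing step size (diminishing adaptation in the spirit of the adaptive-MCMC literature; not the algorithm of any one cited paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Illustrative target: standard normal.
    return -0.5 * x * x

n_iter = 20_000
x = 0.0
chain = np.empty(n_iter)
log_s = 0.0          # log of the proposal standard deviation
target_acc = 0.44    # common 1-D target acceptance rate

for i in range(n_iter):
    prop = x + np.exp(log_s) * rng.standard_normal()
    accept = np.log(rng.random()) < log_target(prop) - log_target(x)
    if accept:
        x = prop
    # Robbins-Monro update with a vanishing step size, so the adaptation
    # dies out and the ergodic averages remain valid.
    log_s += (accept - target_acc) / (i + 1) ** 0.6
    chain[i] = x

print(chain[5000:].mean(), chain[5000:].std())   # ≈ 0, ≈ 1
```

Because the step size shrinks, the proposal eventually freezes (diminishing adaptation), which is one of the sufficient conditions discussed in Andrieu's review for sampling from the right distribution.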
18,970 | Can I change the proposal distribution in random-walk MH MCMC without affecting Markovianity? | This is a bit of a shameless plug of a publication of mine, but we do exactly this in this work (arxiv). Amongst other things, we propose adapting the variance of the exponential distribution to improve the acceptance (step S3.2 of the algorithm in the paper).
In our case, asymptotically the adaptation does not change the proposal distribution (which in the paper is when $f \rightarrow 1$). Thus, asymptotically, the process is still Markovian in the same spirit as the Wang-Landau algorithm. We numerically verify that the process is ergodic and the chain samples from the target distribution we choose (e.g. left bottom panel of Fig. 4).
We don't use information about the acceptance rate, but we obtain an acceptance independent of the quantity we are interested in (equivalent to the energy of a spin system, bottom-right of Fig. 4).
18,971 | Flowcharts to help selecting the proper analysis technique and test | These are not really interactive flowcharts, but maybe this could be useful: (1) http://j.mp/cmakYq, (2) http://j.mp/aaxUsz, and (3) http://j.mp/bDMyAR.
18,972 | Flowcharts to help selecting the proper analysis technique and test | You can look at the solution given on the question "Statistical models cheat sheet"
18,973 | Flowcharts to help selecting the proper analysis technique and test | I have made this https://www.statsflowchart.co.uk, an interactive stats test flowchart. Let me know what you think. (This is based on an Andy Field flowchart from a text-book, linked on the site).
18,974 | Estimating mean and st dev of a truncated gaussian curve without spike | The model for your data would be:
$y_i \sim N(\mu,\sigma^2) I(y_i > 0)$
Thus, the density function is:
$$f(y_i \mid \mu, \sigma) = \frac{\exp\!\left(-\frac{(y_i-\mu)^2}{2\sigma^2}\right)}{\sqrt{2\pi}\,\sigma\left(1 - \Phi\!\left(-\frac{\mu}{\sigma}\right)\right)}$$
where
$\Phi(\cdot)$ is the standard normal cdf.
You can then estimate the parameters $\mu$ and $\sigma$ using either maximum likelihood or Bayesian methods.
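A maximum-likelihood sketch with SciPy, on simulated data whose true parameters are made up for the illustration (the log-likelihood is the log of the density above, with $\log(1-\Phi(-\mu/\sigma))$ computed via `norm.logsf`):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, truncnorm

# Simulated data from a normal (mu=1, sigma=2) truncated to y > 0.
mu_true, sigma_true = 1.0, 2.0
a = (0.0 - mu_true) / sigma_true          # lower bound in standard units
y = truncnorm.rvs(a, np.inf, loc=mu_true, scale=sigma_true,
                  size=20_000, random_state=0)

def neg_log_lik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)             # keeps sigma > 0
    # log f(y) = log N(y; mu, sigma) - log P(Y > 0)
    return -(norm.logpdf(y, mu, sigma) - norm.logsf(0.0, mu, sigma)).sum()

res = minimize(neg_log_lik, x0=[y.mean(), np.log(y.std())])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)                  # ≈ 1.0 and ≈ 2.0
```

Note the optimisation is done over $\log\sigma$ so the positivity constraint never has to be enforced explicitly.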
18,975 | Estimating mean and st dev of a truncated gaussian curve without spike | As Srikant Vadali has suggested, Cohen and Hald solved this problem using ML (with a Newton-Raphson root finder) around 1950. Another paper is Max Halperin's "Estimation in the Truncated Normal Distribution" available on JSTOR (for those with access). Googling "truncated gaussian estimation" produces lots of useful-looking hits.
Details are provided in a thread that generalizes this question (to truncated distributions generally). See Maximum likelihood estimators for a truncated distribution. It might also be of interest to compare the Maximum Likelihood estimators to the Maximum Entropy solution given (with code) at Max Entropy Solver in R.
18,976 | Estimating mean and st dev of a truncated gaussian curve without spike | With a technical border $TB$ at $a=0$, a simplified approach by H. Schneider is very useful for calculating the mean $\mu_t$ and standard deviation $\sigma_t$ of the truncated normal distribution:
calculate the mean $\mu$ and the standard deviation $\sigma$ (entire population!) for the data set:
$\mu = \bar{x}= \frac{1}{n} \sum_{i=1}^{n}x_i$
$\sigma = s = \sqrt{\frac{1}{n} \sum_{i=1}^{n}(x_i-\bar{x})^2}$
check if the technical border $TB=a=0$ has a valid distance to the average $\bar{x}$:
consideration of $TB = a$ is not necessary when $\bar{x} > 3s$ (i.e., when the border is more than three standard deviations away from the mean)
calculate $\omega, P_3(\omega), P_4(\omega)$ and $Q(\omega)$:
$\omega = \frac{s^2}{(a-\bar{x})^2}$
$P_3(\omega) = 1 + 5.74050101\,\omega - 13.53427037\,\omega^2 + 6.88665552\,\omega^3$
$P_4(\omega) = -0.00374615 + 0.17462558\,\omega - 2.87168509\,\omega^2 + 17.48932655\,\omega^3 - 11.91716546\,\omega^4$
$Q(\omega) = \frac{P_4(\omega)}{P_3(\omega)}$
check if $\omega \le 0.57081$, otherwise the mean $\mu_t$ is $<0$ which is not useful technically
calculate $\mu_t$ and $\sigma_t$ for the truncated normal distribution:
$\mu_t = \bar{x} + Q(\omega) \cdot (a-\bar{x})$
$\sigma_{t}^{2} = s^2 + Q(\omega) \cdot (a-\bar{x})^2$
That's all...
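The recipe above can be sketched in code (a hedged illustration on simulated data; the polynomial constants are the ones from the steps above, with decimal commas converted to points):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated process data: a N(1, 2^2) population, observed only above the
# technical border a = 0.
base = rng.normal(1.0, 2.0, 200_000)
x = base[base > 0.0]

a = 0.0
xbar = x.mean()
s = x.std()                              # "entire population" form (ddof=0)

omega = s**2 / (a - xbar) ** 2
P3 = 1 + 5.74050101*omega - 13.53427037*omega**2 + 6.88665552*omega**3
P4 = (-0.00374615 + 0.17462558*omega - 2.87168509*omega**2
      + 17.48932655*omega**3 - 11.91716546*omega**4)
Q = P4 / P3
assert omega <= 0.57081                  # otherwise mu_t would come out < 0

mu_t = xbar + Q * (a - xbar)
sigma_t = np.sqrt(s**2 + Q * (a - xbar) ** 2)
print(mu_t, sigma_t)                     # recovers ≈ 1.0 and ≈ 2.0
```

On this example the approximation recovers the untruncated parameters almost exactly, even though only the observations above the border were used.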
18,977 | What is dense prediction in Deep learning? | In computer vision, pixelwise dense prediction is the task of predicting a label for each pixel in the image: https://arxiv.org/abs/1611.09288
In computer vision pixelwise dense prediction is the task of predicting a label for each pixel in the image https://arxiv.org/abs/1611.09288 | What is dense prediction in Deep learning?
In computer vision pixelwise dense prediction is the task of predicting a label for each pixel in the image https://arxiv.org/abs/1611.09288 |
18,978 | What is dense prediction in Deep learning?
Agreeing with Thomas, a dense prediction task is one producing dense output for the given input. In computer vision, unlike image classification, tasks such as semantic segmentation, instance segmentation, etc. are considered dense prediction tasks because a label for each pixel is predicted.
18,979 | How to obtain optimal hyperparameters after nested cross validation?
Overview
As @RockTheStar correctly concluded in the comments, nested cross-validation is used only to assess the model performance estimate. Dissociated from that, to find the best hyperparameters we need to do a simple tuning with cross-validation on the whole data.
In detail:
Tuning and validation (inner and outer resampling loops)
In the inner loop you perform hyperparameter tuning: models are trained on the training data and validated on the validation data. You find the optimal parameters and train your model on the whole inner-loop data. Because the model was tuned to optimize performance on the validation data, that evaluation is biased.
So this model is then tested on the outer test data, where hopefully there is no bias, giving you a performance estimate.
The final model
Now that you know the expected performance of your model you have to train it with all your data. But our model isn't simply the algorithm, it's the whole model building process!
So perform hyperparameter tuning with all your data and the same specifications as the inner loop. With the best hyperparameters, train your final model on the whole data. The expected performance of this final model is what you evaluated with nested cross-validation earlier.
To reiterate, the hyperparameters of the final model are those you expect will give you the performance you found in the tuning and validation step.
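The procedure above can be sketched schematically. The "model" below is a toy shrinkage estimator and all names are illustrative; the point is the structure: nested CV for the performance estimate, then one plain tuning pass on all data for the final model.

```python
import random

def kfold(n, k):
    """Yield (train, test) index lists for k folds over n items."""
    folds = [list(range(n))[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def fit(y, train, lam):
    """'Train' the toy model: shrink the training mean by factor lam."""
    return lam * sum(y[j] for j in train) / len(train)

def mse(y, test, pred):
    return sum((y[j] - pred) ** 2 for j in test) / len(test)

def tune(y, idx, grid, k=3):
    """Plain (non-nested) CV tuning restricted to the observations in idx."""
    sub = [y[i] for i in idx]
    def cv_err(lam):
        errs = [mse(sub, te, fit(sub, tr, lam)) for tr, te in kfold(len(sub), k)]
        return sum(errs) / len(errs)
    return min(grid, key=cv_err)

random.seed(0)
y = [random.gauss(2.0, 1.0) for _ in range(60)]
grid = [0.0, 0.5, 0.9, 1.0]

# Nested CV: the inner tuning sees only the outer-training data, so the outer
# test scores estimate the performance of the whole model-building process.
outer_scores = []
for tr, te in kfold(len(y), 5):
    lam = tune(y, tr, grid)
    outer_scores.append(mse(y, te, fit(y, tr, lam)))
est_performance = sum(outer_scores) / len(outer_scores)

# Final model: tune once on ALL the data, then fit on all the data. Its
# expected performance is est_performance from the nested CV above.
best_lam = tune(y, list(range(len(y))), grid)
final_model = fit(y, list(range(len(y))), best_lam)
```

Note the final model's hyperparameter may differ from fold to fold in the outer loop; only the whole-data tuning result is kept.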
18,980 | Confidence intervals on predictions for a non-linear mixed model (nlme)
What you've done here looks reasonable. The short answer is that for the most part the issues of predicting confidence intervals from mixed models and from nonlinear models are more or less orthogonal; that is, you need to worry about both sets of problems, but they don't (that I know of) interact in any strange ways.
Mixed model issues: are you trying to predict at the population or the group level? How do you account for variability in the random-effects parameters? Are you conditioning on the group-level observations or not?
Nonlinear model issues: is the sampling distribution of the parameters Normal? How do I account for nonlinearity when propagating error?
Throughout, I will assume you're predicting at the population level and constructing confidence intervals at the population level - in other words you're trying to plot the predicted values of a typical group, and not including the among-group variation in your confidence intervals. This simplifies the mixed-model issues. The following plots compare three approaches (see below for code dump):
population prediction intervals: this is the approach you tried above. It assumes the model is correct and that the sampling distributions of the fixed-effect parameters are multivariate Normal; it also ignores uncertainty in the random-effects parameters
bootstrapping: I implemented hierarchical bootstrapping; we resample both at the level of groups and within groups. The within-group sampling samples the residuals and adds them back to the predictions. This approach makes the fewest assumptions.
delta method: this assumes both multivariate Normality of sampling distributions and that the nonlinearity is weak enough to allow a second-order approximation.
We could also do parametric bootstrapping ...
Here are the CIs plotted along with the data ...
... but we can hardly see the differences.
Zooming in by subtracting off the predicted values (red=bootstrap, blue=PPI, cyan=delta method)
In this case the bootstrap intervals are actually narrowest (presumably the sampling distributions of the parameters are slightly thinner-tailed than Normal), while the PPI and delta-method intervals are very similar to each other.
library(nlme)
library(MASS)
fm1 <- nlme(height ~ SSasymp(age, Asym, R0, lrc),
data = Loblolly,
fixed = Asym + R0 + lrc ~ 1,
random = Asym ~ 1,
start = c(Asym = 103, R0 = -8.5, lrc = -3.3))
xvals <- with(Loblolly,seq(min(age),max(age),length.out=100))
nresamp <- 1000
## pick new parameter values by sampling from multivariate normal distribution based on fit
pars.picked <- mvrnorm(nresamp, mu = fixef(fm1), Sigma = vcov(fm1))
## predicted values: useful below
pframe <- with(Loblolly,data.frame(age=xvals))
pframe$height <- predict(fm1,newdata=pframe,level=0)
## utility function
get_CI <- function(y,pref="") {
r1 <- t(apply(y,1,quantile,c(0.025,0.975)))
setNames(as.data.frame(r1),paste0(pref,c("lwr","upr")))
}
set.seed(101)
yvals <- apply(pars.picked,1,
function(x) { SSasymp(xvals,x[1], x[2], x[3]) }
)
c1 <- get_CI(yvals)
## bootstrapping
sampfun <- function(fitted,data,idvar="Seed") {
    pp <- predict(fitted, level = 1)  ## predict.lme takes 'level', not 'levels'
rr <- residuals(fitted)
dd <- data.frame(data,pred=pp,res=rr)
## sample groups with replacement
iv <- levels(data[[idvar]])
bsamp1 <- sample(iv,size=length(iv),replace=TRUE)
bsamp2 <- lapply(bsamp1,
function(x) {
## within groups, sample *residuals* with replacement
ddb <- dd[dd[[idvar]]==x,]
## bootstrapped response = pred + bootstrapped residual
ddb$height <- ddb$pred +
sample(ddb$res,size=nrow(ddb),replace=TRUE)
return(ddb)
})
res <- do.call(rbind,bsamp2) ## collect results
if (is(data,"groupedData"))
res <- groupedData(res,formula=formula(data))
return(res)
}
pfun <- function(fm) {
predict(fm,newdata=pframe,level=0)
}
set.seed(101)
yvals2 <- replicate(nresamp,
pfun(update(fm1,data=sampfun(fm1,Loblolly,"Seed"))))
c2 <- get_CI(yvals2,"boot_")
## delta method
ss0 <- with(as.list(fixef(fm1)),SSasymp(xvals,Asym,R0,lrc))
gg <- attr(ss0,"gradient")
V <- vcov(fm1)
delta_sd <- sqrt(diag(gg %*% V %*% t(gg)))
c3 <- with(pframe,data.frame(delta_lwr=height-1.96*delta_sd,
delta_upr=height+1.96*delta_sd))
pframe <- data.frame(pframe,c1,c2,c3)
library(ggplot2); theme_set(theme_bw())
ggplot(Loblolly,aes(age,height))+
geom_line(alpha=0.2,aes(group=Seed))+
geom_line(data=pframe,col="red")+
geom_ribbon(data=pframe,aes(ymin=lwr,ymax=upr),colour=NA,alpha=0.3,
fill="blue")+
geom_ribbon(data=pframe,aes(ymin=boot_lwr,ymax=boot_upr),
colour=NA,alpha=0.3,
fill="red")+
geom_ribbon(data=pframe,aes(ymin=delta_lwr,ymax=delta_upr),
colour=NA,alpha=0.3,
fill="cyan")
ggplot(Loblolly,aes(age))+
geom_hline(yintercept=0,lty=2)+
geom_ribbon(data=pframe,aes(ymin=lwr-height,ymax=upr-height),
colour="blue",
fill=NA)+
geom_ribbon(data=pframe,aes(ymin=boot_lwr-height,ymax=boot_upr-height),
colour="red",
fill=NA)+
geom_ribbon(data=pframe,aes(ymin=delta_lwr-height,ymax=delta_upr-height),
colour="cyan",
                fill=NA)
18,981 | Are Residual Networks related to Gradient Boosting?
Potentially relevant is a newer paper from the Langford and Schapire team that attempts to address more of this: Learning Deep ResNet Blocks Sequentially using Boosting Theory
Parts of interest are (See section 3):
The key difference is that boosting is an ensemble of estimated
hypothesis whereas ResNet is an ensemble of estimated feature
representations $\sum_{t=0}^T f_t(g_t(x))$. To solve this problem, we
introduce an auxiliary linear classifier $\mathbf{w}_t$ on top of each
residual block to construct a hypothesis module. Formally a
hypothesis module is defined as $$o_t(x) := \mathbf{w}_t^T g_t(x) \in \mathbb{R}$$
...
(where) $o_t(x) = \sum_{{t'} = 0}^{t-1} \mathbf{w}_t^T f_{t'}(g_{t'}(x))$
The paper goes into much more detail around the construction of the weak module classifier $h_t(x)$ and how that integrates with their BoostResNet algorithm.
Adding a bit more detail to this answer: all boosting algorithms can be written in some form of [1] (pp. 5, 180, 185...):
$$F_T(x) := \sum_{t=0}^T \alpha_t h_t(x)$$
Where $h_t$ is the $t^{th}$ weak hypothesis, for some choice of $\alpha_t$. Note that different boosting algorithms will yield $\alpha_t$ and $h_t$ in different ways.
For example AdaBoost [1](p 5.) uses $h_t$ to minimize the weighted error $\epsilon_t$ with $\alpha_t = \frac{1}{2} \log \frac{1- \epsilon_t}{\epsilon_t}$
On the other hand, in the gradient boosting setting [1] (p. 190), $h_t$ is selected to maximize $\nabla\mathcal{L}(F_{t-1}(x)) \cdot h_t$, and $\alpha_t > 0$ is chosen (as a learning rate, etc.)
Whereas in [2], under Lemma 3.2, it is shown that the output of a depth-$T$ ResNet is $F(x)$, which is equivalent to
$$F(x) \propto \sum_{t=0}^T h_t(x)$$
This completes the relationship between boosting and ResNet. The paper [2] proposes adding an auxiliary linear layer to get it into the form $F_T(x) := \sum_{t=0}^T \alpha_t h_t(x)$, which leads to their BoostResNet algorithm and some discussion around it.
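To make the additive view concrete, here is a toy numeric check (with made-up scalar residual functions) that unrolling identity-skip blocks $g_{t+1}(x) = g_t(x) + f_t(g_t(x))$ yields the input plus a sum of residual terms - exactly the additive-ensemble form $\sum_t h_t(x)$ discussed above:

```python
# Three arbitrary toy residual functions standing in for the blocks f_t
residual_blocks = [lambda z: 0.5 * z, lambda z: 0.1 * z ** 2, lambda z: -0.2 * z]

def resnet_forward(x, blocks):
    """Apply identity-skip residual blocks, recording each residual term."""
    g, terms = x, []
    for f in blocks:
        h = f(g)          # residual term computed from the current features
        terms.append(h)
        g = g + h         # identity skip connection
    return g, terms

out, terms = resnet_forward(1.0, residual_blocks)
# The output telescopes: input plus the sum of all residual terms.
```

The check holds exactly because each block only adds its residual onto the running features, so the composition telescopes into a sum.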
[1] Robert E. Schapire and Yoav Freund. 2012. Boosting: Foundations and Algorithms. The MIT Press. p 5, 180, 189
[2] Furong Huang, Jordan Ash, John Langford, Robert Schapire: Learning Deep ResNet Blocks Sequentially using Boosting Theory, ICML 2018
18,982 | Are Residual Networks related to Gradient Boosting?
Answering my own question: I have found a notable paper that investigates and argues that Deep Residual Networks are indeed an ensemble of shallow networks.
ANOTHER EDIT, after comprehending this issue a bit more:
I look at ResNets as a way to learn 'feature boosting'. The residual connection performs boosting, not on the objective, but on the output features of the next layer. So they are in fact connected, but it's not classical gradient boosting; it is, in fact, 'gradient feature boosting'.
18,983 | Different ways to produce a confidence interval for odds ratio from logistic regression
The justification for the procedure is the asymptotic normality of the MLE for $\beta$, which results from arguments involving the Central Limit Theorem.
The Delta method comes from a linear (i.e., first-order Taylor) expansion of the function around the MLE. Subsequently we appeal to the asymptotic normality and unbiasedness of the MLE.
Asymptotically both give the same answer. But practically, you would favor the one whose sampling distribution looks closer to normal. In this example, I would favor the first one, because the latter is likely to be less symmetric.
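As a concrete sketch, the two intervals can be compared side by side for a hypothetical fitted coefficient (the numbers $\hat\beta = 0.5$ and $SE = 0.2$ are made up):

```python
import math

beta_hat, se = 0.5, 0.2            # hypothetical coefficient and its SE
z = 1.96

# Method 1: Wald CI on the log-odds scale, endpoints then exponentiated
ci_exp = (math.exp(beta_hat - z * se), math.exp(beta_hat + z * se))

# Method 2: delta method directly on the odds-ratio scale,
# SE(exp(beta)) ~= exp(beta) * SE(beta), since d/dbeta exp(beta) = exp(beta)
or_hat = math.exp(beta_hat)
se_or = or_hat * se
ci_delta = (or_hat - z * se_or, or_hat + z * se_or)

# ci_delta is symmetric about or_hat; ci_exp is skewed to the right,
# reflecting the skewness of the odds ratio's sampling distribution.
```

Both intervals contain the point estimate, but only the exponentiated one respects the asymmetry of the odds-ratio scale.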
18,984 | Different ways to produce a confidence interval for odds ratio from logistic regression
A comparison of confidence interval methods on an example from ISL
The book "Introduction to Statistical Learning" by Tibshirani, James, Hastie provides an example on page 267 of confidence intervals for polynomial logistic regression degree 4 on the wage data. Quoting the book:
We model the binary event $wage>250$ using logistic regression with a degree-4 polynomial. The fitted posterior probability of wage exceeding $250,000 is shown in blue, along with an estimated 95 % confidence interval.
Below is a quick recap of two methods for constructing such intervals as well as comments on how to implement them from scratch
Wald / Endpoint transformation intervals
Compute the upper and lower bounds of the confidence interval for the linear combination $x^T\beta$ (using the Wald CI)
Apply a monotonic transformation to the endpoints $F(x^T\beta)$ to obtain the probabilities.
Since $Pr(x^T\beta) = F(x^T\beta)$ is a monotonic transformation of $x^T\beta$
$$ [Pr(x^T\beta)_L \leq Pr(x^T\beta) \leq Pr(x^T\beta)_U] = [F(x^T\beta)_L \leq F(x^T\beta) \leq F(x^T\beta)_U] $$
Concretely this means computing $x^T\beta \pm z^* SE(x^T\beta)$ and then applying the inverse-logit (logistic) transform to the result to get the lower and upper bounds:
$$\left[\frac{e^{x^T\beta - z^* SE(x^T\beta)}}{1 + e^{x^T\beta - z^* SE(x^T\beta)}}, \; \frac{e^{x^T\beta + z^* SE(x^T\beta)}}{1 + e^{x^T\beta + z^* SE(x^T\beta)}}\right] $$
Computing the standard error
Maximum Likelihood theory tells us that the approximate variance of $x^T\beta$ can be calculated using the covariance matrix $\Sigma$ of the regression coefficients using
$$ Var(x^T\beta) = x^T \Sigma x$$
Define the design matrix $X$ and the matrix $V$ as
$$\textbf{X = }\begin{bmatrix} 1 & x_{1,1} & \ldots & x_{1,p} \\ 1 & x_{2,1} & \ldots & x_{2,p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n,1} & \ldots & x_{n,p}
\end{bmatrix} \ \ \ \ \textbf{V = } \begin{bmatrix} \hat{\pi}_{1}(1 - \hat{\pi}_{1}) & 0 & \ldots & 0 \\ 0 & \hat{\pi}_{2}(1 - \hat{\pi}_{2}) & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \hat{\pi}_{n}(1 - \hat{\pi}_{n}) \end{bmatrix}$$
where $x_{i,j}$ is the value of the $j$th variable for the $i$th observations and $\hat{\pi}_{i}$ represents the predicted probability for observation $i$.
The covariance matrix can then be found as: $\Sigma = \textbf{(X}^{T}\textbf{V}\textbf{X)}^{-1}$ and the standard error as $SE(x^T\beta) = \sqrt{Var(x^T\beta)}$
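A tiny pure-Python sketch of this computation for a hypothetical 2-parameter fit (the design matrix and fitted probabilities are made up; a real analysis would use a linear-algebra library):

```python
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]   # made-up design: intercept + one x
pi = [0.2, 0.5, 0.8]                       # made-up fitted probabilities

# Accumulate X^T V X, with V = diag(pi_i * (1 - pi_i))
a = b = d = 0.0
for (x0, x1), p in zip(X, pi):
    w = p * (1 - p)
    a += w * x0 * x0
    b += w * x0 * x1   # off-diagonal (symmetric)
    d += w * x1 * x1

det = a * d - b * b                        # invert the symmetric 2x2 matrix
Sigma = [[d / det, -b / det], [-b / det, a / det]]

# Var(x^T beta) at a new point x, and its standard error
x = [1.0, 1.5]
var_xb = (x[0] * x[0] * Sigma[0][0] + 2 * x[0] * x[1] * Sigma[0][1]
          + x[1] * x[1] * Sigma[1][1])
se_xb = var_xb ** 0.5
```

The quadratic form $x^T \Sigma x$ at the end is exactly the variance expression from the Maximum Likelihood argument above.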
The 95% confidence intervals for the predicted probability can then be plotted.
Delta method confidence intervals
The approach is to compute the variance of a linear approximation of the function $F$ and use this to construct large sample confidence intervals.
$$ \text{Var}[F\mathbf{(x^T \hat \beta)}] \approx \nabla F^T \ \Sigma \ \nabla F $$
Where $\nabla$ is the gradient and $ \Sigma$ the estimated covariance matrix. Note that in one dimension:
$$\frac{\partial F(x\beta)}{\partial \beta} = \frac{\partial F(x\beta)}{\partial x\beta} \frac{\partial x\beta}{\partial \beta} = x f(x\beta)$$
Where $f$ is the derivative of $F$. This generalizes in the multivariate case
$$ \text{Var}[F\mathbf{(x^T \hat \beta)}] \approx f^T \ \mathbf{x^T} \ \Sigma \ \mathbf{x} \ f $$
In our case F is the logistic function (which we will denote $\pi(x^T\beta)$) whose derivative is
$$ \pi'(x^T\beta) = \pi (x^T\beta) (1 - \pi (x^T\beta) ) $$
We can now construct a confidence interval using the variance computed above.
$$ C.I. = \left[\, \pi(x\hat \beta) - z^* \sqrt{\text{Var}[ \pi(x \hat \beta) ]}, \;\; \pi(x\hat \beta) + z^* \sqrt{\text{Var}[ \pi(x \hat \beta) ]} \,\right]$$
In vector form for the multivariate case
$$ C.I. = \left[\, \pi(\mathbf{x}^T\hat \beta) \pm z^* \sqrt{ \pi(\mathbf{x}^T \hat \beta)\left(1 - \pi(\mathbf{x}^T \hat \beta) \right) \; \mathbf{x}^T \, \text{Var}[ \hat \beta] \, \mathbf{x} \; \pi(\mathbf{x}^T \hat \beta)\left(1 - \pi(\mathbf{x}^T \hat \beta)\right) } \,\right]$$
Note that $\mathbf{x}$ represents a single data point in $\mathbb{R}^{p+1}$, i.e. a single row of the design matrix $X$.
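Both intervals can be sketched for a single hypothetical point, given its linear predictor and standard error (the numbers $\eta = -1.2$ and $SE(\eta) = 0.4$ are made up):

```python
import math

def logistic(eta):
    return 1.0 / (1.0 + math.exp(-eta))

eta, se_eta, z = -1.2, 0.4, 1.96   # hypothetical x^T beta and its SE
p = logistic(eta)

# Endpoint transformation: Wald CI on the linear scale, then logistic-transform
wald_ci = (logistic(eta - z * se_eta), logistic(eta + z * se_eta))

# Delta method: p +/- z * p(1-p) * SE(eta), since dF/d(eta) = p(1-p)
se_p = p * (1 - p) * se_eta
delta_ci = (p - z * se_p, p + z * se_p)

# wald_ci is guaranteed to stay inside (0, 1); delta_ci is symmetric about
# p but need not respect those bounds for extreme probabilities.
```

This makes the practical difference visible: the delta interval is symmetric around $\hat p$, while the endpoint-transformed interval is skewed, matching the discussion above.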
An open-ended conclusion
A look at the normal QQ plots for both the probabilities and the negative log-odds shows that neither is normally distributed. Could this explain the difference?
Sources:
http://www.indiana.edu/~jslsoc/stata/ci_computations/xulong-prvalue-23aug2005.pdf
https://stackoverflow.com/questions/47414842/confidence-interval-of-probability-prediction-from-logistic-regression-statsmode
http://www.indiana.edu/~jslsoc/stata/ci_computations/spost_deltaci.pdf
The book "Introduction to Statistical Learning" by Tibshirani, James, Hastie provides an example on page 267 of confidence intervals | Different ways to produce a confidence interval for odds ratio from logistic regression
A comparison of confidence intervals methods on an example from ISL
The book "Introduction to Statistical Learning" by Tibshirani, James, Hastie provides an example on page 267 of confidence intervals for polynomial logistic regression degree 4 on the wage data. Quoting the book:
We model the binary event $wage>250$ using logistic regression with a degree-4 polynomial. The fitted posterior probability of wage exceeding $250,000 is shown in blue, along with an estimated 95 % confidence interval.
Below is a quick recap of two methods for constructing such intervals as well as comments on how to implement them from scratch
Wald / Endpoint transformation intervals
Compute the upper and lower bounds of the confidence interval for the linear combination $x^T\beta$ (using the Wald CI)
Apply a monotonic transformation to the endpoints $F(x^T\beta)$ to obtain the probabilities.
Since $Pr(x^T\beta) = F(x^T\beta)$ is a monotonic transformation of $x^T\beta$
$$ [Pr(x^T\beta)_L \leq Pr(x^T\beta) \leq Pr(x^T\beta)_U] = [F(x^T\beta)_L \leq F(x^T\beta) \leq F(x^T\beta)_U] $$
Concretely this means computing $\beta^Tx \pm z^* SE(\beta^Tx)$ and then applying the logit transform to the result to get the lower and upper bounds:
$$[\frac{e^{x^T\beta - z^* SE(x^T\beta)}}{1 + e^{x^T\beta - z^* SE(x^T\beta)}}, \frac{e^{x^T\beta + z^* SE(x^T\beta)}}{1 + e^{x^T\beta + z^* SE(x^T\beta)}},] $$
Computing the standard error
Maximum Likelihood theory tells us that the approximate variance of $x^T\beta$ can be calculated using the covariance matrix $\Sigma$ of the regression coefficients using
$$ Var(x^T\beta) = x^T \Sigma x$$
Define the design matrix $X$ and the matrix $V$ as
$$\textbf{X = }\begin{bmatrix} 1 & x_{1,1} & \ldots & x_{1,p} \\ 1 & x_{2,1} & \ldots & x_{2,p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n,1} & \ldots & x_{n,p}
\end{bmatrix} \ \ \ \ \textbf{V = } \begin{bmatrix} \hat{\pi}_{1}(1 - \hat{\pi}_{1}) & 0 & \ldots & 0 \\ 0 & \hat{\pi}_{2}(1 - \hat{\pi}_{2}) & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \hat{\pi}_{n}(1 - \hat{\pi}_{n}) \end{bmatrix}$$
where $x_{i,j}$ is the value of the $j$th variable for the $i$th observations and $\hat{\pi}_{i}$ represents the predicted probability for observation $i$.
The covariance matrix can then be found as: $\Sigma = \textbf{(X}^{T}\textbf{V}\textbf{X)}^{-1}$ and the standard error as $SE(x^T\beta) = \sqrt{Var(x^T\beta)}$
The 95% confidence intervals for the predicted probability can then be plotted as
Delta method confidence intervals
The approach is to compute the variance of a linear approximation of the function $F$ and use this to construct large sample confidence intervals.
$$ \text{Var}[F\mathbf{(x^T \hat \beta)}] \approx \nabla F^T \ \Sigma \ \nabla F $$
Where $\nabla$ is the gradient and $ \Sigma$ the estimated covariance matrix. Note that in one dimension:
$$\frac{\partial F(x\beta)}{\partial \beta} = \frac{\partial F(x\beta)}{\partial x\beta} \frac{\partial x\beta}{\partial \beta} = x f(x\beta)$$
Where $f$ is the derivative of $F$. Since $F$ here is a scalar function of the scalar $\mathbf{x}^T \hat \beta$, this generalizes to the multivariate case as
$$ \text{Var}[F\mathbf{(x^T \hat \beta)}] \approx f(\mathbf{x}^T \hat \beta)^2 \ \mathbf{x}^T \, \Sigma \, \mathbf{x} $$
In our case F is the logistic function (which we will denote $\pi(x^T\beta)$) whose derivative is
$$ \pi'(x^T\beta) = \pi (x^T\beta) (1 - \pi (x^T\beta) ) $$
We can now construct a confidence interval using the variance computed above.
$$ C.I. = \left[\, \pi(x^T \hat \beta) - z^* \sqrt{\text{Var}[ \pi(x^T \hat \beta) ]}\, ,\ \ \pi(x^T \hat \beta) + z^* \sqrt{\text{Var}[ \pi(x^T \hat \beta) ]}\, \right]$$
In vector form for the multivariate case
$$ C.I. = \pi(\mathbf{x}^T \hat \beta) \pm z^* \, \pi(\mathbf{x}^T \hat \beta)\left(1 - \pi(\mathbf{x}^T \hat \beta)\right) \sqrt{ \mathbf{x}^T \, \text{Var}[ \hat \beta] \, \mathbf{x} } $$
Note that $\mathbf{x}$ represents a single data point in $\mathbb{R}^{p+1}$, i.e. a single row of the design matrix $X$.
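The delta-method interval can be sketched with the same made-up numbers used earlier (again, purely illustrative); note that the derivative term $\pi(1-\pi)$ simply rescales $\sqrt{\mathbf{x}^T \Sigma \mathbf{x}}$:

```python
import numpy as np

def delta_ci(x, beta, cov, z=1.96):
    """Delta-method 95% CI for a predicted probability:
    pi +/- z * pi*(1-pi) * sqrt(x^T Sigma x), where pi*(1-pi) is the
    derivative of the logistic function at the linear predictor."""
    pi = 1.0 / (1.0 + np.exp(-(x @ beta)))
    se = pi * (1.0 - pi) * np.sqrt(x @ cov @ x)
    return pi - z * se, pi + z * se

# Hypothetical coefficients, covariance, and data point (illustration only)
beta = np.array([-1.0, 0.5])
cov = np.array([[0.04, -0.01], [-0.01, 0.02]])
x = np.array([1.0, 2.0])
lo, hi = delta_ci(x, beta, cov)
```

Unlike the transformed interval, this one is symmetric around $\pi(\mathbf{x}^T \hat\beta)$ on the probability scale and can in principle spill outside $(0, 1)$ for extreme probabilities.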
An open-ended conclusion
A look at the Normal QQ plots for both the probabilities and the negative log odds shows that neither is normally distributed. Could this explain the difference?
Sources:
http://www.indiana.edu/~jslsoc/stata/ci_computations/xulong-prvalue-23aug2005.pdf
https://stackoverflow.com/questions/47414842/confidence-interval-of-probability-prediction-from-logistic-regression-statsmode
http://www.indiana.edu/~jslsoc/stata/ci_computations/spost_deltaci.pdf
18,985 | Different ways to produce a confidence interval for odds ratio from logistic regression | For most purposes the simplest way is probably best, as discussed in the context of a log transform on this page. Think about your dependent variable as being analyzed in the logit scale, with statistical tests performed and confidence intervals (CI) defined on that logit scale. The back transformation to odds ratio is simply to put those results into a scale that a reader might more readily grasp. This is also done, for example, in Cox survival analysis, where the regression coefficients (and the 95% CI) are exponentiated to obtain hazard ratios and their CI.
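A minimal sketch of the back-transformation described above, with a made-up coefficient and standard error (not from any real fit): the interval is symmetric on the log-odds scale and becomes asymmetric after exponentiation.

```python
import math

# Hypothetical logistic-regression output for one coefficient
beta_hat = 0.47   # estimated log odds ratio
se = 0.12         # its standard error
z = 1.96          # 95% normal quantile

# Symmetric interval on the log-odds scale, then exponentiate the endpoints
or_point = math.exp(beta_hat)
or_lo = math.exp(beta_hat - z * se)
or_hi = math.exp(beta_hat + z * se)
```

The exponentiated interval is no longer symmetric around the point estimate, which is expected and not a sign of error.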
18,986 | What does Bayesian Hypothesis Testing mean in the framework of inference and decision theory? | A statistical model is given by a family of probability distributions. When the model is parametric, this family is indexed by an unknown parameter $\theta$:
$$\mathcal{F}=\left\{ f(\cdot|\theta);\ \theta\in\Theta \right\}$$
If one wants to test an hypothesis on $\theta$ like $H_0:\,\theta\in\Theta_0$, one can consider that two models are in opposition: $\mathcal{F}$ versus
$$\mathcal{F}_0=\left\{ f(\cdot|\theta);\ \theta\in\Theta_0 \right\}$$
From my Bayesian perspective, I am drawing inference on the index of the model behind the data, $\mathcal{M}$. Hence I put a prior on this index, $\rho_0$ and $\rho_a$, as well as on the parameters of both models, $\pi_0(\theta)$ over $\Theta_0$ and $\pi_a(\theta)$ over $\Theta$. And I then deduce the posterior distribution of this index:
$$\pi(m=0|x)=\dfrac{\rho_0\int_{\Theta_0} f(x|\theta)\pi_0(\theta)\text{d}\theta}{\rho_0\int_{\Theta_0} f(x|\theta)\pi_0(\theta)\text{d}\theta
+(1-\rho_0)\int_{\Theta} f(x|\theta)\pi_a(\theta)\text{d}\theta}$$
The document you linked to goes into much more detail on this perspective and should be your entry of choice into statistical testing of hypotheses, unless you can afford to go through a whole Bayesian book, or even a machine learning book like Kevin Murphy's.
For instance, in the setting where $X\sim\mathcal{N}(\theta,1)$ is observed, if the hypothesis to be tested is $H_0:\theta=0$, the posterior probability that $\theta=0$ is the posterior probability that the model producing the data is $\mathcal{N}(0,1)$. According to the above formula, if the prior distribution on $\theta$ is $\theta\sim\mathcal{N}(0,10)$, and if we put equal weights on both hypotheses, i.e., $\rho_0=1/2$, this posterior probability is
\begin{align*}\pi(m=0|x)&=\dfrac{\frac{1}{\sqrt{2\pi}}\exp\{-x^2/2\}}{\frac{1}{\sqrt{2\pi}}\exp\{-x^2/2\}
+\int_{\mathbb{R}} \frac{1}{\sqrt{2\pi}}\exp\{-(x-\theta)^2/2\}\frac{1}{\sqrt{2\pi\times10}}\exp\{-\theta^2/20\}\text{d}\theta}\\
&=\dfrac{\exp\{-x^2/2\}}{\exp\{-x^2/2\}
+\frac{1}{\sqrt{11}}\exp\{-x^2/22\}}
\end{align*}
18,987 | What does Bayesian Hypothesis Testing mean in the framework of inference and decision theory? | Excellent question. I think your confusion may result from some of the basic differences between the "frequentist" and "Bayesian" perspectives. I have a lot of experience with the former and am new to the latter, so attempting a few simple observations might help me too. I edited your question to make a few distinctions clear - at least, as I understand them. I hope you don't mind! If I got something wrong, you could re-edit your question or add a comment on this response.
1) At the risk of sounding somewhat too elementary: A model is any statement that attempts an explanation of reality like "If I had pancakes for breakfast, it must be Tuesday." As such, a model is an hypothesis. A famous quote by George Box: "All models are wrong, some models are useful." For a model to be useful there must be some way to test it. Enter the concept of competing hypotheses and the answer to one of your questions. I would suggest that "...in the context of statistical inference," an hypothesis is any model that may be useful and can be tested mathematically. So hypothesis testing is a means of making a decision about whether a model is useful or not. In summary, an hypothesis is a model under consideration. It could be different parameter values of the same function or different functions. I think your lecture notes are showing that different outcomes (measurements) in the sample space would make different hypotheses (Is the intercept parameter zero? Do I need a cube in that polynomial? Maybe it's really exponential?), more or less likely.
2) Your Kahn video is an example of what Bayesians call the "Frequentist" approach to hypothesis testing, so it may have confused you when trying to apply it to your lecture notes, which are Bayesian. I have been trying to come up with a simple distinction between application of the two approaches (which may be dangerous). I think I understand the philosophical distinction reasonably well. From what I have seen, the "Frequentist" assumes a random component to the data and tests how likely the observed data are given non-random parameters. The "Bayesian" assumes the data are fixed and determines the most likely value of random parameters. This difference leads to different testing methods.
In "Frequentist" hypothesis testing, a model that may be useful is one which explains some effect so it is compared with the "null hypothesis" - the model of no effect. The attempt is made to set up a useful model that is mutually exclusive to the model of no effect. The test is then on the probability of observing the data under the assumption of no effect. If that probability is found to be low, the null hypothesis is rejected and the alternative is all that's left. (Note that a purist would never "accept" the null hypothesis, only "fail to reject" one. It may sound like angels dancing on the head of a pin but the distinction is a fundamental philosophical one) Intro statistics usually starts with what may be the simplest example: "Two groups are different." The null hypothesis that they are not different is tested by calculating how likely it would be to observe differences as great or greater as measured by a random experiment given that they are not different. This is usually a t-test where the null hypothesis is that the difference of the means is zero. So the parameter is the mean at a fixed value of zero.
The Bayesian says, "Hold on a minute, we made those measurements and they are different, so how likely is that?" They calculate the probability for every value of the (now) random parameter and pick the one that is highest as the most likely. So in a sense, every possible value of the parameter is a separate model. But now they need a way to make a decision about whether the model with the highest probability is different enough to matter. That's why your lecture notes introduced the cost function. To make a good decision, some assumption of the consequences of making the wrong decision is needed.
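The "calculate the probability for every value of the parameter and pick the highest" idea can be sketched with a simple grid search (illustrative data of my own, with a flat prior, so the maximum-posterior value coincides with the maximum-likelihood value - the sample mean, for a Gaussian):

```python
import math

# Hypothetical fixed data and a grid of candidate parameter values
data = [1.2, 0.4, 0.9]
grid = [m / 100.0 for m in range(-300, 301)]   # candidate means, step 0.01

def log_lik(mu, xs, sigma=1.0):
    """Gaussian log-likelihood of the fixed data under mean mu."""
    return sum(-0.5 * ((x - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)) for x in xs)

# Each grid value is, in effect, a separate hypothesis; pick the one
# that makes the observed (fixed) data most probable.
best_mu = max(grid, key=lambda m: log_lik(m, data))
```

With a non-flat prior one would maximize log-likelihood plus log-prior instead, which is where the cost/loss considerations in the notes come in.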
3) "What does it mean to assign a hypothesis to each data sample?" I don't think they are. Be careful with what is meant by "sample point." I believe they are referring to a particular sample vector and want to know how likely each hypothesis is for all sample vectors in the sample space. Equations (14) and (15) show how to compare two hypotheses for a particular sample vector. So they are simplifying a general argument of comparing multiple hypotheses by showing how to compare only two.
18,988 | What does Bayesian Hypothesis Testing mean in the framework of inference and decision theory? | Say you have data from a set of boxes. The data consists of Length (L), Width (W), Height (H), and Volume (V).
If we don't know much about boxes/geometry we might try the model:
V = a*L + b*W + c*H + e
This model has three parameters (a, b, c) that could be varied, plus an error/cost term (e) describing how well the hypothesis fits the data. Each combination of parameter values would be considered a different hypothesis. The "default" parameter value chosen is usually zero, which in the above example would correspond to "no relationship" between V and L, W, H.
What people do is test this "default" hypothesis by checking if e is beyond some cutoff value, usually by calculating a p-value assuming a normal distribution of error around the model fit. If that hypothesis is rejected, then they find the combination of a, b, c parameters that maximizes the likelihood and present this as the most likely hypothesis. If they are Bayesian, they multiply the likelihood by the prior for each set of parameter values and choose the solution that maximizes the posterior probability.
Obviously this strategy is non-optimal in that the model assumes additivity, and will miss that the correct hypothesis is:
V = L*W*H + e
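A quick simulation (mine, not the answer author's) makes the point: the best additive fit still leaves large residual error when the truth is multiplicative, because no setting of a, b, c is the correct hypothesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated boxes: the true relationship is multiplicative, V = L * W * H
L, W, H = rng.uniform(1, 10, size=(3, 200))
V = L * W * H

# Fit the additive model V = a*L + b*W + c*H by least squares
# (no intercept, for simplicity)
X = np.column_stack([L, W, H])
coef, _, _, _ = np.linalg.lstsq(X, V, rcond=None)

rss_additive = np.sum((V - X @ coef) ** 2)   # best additive fit, still poor
rss_true = 0.0                               # V = L*W*H fits exactly here
```

However the parameters are tuned, the additive class cannot reach zero residual error; searching within the wrong hypothesis class gives the best wrong answer.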
Edit:
@Pinocchio
Perhaps someone disagreed with the claim that hypothesis testing is non-optimal when there is no rational reason to choose one/few functions (or as you put it: "hypothesis classes") out of the infinitely many possible. Of course this is trivially true, and "optimal" can be used in the limited sense of "best fit given the cost function and choices supplied". That comment made it into my answer because I disliked how the issue of model specification was glossed over in your class notes. It is the main problem facing most scientific workers, for which afaik there is no algorithm.
Further, I could not understand p-values, hypothesis testing, etc until I understood the history, so perhaps it will help you as well. There are multiple sources of confusion surrounding frequentist hypothesis testing (I am not so familiar with the history of the Bayesian variant).
There is what was originally called "hypothesis testing" in the Neyman-Pearson sense, "significance testing" as developed by Ronald Fisher, and also an ill defined, never properly justified "hybrid" of these two strategies widely used throughout the sciences (which may be casually referred to using either above term, or "null hypothesis significance testing"). While I wouldn't recommend taking a wikipedia page as authoritative, many sources discussing these issues can be found here. Some main points:
The use of a "default" hypothesis is not part of the original hypothesis testing procedure; rather, the user is supposed to use prior knowledge to determine the models under consideration. I have never seen explicit recommendation by proponents of this model regarding what to do if we have no particular reason to choose a given set of hypotheses to compare. It is often said that this approach is suitable for quality control, when there are known tolerances to compare some measurement to.
There is no alternative hypothesis under Fisher's "significance testing" paradigm, only a null hypothesis, which can be rejected if deemed unlikely given the data. From my reading, Fisher himself was equivocal on the use of default null hypotheses. I could never find him commenting explicitly on the matter, however he surely did not recommend that this should be the only null hypothesis.
The use of the default null hypothesis is sometimes construed as an "abuse" of hypothesis testing, but it is central to the popular hybrid method mentioned. The argument goes that this practice is often "a useless preliminary":
"The researcher formulates a theoretical prediction, generally the direction of an effect... When the data in fact show the predicted directional result, this seems to confirm the hypothesis. The researcher tests a 'straw person' null hypothesis that the effect is actually zero. If the latter cannot be rejected at the .05 level (or some variant), then the apparent confirmation of the theory cannot be claimed... A common error in this type of test is to confuse the significance level actually attained (for rejecting the straw-person null) with the confirmation level attained for the original theory... the strength of confirmation actually depends on [the sharpness of a researcher's numerical predictions], not on the significance level attained for a straw-person null."
The null hypothesis testing controversy in psychology. David H. Krantz. Journal of the American Statistical Association; Dec 1999; 94, 448; 1372-1381.
The Khan academy video is an example of this hybrid method, and is guilty of committing the error noted in that quote. From the information available in that video we can only conclude that the injected rats differ from the non-injected, while the video claims we can conclude "the drug definitely has some effect". A bit of reflection would lead us to consider that perhaps the tested rats were older than the non-injected, etc. We need to rule out plausible alternative explanations before claiming evidence for our theory. The less specific the prediction of the theory, the more difficult it is to accomplish this.
Edit 2:
Perhaps taking the example from your notes of a medical diagnosis will help. Say a patient can be either "normal" or in "hypertensive crisis".
We have prior information that only 1% of people are in hypertensive crisis. People in hypertensive crisis have systolic blood pressure that follows a normal distribution with mean=180 and sd=10. Meanwhile, normal people have blood pressure from a normal distribution with mean=120, sd=10. The cost of judging a person normal when they are normal is zero, the cost of missing a diagnosis is 1, and the cost of side effects from the treatment is 0.2 regardless of whether they are in crisis or not. Then the following R code calculates the threshold (eta) and likelihood ratio. If the likelihood ratio is greater than the threshold we decide to treat, if less than we do not:
#Prior probabilities
P0=.99 #Prior probability patient is normal
P1=1-P0 #Prior probability patient is in crisis
#Hypotheses
H0<-dnorm(x=50:250, mean=120, sd=10) #H0: Patient is normal
H1<-dnorm(x=50:250, mean=180, sd=10) #H1: Patient in hypertensive crisis
#Costs
C00=0 #Decide normal when normal
C01=1 #Decide normal when in crisis
C10=.2 #Decide crisis when normal
C11=.2 #Decide crisis when in crisis
#Threshold
eta=P0*(C10-C00)/(P1*(C01-C11)) #Bayes threshold: decide crisis when L1/L0 > eta
#Blood Pressure Measurements
y<-rnorm(3, 150, 20)
#Calculate Likelihood of Each Datapoint Given Each Hypothesis
L0vec=dnorm(x=y, mean=120, sd=10) #Vector of Likelihoods under H0
L1vec=dnorm(x=y, mean=180, sd=10) #Vector of Likelihoods under H1
#P(y|H) is the product of the likelihoods under each hypothesis
L0<-prod(L0vec)
L1<-prod(L1vec)
#L(y) is the ratio of the two likelihoods
LikRatio<-L1/L0
#Plot
plot(50:250, H0, type="l", col="Green", lwd=4,
xlab=" Systolic Blood Pressure", ylab="Probability Density Given Model",
main=paste0("L=",signif(LikRatio,3)," eta=", signif(eta,3)))
lines(50:250, H1, col="Red", lwd=4)
abline(v=y)
#Decision
if(LikRatio>eta){
print("L > eta ---> Decision: Treat Patient")
}else{
print("L < eta ---> Do Not Treat Patient")
}
In the above scenario the threshold eta=24.75. If we take three blood pressure measurements and get 139.9237, 125.2278, 190.3765, then the likelihood ratio is 27.6 in favor of H1: Patient in hypertensive crisis. Since 27.6 is greater than the threshold we would choose to treat. The graph shows the normal hypothesis in green and hypertensive in red. Vertical black lines indicate the values of the observations.
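The quoted likelihood ratio can be checked directly from the three measurements (a Python re-computation of the two Gaussian likelihoods; the helper function is mine):

```python
import math

def norm_pdf(x, mu, sd):
    """Gaussian density, used as the likelihood under each hypothesis."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# The three blood-pressure measurements quoted above
y = [139.9237, 125.2278, 190.3765]

L0 = math.prod(norm_pdf(v, 120, 10) for v in y)  # likelihood: patient normal
L1 = math.prod(norm_pdf(v, 180, 10) for v in y)  # likelihood: crisis
lik_ratio = L1 / L0                              # approximately 27.6
```

Even though two of the three readings are closer to the "normal" mean, the single extreme reading dominates the product of likelihoods.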
If we don't know much about boxes/geometry we might try the model:
V = a*L + b*W + c*H + | What does Bayesian Hypothesis Testing mean in the framework of inference and decision theory?
Say you have data from a set of boxes. The data consists of Length (L), Width (W), Height (H), and Volume (V).
If we don't know much about boxes/geometry we might try the model:
V = a*L + b*W + c*H + e
This model has three parameters (a, b, c) that could be varied, plus an error/cost term (e) describing how well the hypothesis fits the data. Each combination of parameter values would be considered a different hypothesis. The "default" parameter value chosen is usually zero, which in the above example would correspond to "no relationship" between V and L, W, H.
What people do is test this "default" hypothesis by checking if e is beyond some cutoff value, usually by calculating a p-value assuming a normal distribution of error around the model fit. If that hypothesis is rejected, then they find the combination of a, b, c parameters that maximizes the likelihood and present this is the most likely hypothesis. If they are bayesian they multiply the likelihood by the prior for each set of parameter values and choose the solution that maximizes the posterior probability.
Obviously this strategy is non-optimal in that the model assumes additivity, and will miss that the correct hypothesis is:
V = L*W*H + e
Edit:
@Pinocchio
Perhaps someone disagreed with the claim that hypothesis testing is non-optimal when there is no rational reason to choose one/few functions (or as you put it: "hypothesis classes") out of the infinitely many possible . Of course this is trivially true, and "optimal" can be used in the limited sense of "best fit given the cost function and choices supplied". That comment made it into my answer because I disliked how the issue of model specification was glossed over in your class notes. It is the main problem facing most scientific workers, for which afaik there is no algorithm.
Further, I could not understand p-values, hypothesis testing, etc until I understood the history, so perhaps it will help you as well. There are multiple sources of confusion surrounding frequentist hypothesis testing (I am not so familiar with the history of the bayesian variant).
There is what was originally called "hypothesis testing" in the Neyman-Pearson sense, "significance testing" as developed by Ronald Fisher, and also an ill defined, never properly justified "hybrid" of these two strategies widely used throughout the sciences (which may be casually referred to using either above term, or "null hypothesis significance testing"). While I wouldn't recommend taking a wikipedia page as authoritative, many sources discussing these issues can be found here. Some main points:
The use of a "default" hypothesis is not part of the original
hypothesis testing procedure, rather the user is supposed to use
prior knowledge to determine the models under consideration. I have never seen explicit recommendation by proponents of this model regarding what to do if we have no particular reason to choose a given set of hypotheses to compare. It is often said that this approach is suitable for quality control, when there are known tolerances to compare some measurement to.
There is no alternative hypothesis under Fisher's "significance
testing" paradigm, only a null hypothesis, which can be rejected
if deemed unlikely given the data. From my reading, Fisher himself
was equivocal on the use of default null hypotheses. I could never
find him commenting explicitly on the matter, however he surely did
not recommend that this should be the only null hypothesis.
The use of the default null hypothesis is sometimes
construed as an "abuse" of hypothesis testing, but it is central to
the popular hybrid method mentioned. The argument goes that this
practice is often "a useless preliminary":
"The researcher formulates a theoretical prediction, generally the
direction of an effect... When the data in fact show the predicted
directional result, this seems to confirm the hypothesis. The
researcher tests a 'straw person' null hypothesis that the effect is
actually zero. If the latter cannot be rejected at the .05 level (or
some variant), then the apparent confirmation of the theory cannot be
claimed...A common error in this type of test is to confuse the
significance level actually attained (for rejecting the straw-person
null) with the confirmation level attained for the original theory...
the strength of confirmation actually depends on [the sharpness of a
researcher's numerical predictions], not on the significance level
attained for a straw-person null."
The null hypothesis testing controversy in psychology. David H
Krantz. Journal of the American Statistical Association; Dec 1999;
94, 448; 1372-1381
The Khan academy video is an example of this hybrid method, and is guilty of committing the error noted in that quote. From the information available in that video we can only conclude that the injected rats differ from the non-injected, while the video claims we can conclude "the drug definitely has some effect". A bit of reflection would lead us to consider that perhaps the tested rats were older than the non-injected, etc. We need to rule out plausible alternative explanations before claiming evidence for our theory. The less specific the prediction of the theory, the more difficult it is to accomplish this.
Edit 2:
Perhaps taking the example from your notes of a medical diagnosis will help. Say a patient can be either "normal" or in "hypertensive crisis".
We have prior information that only 1% of people are in hypertensive crisis. People in hypertensive crisis have systolic blood pressure that follows a normal distribution with mean=180 and sd=10. Meanwhile, normal people have blood pressure from a normal distribution with mean=120, sd=10. The cost of judging a person normal when they are is zero, the cost of missing a diagnosis is 1, and the cost due to side effects due to the treatment is 0.2 regardless of whether they are in crisis or not. Then the following R code calculates the threshold (eta) and likelihood ratio. If the likelihood ratio is greater than the threshold we decide to treat, if less than we do not:
#Prior probabilities
P0=.99 #Prior probability patient is normal
P1=1-P0 #Prior probability patient is in crisis
#Hypotheses
H0<-dnorm(x=50:250, mean=120, sd=10) #H0: Patient is normal
H1<-dnorm(x=50:250, mean=180, sd=10) #H1: Patient in hypertensive crisis
#Costs
C00=0 #Decide normal when normal
C01=1 #Decide normal when in crisis
C10=.2 #Decide crisis when normal
C11=.2 #Decide crisis when in crisis
#Threshold
eta=P0*(C10-C00)/(P1*(C01-C11))
#Blood Pressure Measurements
y<-rnorm(3, 150, 20)
#Calculate Likelihood of Each Datapoint Given Each Hypothesis
L0vec=dnorm(x=y, mean=120, sd=10) #Vector of Likelihoods under H0
L1vec=dnorm(x=y, mean=180, sd=10) #Vector of Likelihoods under H1
#P(y|H) is the product of the likelihoods under each hypothesis
L0<-prod(L0vec)
L1<-prod(L1vec)
#L(y) is the ratio of the two likelihoods
LikRatio<-L1/L0
#Plot
plot(50:250, H0, type="l", col="Green", lwd=4,
xlab=" Systolic Blood Pressure", ylab="Probability Density Given Model",
main=paste0("L=",signif(LikRatio,3)," eta=", signif(eta,3)))
lines(50:250, H1, col="Red", lwd=4)
abline(v=y)
#Decision
if(LikRatio>eta){
print("L > eta ---> Decision: Treat Patient")
}else{
print("L < eta ---> Do Not Treat Patient")
}
In the above scenario the threshold eta=24.75. If we take three blood pressure measurements and get 139.9237, 125.2278, 190.3765, then the likelihood ratio is 27.6 in favor of H1: Patient in hypertensive crisis. Since 27.6 is greater than the threshold we would choose to treat. The graph shows the normal hypothesis in green and hypertensive in red. Vertical black lines indicate the values of the observations. | What does Bayesian Hypothesis Testing mean in the framework of inference and decision theory?
Say you have data from a set of boxes. The data consists of Length (L), Width (W), Height (H), and Volume (V).
If we don't know much about boxes/geometry we might try the model:
V = a*L + b*W + c*H + |
18,989 | How to use Delta Method while the first-order derivative is zero? | When the first derivative is 0 you can use the second-order delta method.
Loosely:
$g(X_n)=g(\theta+X_n-\theta)=g(\theta)+(X_n-\theta)g'(\theta)+(X_n-\theta)^2/2\cdot g''(\theta)+o_p(1)$
So $g(X_n)-g(\theta)= \frac{g''(\theta)}{2} (X_n-\theta)^2 +o_p(1)$
But $n(X_n-\theta)^2/\sigma^2\stackrel{d}{\to} \chi^2_1$
... and so forth. | How to use Delta Method while the first-order derivative is zero? | When the first derivative is 0 you can use the second-order delta method.
Loosely:
$g(X_n)=g(\theta+X_n-\theta)=g(\theta)+(X_n-\theta)g'(\theta)+(X_n-\theta)^2/2\cdot g''(\theta)+o_p(1)$
So $g(X_n)-g( | How to use Delta Method while the first-order derivative is zero?
When the first derivative is 0 you can use the second-order delta method.
Loosely:
$g(X_n)=g(\theta+X_n-\theta)=g(\theta)+(X_n-\theta)g'(\theta)+(X_n-\theta)^2/2\cdot g''(\theta)+o_p(1)$
So $g(X_n)-g(\theta)= \frac{g''(\theta)}{2} (X_n-\theta)^2 +o_p(1)$
But $n(X_n-\theta)^2/\sigma^2\stackrel{d}{\to} \chi^2_1$
... and so forth. | How to use Delta Method while the first-order derivative is zero?
When the first derivative is 0 you can use the second-order delta method.
Loosely:
$g(X_n)=g(\theta+X_n-\theta)=g(\theta)+(X_n-\theta)g'(\theta)+(X_n-\theta)^2/2\cdot g''(\theta)+o_p(1)$
So $g(X_n)-g( |
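The limiting $\chi^2_1$ claim above can be checked by simulation. Below is a Python sketch (the thread itself contains no code; all values are illustrative) using the sample mean as $X_n$ and $g(x)=x^2$ at $\theta=0$, where $g'(\theta)=0$ and $g''(\theta)=2$, so $n\,\big(g(\bar X_n)-g(\theta)\big)/\sigma^2$ should behave like a $\chi^2_1$ draw:

```python
import random

random.seed(1)

theta, sigma, n, reps = 0.0, 2.0, 200, 5000

# g(x) = x^2 with g'(theta) = 0 at theta = 0, so the first-order delta
# method degenerates; the second-order version says
# n*(g(Xbar) - g(theta))/sigma^2 is approximately chi-squared, 1 df.
stats = []
for _ in range(reps):
    xbar = sum(random.gauss(theta, sigma) for _ in range(n)) / n
    stats.append(n * (xbar ** 2 - theta ** 2) / sigma ** 2)

mean_t = sum(stats) / reps                      # chi2_1 has mean 1
frac_below = sum(t < 3.841 for t in stats) / reps  # 3.841 = chi2_1 95th pct
print(mean_t, frac_below)
```

Both the sample mean of the statistic (close to 1) and the fraction below the 95th percentile (close to 0.95) match the $\chi^2_1$ limit.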
18,990 | Determinant of Fisher information | In many examples, the inverse of the Fisher information matrix is the covariance matrix of the parameter estimates $\hat{\beta}$, exactly or approximately. Often it gives that covariance matrix asymptotically. The determinant of a covariance matrix is often called a generalized variance.
So the determinant of the Fisher information matrix is the inverse of that generalized variance. This can be used in experimental design to find optimal experiments (for parameter estimation). In that context, this is called D-optimality, which has a huge literature, so search for "D-optimal experimental design". In practice, it is often easier to maximize the determinant of the inverse covariance matrix, but that is obviously the same thing as minimizing the determinant of its inverse.
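To make the D-optimality idea concrete, here is a small Python sketch (the example and numbers are mine, not from the answer): for simple linear regression the Fisher information is proportional to $X^\top X$, and the four-point design with all mass at the ends of the interval has the larger determinant, i.e. the smaller generalized variance:

```python
def det_xtx(xs):
    # Design matrix X has rows (1, x); for simple linear regression the
    # Fisher information is proportional to X'X, the 2x2 matrix
    #   [[n, sum(x)], [sum(x), sum(x^2)]]
    n = len(xs)
    s1 = sum(xs)
    s2 = sum(x * x for x in xs)
    return n * s2 - s1 * s1

endpoints = [-1.0, -1.0, 1.0, 1.0]        # all points at the interval ends
spread = [-1.0, -1.0 / 3, 1.0 / 3, 1.0]   # equally spaced alternative

print(det_xtx(endpoints), det_xtx(spread))  # larger det = D-better design
```

Here the endpoint design wins, which is the classical D-optimal result for a straight-line model on an interval.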
There are also many posts on this site, but few have good answers. Here is one: Experimental (factorial) design not exploiting the variance | Determinant of Fisher information | In many examples, the inverse of the fisher information matrix is the covariance matrix of the parameter estimates $\hat{\beta}$, exactly or approximately. Often it gives that covariance matrix asympt | Determinant of Fisher information
In many examples, the inverse of the Fisher information matrix is the covariance matrix of the parameter estimates $\hat{\beta}$, exactly or approximately. Often it gives that covariance matrix asymptotically. The determinant of a covariance matrix is often called a generalized variance.
So the determinant of the Fisher information matrix is the inverse of that generalized variance. This can be used in experimental design to find optimal experiments (for parameter estimation). In that context, this is called D-optimality, which has a huge literature, so search for "D-optimal experimental design". In practice, it is often easier to maximize the determinant of the inverse covariance matrix, but that is obviously the same thing as minimizing the determinant of its inverse.
There are also many posts on this site, but few have good answers. Here is one: Experimental (factorial) design not exploiting the variance | Determinant of Fisher information
In many examples, the inverse of the fisher information matrix is the covariance matrix of the parameter estimates $\hat{\beta}$, exactly or approximately. Often it gives that covariance matrix asympt |
18,991 | MCMC with Metropolis-Hastings algorithm: Choosing proposal | 1) You could think about this method as a random walk approach. When the proposal distribution $x \mid x^t \sim N( x^t, \sigma^2)$, it is commonly referred to as the Metropolis Algorithm. If $\sigma^2$ is too small, you will have a high acceptance rate and very slowly explore the target distribution. In fact, if $\sigma^2$ is too small and the distribution is multi-modal, the sampler may get stuck in a particular mode and won't be able to fully explore the target distribution. On the other hand, if $\sigma^2$ is too large, the acceptance rate will be too low. Since you have three dimensions, your proposal distribution would have a covariance matrix $\Sigma$ which will likely require different variances and covariances for each dimension. Choosing an appropriate $\Sigma$ may be difficult.
2) If your proposal distribution is always $N(\mu, \sigma^2)$, then this is the independent Metropolis-Hastings algorithm since your proposal distribution does not depend on your current sample. This method works best if your proposal distribution is a good approximation of the target distribution you wish to sample from. You are correct that choosing a good normal approximation can be difficult.
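To make the step-size trade-off in 1) concrete, here is a minimal one-dimensional random-walk Metropolis sketch in Python (illustrative only; it targets a standard normal, and the step sizes 0.1 and 25 are arbitrary choices): a tiny proposal standard deviation accepts almost everything, a huge one rejects almost everything.

```python
import math
import random

def rw_metropolis(log_target, x0, step_sd, n_iter, rng):
    # Random-walk Metropolis: propose x' ~ N(x, step_sd^2) and accept
    # with probability min(1, target(x') / target(x)).
    x, accepts, chain = x0, 0, []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step_sd)
        log_alpha = log_target(prop) - log_target(x)
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = prop
            accepts += 1
        chain.append(x)
    return chain, accepts / n_iter

log_std_normal = lambda x: -0.5 * x * x  # standard normal, up to a constant

rng = random.Random(42)
_, acc_small = rw_metropolis(log_std_normal, 0.0, 0.1, 20000, rng)
_, acc_large = rw_metropolis(log_std_normal, 0.0, 25.0, 20000, rng)
print(acc_small, acc_large)  # near-1 acceptance vs. near-0 acceptance
```

Acceptance rates near 1 (tiny steps) or near 0 (huge steps) both signal poor mixing; tuning step_sd toward an intermediate acceptance rate, often quoted around 0.2 to 0.5 for random-walk samplers, usually works much better.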
Neither method's success should depend on the starting value of the sampler. No matter where you start, the Markov chain should eventually converge to the target distribution. To check convergence, you could run several chains from different starting points and perform a convergence diagnostic such as the Gelman-Rubin convergence diagnostic. | MCMC with Metropolis-Hastings algorithm: Choosing proposal | 1) You could think about this method as a random walk approach. When the proposal distribution $x \mid x^t \sim N( x^t, \sigma^2)$, it is commonly referred to as the Metropolis Algorithm. If $\sigma | MCMC with Metropolis-Hastings algorithm: Choosing proposal
1) You could think about this method as a random walk approach. When the proposal distribution $x \mid x^t \sim N( x^t, \sigma^2)$, it is commonly referred to as the Metropolis Algorithm. If $\sigma^2$ is too small, you will have a high acceptance rate and very slowly explore the target distribution. In fact, if $\sigma^2$ is too small and the distribution is multi-modal, the sampler may get stuck in a particular mode and won't be able to fully explore the target distribution. On the other hand, if $\sigma^2$ is too large, the acceptance rate will be too low. Since you have three dimensions, your proposal distribution would have a covariance matrix $\Sigma$ which will likely require different variances and covariances for each dimension. Choosing an appropriate $\Sigma$ may be difficult.
2) If your proposal distribution is always $N(\mu, \sigma^2)$, then this is the independent Metropolis-Hastings algorithm since your proposal distribution does not depend on your current sample. This method works best if your proposal distribution is a good approximation of the target distribution you wish to sample from. You are correct that choosing a good normal approximation can be difficult.
Neither method's success should depend on the starting value of the sampler. No matter where you start, the Markov chain should eventually converge to the target distribution. To check convergence, you could run several chains from different starting points and perform a convergence diagnostic such as the Gelman-Rubin convergence diagnostic. | MCMC with Metropolis-Hastings algorithm: Choosing proposal
1) You could think about this method as a random walk approach. When the proposal distribution $x \mid x^t \sim N( x^t, \sigma^2)$, it is commonly referred to as the Metropolis Algorithm. If $\sigma |
18,992 | Estimating n in coupon collector's problem | For the equal probability/frequency case, this approach may work for you.
Let $K$ be the total sample size, $N$ be the number of different items observed, $N_1$ be the number of items seen exactly once, $N_2$ be the number of items seen exactly twice, $A=N_1(1− {N_1 \over K} )+2N_2,$ and $\hat Q = {N_1 \over K}.$
Then an approximate 95% confidence interval on the total population size $n$ is given by
$$ \hat n_{Lower}={1 \over {1-\hat Q+{1.96 \sqrt{A} \over K} }}$$
$$\hat n_{Upper}={1 \over {1-\hat Q-{1.96 \sqrt{A} \over K} }}$$
When implementing, you may need to adjust these depending on your data.
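For illustration, the interval can be coded directly from the formulas as stated (a Python sketch; the counts K, N1, N2 below are made up, with K, N1, N2 as defined above):

```python
import math

def esty_interval(K, N1, N2, z=1.96):
    # Approximate 95% interval built from the sample size K, the number of
    # items seen exactly once (N1) and exactly twice (N2), implementing the
    # formulas exactly as stated in the answer.
    Q = N1 / K
    A = N1 * (1 - N1 / K) + 2 * N2
    half = z * math.sqrt(A) / K
    lower = 1 / (1 - Q + half)
    upper = 1 / (1 - Q - half)
    return lower, upper

# hypothetical counts: 1000 draws, 200 singletons, 100 doubletons
lo, hi = esty_interval(K=1000, N1=200, N2=100)
print(lo, hi)
```

Note the upper bound blows up as $1.96\sqrt{A}/K$ approaches $1-\hat Q$, which is one of the situations where the adjustment mentioned above becomes necessary.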
The method is due to Good and Turing. A reference with the confidence interval is Esty, Warren W. (1983), "A Normal Limit Law for a Nonparametric Estimator of the Coverage of a Random Sample", Ann. Statist., Volume 11, Number 3, 905-912.
For the more general problem, Bunge has produced free software that produces several estimates. Search with his name and the word CatchAll. | Estimating n in coupon collector's problem | For the equal probability/frequency case, this approach may work for you.
Let $K$ be the total sample size, $N$ be the number of different items observed, $N_1$ be the number of items seen exactly on | Estimating n in coupon collector's problem
For the equal probability/frequency case, this approach may work for you.
Let $K$ be the total sample size, $N$ be the number of different items observed, $N_1$ be the number of items seen exactly once, $N_2$ be the number of items seen exactly twice, $A=N_1(1− {N_1 \over K} )+2N_2,$ and $\hat Q = {N_1 \over K}.$
Then an approximate 95% confidence interval on the total population size $n$ is given by
$$ \hat n_{Lower}={1 \over {1-\hat Q+{1.96 \sqrt{A} \over K} }}$$
$$\hat n_{Upper}={1 \over {1-\hat Q-{1.96 \sqrt{A} \over K} }}$$
When implementing, you may need to adjust these depending on your data.
The method is due to Good and Turing. A reference with the confidence interval is Esty, Warren W. (1983), "A Normal Limit Law for a Nonparametric Estimator of the Coverage of a Random Sample", Ann. Statist., Volume 11, Number 3, 905-912.
For the more general problem, Bunge has produced free software that produces several estimates. Search with his name and the word CatchAll. | Estimating n in coupon collector's problem
For the equal probability/frequency case, this approach may work for you.
Let $K$ be the total sample size, $N$ be the number of different items observed, $N_1$ be the number of items seen exactly on |
18,993 | Estimating n in coupon collector's problem | Likelihood function and probability
In an answer to a question about the reverse birthday problem a solution for a likelihood function has been given by Cody Maughan.
The likelihood function for the number of fortune cookie types $m$ when we draw $k$ different fortune cookies in $n$ draws (where every fortune cookie type has equal probability of appearing in a draw) can be expressed as:
$$\begin{array}{}
\mathcal{L}(m \, \vert \, k,n ) = m^{-n} \frac{m!}{(m-k)!} \propto P(k \, \vert \, m,n) &=& m^{-n}\frac{m!}{(m-k)!} \cdot \underbrace{S(n,k)}_{\begin{subarray}{l}\text{Stirling number }\\ \text{of the 2nd kind}\end{subarray}}\\
&=& m^{-n}\frac{m!}{(m-k)!} \cdot \frac{1}{k!} \sum_{i=0}^k {(-1)^{i}{k \choose i}}{(k-i)^n} \\
&=& {{m}\choose{k}} \sum_{i=0}^k {(-1)^{i}{k \choose i}}{\left(\frac{k-i}{m}\right)^n}
\end{array}$$
For a derivation of the probability on the right hand side see the occupancy problem. This has been described before on this website by Ben. The expression is similar to the one in the answer by Sylvain.
Maximum likelihood estimate
We can compute first order and second order approximations of the maximum of the likelihood function at
$$m_1 \approx \frac{ {{n}\choose{2}}}{n-k}$$
$$m_2 \approx \frac{ {{n}\choose{2}} + \sqrt{{{n}\choose{2}}^2 - 4(n-k) {{n}\choose{3}}}}{2(n-k)}$$
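For concreteness, here is a Python sketch ($n=200$ and $k=150$ are made-up numbers) comparing the two approximations with a brute-force maximization of $\log\mathcal{L}(m) = -n\log m + \log m! - \log (m-k)!$ over integer $m$:

```python
import math

def loglik(m, n, k):
    # m-dependent part of the log-likelihood: log( m^(-n) * m! / (m-k)! )
    return -n * math.log(m) + math.lgamma(m + 1) - math.lgamma(m - k + 1)

n, k = 200, 150  # hypothetical: 200 draws, 150 distinct cookies seen

m1 = math.comb(n, 2) / (n - k)
m2 = (math.comb(n, 2)
      + math.sqrt(math.comb(n, 2) ** 2 - 4 * (n - k) * math.comb(n, 3))) / (2 * (n - k))

# exact integer MLE by brute-force search over m
m_mle = max(range(k, 5000), key=lambda m: loglik(m, n, k))
print(m1, m2, m_mle)
```

The grid maximum is finite because for $k<n$ the log-likelihood behaves like $(k-n)\log m \to -\infty$ as $m$ grows; the closed-form expressions are only rough approximations to it.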
Likelihood interval
(note, this is not the same as a confidence interval see: The basic logic of constructing a confidence interval)
This remains an open problem for me. I am not sure yet how to deal with the expression $m^{-n} \frac{m!}{(m-k)!}$ (of course one can compute all values and select the boundaries based on that, but it would be nicer to have some explicit exact formula or estimate). I cannot seem to relate it to any other distribution, which would greatly help to evaluate it. But I feel like a nice (simple) expression could be possible from this likelihood interval approach.
Confidence interval
For the confidence interval we can use a normal approximation. In Ben's answer the following mean and variance are given:
$$\mathbb{E}[K] = m \left(1-\left(1 - \frac{1}{m}\right)^n\right)$$
$$\mathbb{V}[K] = m \left(\left(m-1\right)\left(1-\frac{2}{m}\right)^n + \left(1 - \frac{1}{m}\right)^n - m \left(1 - \frac{1}{m}\right)^{2n} \right)$$
Say for a given sample $n=200$ and observed unique cookies $k$ the 95% boundaries $\mathbb{E}[K] \pm 1.96 \sqrt{\mathbb{V}[K]}$ look like:
In the image above the curves for the interval have been drawn by expressing the lines as a function of the population size $m$ and sample size $n$ (so the x-axis is the dependent variable in drawing these curves).
The difficulty is to invert this and obtain the interval values for a given observed value $k$. It can be done computationally, but possibly there is some more direct function.
In the image I have also added Clopper Pearson confidence intervals based on a direct computation of the cumulative distribution based on all the probabilities $P(k \, \vert \, m,n)$ (I did this in R where I needed to use the Strlng2 function from the CryptRndTest package which is an asymptotic approximation of the logarithm of the Stirling number of the second kind). You can see that the boundaries coincide reasonably well, so the normal approximation is performing well in this case.
# function to compute Probability
library("CryptRndTest")
P5 <- function(m,n,k) {
exp(-n*log(m)+lfactorial(m)-lfactorial(m-k)+Strlng2(n,k))
}
P5 <- Vectorize(P5)
# function for expected value
m4 <- function(m,n) {
m*(1-(1-1/m)^n)
}
# function for variance
v4 <- function(m,n) {
m*((m-1)*(1-2/m)^n+(1-1/m)^n-m*(1-1/m)^(2*n))
}
# compute 95% boundaries based on Pearson Clopper intervals
# first a distribution is computed
# then the 2.5% and 97.5% boundaries of the cumulative values are located
simDist <- function(m,n,p=0.05) {
k <- 1:min(n,m)
dist <- P5(m,n,k)
dist[is.na(dist)] <- 0
dist[dist == Inf] <- 0
c(max(which(cumsum(dist)<p/2))+1,
min(which(cumsum(dist)>1-p/2))-1)
}
# some values for the example
n <- 200
m <- 1:5000
k <- 1:n
# compute the Pearon Clopper intervals
res <- sapply(m, FUN = function(x) {simDist(x,n)})
# plot the maximum likelihood estimate
plot(m4(m,n),m,
log="", ylab="estimated population size m", xlab = "observed uniques k",
xlim =c(1,200),ylim =c(1,5000),
pch=21,col=1,bg=1,cex=0.7, type = "l", yaxt = "n")
axis(2, at = c(0,2500,5000))
# add lines for confidence intervals based on normal approximation
lines(m4(m,n)+1.96*sqrt(v4(m,n)),m, lty=2)
lines(m4(m,n)-1.96*sqrt(v4(m,n)),m, lty=2)
# add lines for confidence intervals based on Clopper-Pearson
lines(res[1,],m,col=3,lty=2)
lines(res[2,],m,col=3,lty=2)
# add legend
legend(0,5100,
c("MLE","95% interval\n(Normal Approximation)\n","95% interval\n(Clopper-Pearson)\n")
, lty=c(1,2,2), col=c(1,1,3),cex=0.7,
box.col = rgb(0,0,0,0)) | Estimating n in coupon collector's problem | Likelihood function and probability
In an answer to a question about the reverse birthday problem a solution for a likelihood function has been given by Cody Maughan.
The likelihood function for the n | Estimating n in coupon collector's problem
Likelihood function and probability
In an answer to a question about the reverse birthday problem a solution for a likelihood function has been given by Cody Maughan.
The likelihood function for the number of fortune cookie types $m$ when we draw $k$ different fortune cookies in $n$ draws (where every fortune cookie type has equal probability of appearing in a draw) can be expressed as:
$$\begin{array}{}
\mathcal{L}(m \, \vert \, k,n ) = m^{-n} \frac{m!}{(m-k)!} \propto P(k \, \vert \, m,n) &=& m^{-n}\frac{m!}{(m-k)!} \cdot \underbrace{S(n,k)}_{\begin{subarray}{l}\text{Stirling number }\\ \text{of the 2nd kind}\end{subarray}}\\
&=& m^{-n}\frac{m!}{(m-k)!} \cdot \frac{1}{k!} \sum_{i=0}^k {(-1)^{i}{k \choose i}}{(k-i)^n} \\
&=& {{m}\choose{k}} \sum_{i=0}^k {(-1)^{i}{k \choose i}}{\left(\frac{k-i}{m}\right)^n}
\end{array}$$
For a derivation of the probability on the right hand side see the occupancy problem. This has been described before on this website by Ben. The expression is similar to the one in the answer by Sylvain.
Maximum likelihood estimate
We can compute first order and second order approximations of the maximum of the likelihood function at
$$m_1 \approx \frac{ {{n}\choose{2}}}{n-k}$$
$$m_2 \approx \frac{ {{n}\choose{2}} + \sqrt{{{n}\choose{2}}^2 - 4(n-k) {{n}\choose{3}}}}{2(n-k)}$$
Likelihood interval
(note, this is not the same as a confidence interval see: The basic logic of constructing a confidence interval)
This remains an open problem for me. I am not sure yet how to deal with the expression $m^{-n} \frac{m!}{(m-k)!}$ (of course one can compute all values and select the boundaries based on that, but it would be nicer to have some explicit exact formula or estimate). I cannot seem to relate it to any other distribution, which would greatly help to evaluate it. But I feel like a nice (simple) expression could be possible from this likelihood interval approach.
Confidence interval
For the confidence interval we can use a normal approximation. In Ben's answer the following mean and variance are given:
$$\mathbb{E}[K] = m \left(1-\left(1 - \frac{1}{m}\right)^n\right)$$
$$\mathbb{V}[K] = m \left(\left(m-1\right)\left(1-\frac{2}{m}\right)^n + \left(1 - \frac{1}{m}\right)^n - m \left(1 - \frac{1}{m}\right)^{2n} \right)$$
Say for a given sample $n=200$ and observed unique cookies $k$ the 95% boundaries $\mathbb{E}[K] \pm 1.96 \sqrt{\mathbb{V}[K]}$ look like:
In the image above the curves for the interval have been drawn by expressing the lines as a function of the population size $m$ and sample size $n$ (so the x-axis is the dependent variable in drawing these curves).
The difficulty is to invert this and obtain the interval values for a given observed value $k$. It can be done computationally, but possibly there is some more direct function.
In the image I have also added Clopper Pearson confidence intervals based on a direct computation of the cumulative distribution based on all the probabilities $P(k \, \vert \, m,n)$ (I did this in R where I needed to use the Strlng2 function from the CryptRndTest package which is an asymptotic approximation of the logarithm of the Stirling number of the second kind). You can see that the boundaries coincide reasonably well, so the normal approximation is performing well in this case.
# function to compute Probability
library("CryptRndTest")
P5 <- function(m,n,k) {
exp(-n*log(m)+lfactorial(m)-lfactorial(m-k)+Strlng2(n,k))
}
P5 <- Vectorize(P5)
# function for expected value
m4 <- function(m,n) {
m*(1-(1-1/m)^n)
}
# function for variance
v4 <- function(m,n) {
m*((m-1)*(1-2/m)^n+(1-1/m)^n-m*(1-1/m)^(2*n))
}
# compute 95% boundaries based on Pearson Clopper intervals
# first a distribution is computed
# then the 2.5% and 97.5% boundaries of the cumulative values are located
simDist <- function(m,n,p=0.05) {
k <- 1:min(n,m)
dist <- P5(m,n,k)
dist[is.na(dist)] <- 0
dist[dist == Inf] <- 0
c(max(which(cumsum(dist)<p/2))+1,
min(which(cumsum(dist)>1-p/2))-1)
}
# some values for the example
n <- 200
m <- 1:5000
k <- 1:n
# compute the Pearson-Clopper intervals
res <- sapply(m, FUN = function(x) {simDist(x,n)})
# plot the maximum likelihood estimate
plot(m4(m,n),m,
log="", ylab="estimated population size m", xlab = "observed uniques k",
xlim =c(1,200),ylim =c(1,5000),
pch=21,col=1,bg=1,cex=0.7, type = "l", yaxt = "n")
axis(2, at = c(0,2500,5000))
# add lines for confidence intervals based on normal approximation
lines(m4(m,n)+1.96*sqrt(v4(m,n)),m, lty=2)
lines(m4(m,n)-1.96*sqrt(v4(m,n)),m, lty=2)
# add lines for confidence intervals based on Clopper-Pearson
lines(res[1,],m,col=3,lty=2)
lines(res[2,],m,col=3,lty=2)
# add legend
legend(0,5100,
c("MLE","95% interval\n(Normal Approximation)\n","95% interval\n(Clopper-Pearson)\n")
, lty=c(1,2,2), col=c(1,1,3),cex=0.7,
box.col = rgb(0,0,0,0)) | Estimating n in coupon collector's problem
Likelihood function and probability
In an answer to a question about the reverse birthday problem a solution for a likelihood function has been given by Cody Maughan.
The likelihood function for the n |
18,994 | Estimating n in coupon collector's problem | I do not know if it can help, but it is the problem of taking $k$ different balls during $n$ trials in an urn with $m$ balls labelled differently with replacement. According to this page (in French), if $X_n$ is the random variable counting the number of different balls, the probability function is given by:
$P(X_n = k) = {m \choose k} \sum_{i=0}^k {(-1)^{k-i}{k \choose i}}{(\frac{i}{m})^n}$
Then you can use a maximum likelihood estimator.
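The formula is easy to evaluate numerically. A Python sketch (with illustrative values $m=5$, $n=7$) that also sanity-checks it: the probabilities sum to one, and the mean equals the standard expected number of distinct values, $m\left(1-(1-1/m)^n\right)$:

```python
from math import comb

def p_distinct(k, m, n):
    # P(X_n = k): probability of exactly k distinct balls in n draws
    # with replacement from an urn with m distinct balls
    return comb(m, k) * sum((-1) ** (k - i) * comb(k, i) * (i / m) ** n
                            for i in range(k + 1))

m, n = 5, 7
probs = [p_distinct(k, m, n) for k in range(0, min(m, n) + 1)]
mean = sum(k * p for k, p in enumerate(probs))
print(probs, mean)
```

A maximum likelihood estimate then just means picking the $m$ that maximizes p_distinct for the observed $k$ and $n$.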
Another formula with proof is given here to solve the occupancy problem. | Estimating n in coupon collector's problem | I do not know if it can help but it is the problem of taking $k$ different balls during $n$ trials in an urn with $m$ balls labelled differently with replacement. According to this page (in french) if | Estimating n in coupon collector's problem
I do not know if it can help, but it is the problem of taking $k$ different balls during $n$ trials in an urn with $m$ balls labelled differently with replacement. According to this page (in French), if $X_n$ is the random variable counting the number of different balls, the probability function is given by:
$P(X_n = k) = {m \choose k} \sum_{i=0}^k {(-1)^{k-i}{k \choose i}}{(\frac{i}{m})^n}$
Then you can use a maximum likelihood estimator.
Another formula with proof is given here to solve the occupancy problem. | Estimating n in coupon collector's problem
I do not know if it can help but it is the problem of taking $k$ different balls during $n$ trials in an urn with $m$ balls labelled differently with replacement. According to this page (in french) if |
18,995 | Can I reconstruct a normal distribution from sample size, and min and max values? I can use mid-point to proxy the mean | The joint cumulative distribution function for the minimum $x_{(1)}$ & maximum $x_{(n)}$ for a sample of $n$ from a Gaussian distribution with mean $\mu$ & standard deviation $\sigma$ is
$$
F(x_{(1)},x_{(n)};\mu,\sigma) = \Pr(X_{(1)}<x_{(1)}, X_{(n)}<x_{(n)})\\
=\Pr( X_{(n)}<x_{(n)}) - \Pr(X_{(1)}>x_{(1)}, X_{(n)}<x_{(n)})\\
=\Phi\left(\tfrac{x_{(n)}-\mu}{\sigma}\right)^n - \left[\Phi\left(\tfrac{x_{(n)}-\mu}{\sigma}\right) -\Phi\left(\tfrac{x_{(1)}-\mu}{\sigma}\right)\right]^n
$$
where $\Phi(\cdot)$ is the standard Gaussian CDF. Differentiation with respect to $x_{(1)}$ & $x_{(n)}$ gives the joint probability density function
$$
f(x_{(1)},x_{(n)};\mu,\sigma) =\\ n(n-1)\left[\Phi\left(\tfrac{x_{(n)}-\mu}{\sigma}\right) - \Phi\left(\tfrac{x_{(1)}-\mu}{\sigma}\right)\right]^{n-2}\cdot\phi\left(\tfrac{x_{(n)}-\mu}{\sigma}\right)\cdot\phi\left(\tfrac{x_{(1)}-\mu}{\sigma}\right)\cdot\tfrac{1}{\sigma^2}
$$
where $\phi(\cdot)$ is the standard Gaussian PDF. Taking the log & dropping terms that don't contain parameters gives the log-likelihood function
$$
\ell(\mu,\sigma;x_{(1)},x_{(n)}) =\\ (n-2)\log\left[\Phi\left(\tfrac{x_{(n)}-\mu}{\sigma}\right) - \Phi\left(\tfrac{x_{(1)}-\mu}{\sigma}\right)\right]
+ \log\phi\left(\tfrac{x_{(n)}-\mu}{\sigma}\right) + \log\phi\left(\tfrac{x_{(1)}-\mu}{\sigma}\right) - 2\log\sigma
$$
This doesn't look very tractable but it's easy to see that it's maximized whatever the value of $\sigma$ by setting $\mu=\hat\mu=\frac{x_{(n)}+x_{(1)}}{2}$, i.e. the midpoint—the first term is maximized when the argument of one CDF is the negative of the argument of the other; the second & third terms represent the joint likelihood of two independent normal variates.
Substituting $\hat\mu$ into the log-likelihood & writing $r=x_{(n)}-x_{(1)}$ gives
$$\ell(\sigma;x_{(1)},x_{(n)},\hat\mu)=(n-2)\log\left[1 - 2\Phi\left(\tfrac{-r}{2\sigma}\right)\right] - \frac{r^2}{4\sigma^2} -2\log{\sigma}$$
This expression has to be maximized numerically (e.g. with optimize from R's stats package) to find $\hat\sigma$. (It turns out that $\hat\sigma=k(n)\cdot r$, where $k$ is a constant depending only on $n$—perhaps someone more mathematically adroit than I could show why.)
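As an example of that numerical maximization, here is a Python stand-in for R's optimize using a crude grid search ($n=10$ and the two range values below are arbitrary); it also illustrates the $\hat\sigma=k(n)\cdot r$ claim, since doubling $r$ doubles the estimate:

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def profile_loglik(sigma, r, n):
    # profile log-likelihood in sigma after plugging in mu-hat = midpoint
    return ((n - 2) * math.log(1.0 - 2.0 * Phi(-r / (2.0 * sigma)))
            - r * r / (4.0 * sigma * sigma)
            - 2.0 * math.log(sigma))

def sigma_hat(r, n):
    # crude grid search over sigma (a stand-in for a proper 1-d optimizer)
    grid = [0.02 + 0.0005 * i for i in range(6000)]
    return max(grid, key=lambda s: profile_loglik(s, r, n))

n = 10
s1 = sigma_hat(r=1.0, n=n)
s2 = sigma_hat(r=2.0, n=n)
print(s1, s2, s2 / s1)  # the ratio should be very close to 2
```

The scaling is visible in the formula itself: writing $\sigma = s\cdot r$ makes the log-likelihood depend on $s$ plus a term $-2\log r$ that is constant in $s$, so the maximizing $s$ does not depend on $r$.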
Estimates are no use without an accompanying measure of precision. The observed Fisher information can be evaluated numerically (e.g. with hessian from R's numDeriv package) & used to calculate approximate standard errors:
$$I(\mu)=-\left.\frac{\partial^2{\ell(\mu;\hat\sigma)}}{(\partial\mu)^2}\right|_{\mu=\hat\mu}$$
$$I(\sigma)=-\left.\frac{\partial^2{\ell(\sigma;\hat\mu)}}{(\partial\sigma)^2}\right|_{\sigma=\hat\sigma}$$
It would be interesting to compare the likelihood & the method-of-moments estimates for $\sigma$ in terms of bias (is the MLE consistent?), variance, & mean-square error. There's also the issue of estimation for those groups where the sample mean is known in addition to the minimum & maximum. | Can I reconstruct a normal distribution from sample size, and min and max values? I can use mid-poin | The joint cumulative distribution function for the minimum $x_{(1)}$ & maximum $x_{(n)}$ for a sample of $n$ from a Gaussian distribution with mean $\mu$ & standard deviation $\sigma$ is
$$
F(x_{(1)}, | Can I reconstruct a normal distribution from sample size, and min and max values? I can use mid-point to proxy the mean
The joint cumulative distribution function for the minimum $x_{(1)}$ & maximum $x_{(n)}$ for a sample of $n$ from a Gaussian distribution with mean $\mu$ & standard deviation $\sigma$ is
$$
F(x_{(1)},x_{(n)};\mu,\sigma) = \Pr(X_{(1)}<x_{(1)}, X_{(n)}<x_{(n)})\\
=\Pr( X_{(n)}<x_{(n)}) - \Pr(X_{(1)}>x_{(1)}, X_{(n)}<x_{(n)})\\
=\Phi\left(\tfrac{x_{(n)}-\mu}{\sigma}\right)^n - \left[\Phi\left(\tfrac{x_{(n)}-\mu}{\sigma}\right) -\Phi\left(\tfrac{x_{(1)}-\mu}{\sigma}\right)\right]^n
$$
where $\Phi(\cdot)$ is the standard Gaussian CDF. Differentiation with respect to $x_{(1)}$ & $x_{(n)}$ gives the joint probability density function
$$
f(x_{(1)},x_{(n)};\mu,\sigma) =\\ n(n-1)\left[\Phi\left(\tfrac{x_{(n)}-\mu}{\sigma}\right) - \Phi\left(\tfrac{x_{(1)}-\mu}{\sigma}\right)\right]^{n-2}\cdot\phi\left(\tfrac{x_{(n)}-\mu}{\sigma}\right)\cdot\phi\left(\tfrac{x_{(1)}-\mu}{\sigma}\right)\cdot\tfrac{1}{\sigma^2}
$$
where $\phi(\cdot)$ is the standard Gaussian PDF. Taking the log & dropping terms that don't contain parameters gives the log-likelihood function
$$
\ell(\mu,\sigma;x_{(1)},x_{(n)}) =\\ (n-2)\log\left[\Phi\left(\tfrac{x_{(n)}-\mu}{\sigma}\right) - \Phi\left(\tfrac{x_{(1)}-\mu}{\sigma}\right)\right]
+ \log\phi\left(\tfrac{x_{(n)}-\mu}{\sigma}\right) + \log\phi\left(\tfrac{x_{(1)}-\mu}{\sigma}\right) - 2\log\sigma
$$
This doesn't look very tractable but it's easy to see that it's maximized whatever the value of $\sigma$ by setting $\mu=\hat\mu=\frac{x_{(n)}+x_{(1)}}{2}$, i.e. the midpoint—the first term is maximized when the argument of one CDF is the negative of the argument of the other; the second & third terms represent the joint likelihood of two independent normal variates.
Substituting $\hat\mu$ into the log-likelihood & writing $r=x_{(n)}-x_{(1)}$ gives
$$\ell(\sigma;x_{(1)},x_{(n)},\hat\mu)=(n-2)\log\left[1 - 2\Phi\left(\tfrac{-r}{2\sigma}\right)\right] - \frac{r^2}{4\sigma^2} -2\log{\sigma}$$
This expression has to be maximized numerically (e.g. with optimize from R's stats package) to find $\hat\sigma$. (It turns out that $\hat\sigma=k(n)\cdot r$, where $k$ is a constant depending only on $n$—perhaps someone more mathematically adroit than I could show why.)
Estimates are no use without an accompanying measure of precision. The observed Fisher information can be evaluated numerically (e.g. with hessian from R's numDeriv package) & used to calculate approximate standard errors:
$$I(\mu)=-\left.\frac{\partial^2{\ell(\mu;\hat\sigma)}}{(\partial\mu)^2}\right|_{\mu=\hat\mu}$$
$$I(\sigma)=-\left.\frac{\partial^2{\ell(\sigma;\hat\mu)}}{(\partial\sigma)^2}\right|_{\sigma=\hat\sigma}$$
It would be interesting to compare the likelihood & the method-of-moments estimates for $\sigma$ in terms of bias (is the MLE consistent?), variance, & mean-square error. There's also the issue of estimation for those groups where the sample mean is known in addition to the minimum & maximum. | Can I reconstruct a normal distribution from sample size, and min and max values? I can use mid-poin
The joint cumulative distribution function for the minimum $x_{(1)}$ & maximum $x_{(n)}$ for a sample of $n$ from a Gaussian distribution with mean $\mu$ & standard deviation $\sigma$ is
$$
F(x_{(1)}, |
18,996 | Can I reconstruct a normal distribution from sample size, and min and max values? I can use mid-point to proxy the mean | You need to relate the range to the standard deviation/variance. Let $\mu$ be the mean, $\sigma$ the standard deviation and $R=x_{(n)} - x_{(1)}$ be the range. Then for the normal distribution we have that $99.7$% of probability mass lies within 3 standard deviations from the mean. This, as a practical rule, means that with very high probability,
$$\mu + 3\sigma \approx x_{(n)}$$
and
$$\mu - 3\sigma \approx x_{(1)}$$
Subtracting the second from the first we obtain
$$6\sigma \approx x_{(n)} - x_{(1)}= R$$
(this, by the way is whence the "six-sigma" quality assurance methodology in industry comes).
Then you can obtain an estimate for the standard deviation by
$$\hat \sigma = \frac 16 \Big(\bar x_{(n)} - \bar x_{(1)}\Big)$$
where the bar denotes averages. This is when you assume that all sub-samples come from the same distribution (you wrote about having expected ranges). If each sample is a different normal, with different mean and variance, then you can use the formula for each sample, but the uncertainty / possible inaccuracy in the estimated value of the standard deviation will be much larger.
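A caveat worth noting: the divisor 6 assumes the sample range actually spans about six standard deviations, which requires a fairly large sample (the expected range of a normal sample is roughly $5\sigma$ at $n=100$ and only approaches $6\sigma$ for $n$ in the several hundreds). A quick Python simulation (illustrative, not from the answer) shows the effect at $n=100$:

```python
import random

random.seed(7)

n, reps, sigma = 100, 2000, 1.0

# average sample range of n iid N(0, sigma^2) draws over many replications
ranges = []
for _ in range(reps):
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    ranges.append(max(xs) - min(xs))

avg_range = sum(ranges) / reps
print(avg_range, avg_range / 6)  # R/6 underestimates sigma at this n
```

So for moderate sample sizes a divisor smaller than 6 (the d2 constants used in quality control serve exactly this purpose) gives a less biased estimate.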
Having a value for the mean and for the standard deviation completely characterizes the normal distribution. | Can I reconstruct a normal distribution from sample size, and min and max values? I can use mid-poin | You need to relate the range to the standard deviation/variance.Let $\mu$ be the mean, $\sigma$ the standard deviation and $R=x_{(n)} - x_{(1)}$ be the range. Then for the normal distribution we have | Can I reconstruct a normal distribution from sample size, and min and max values? I can use mid-point to proxy the mean
18,997 | Can I reconstruct a normal distribution from sample size, and min and max values? I can use mid-point to proxy the mean | It is straightforward to get the distribution function of the maximum of the normal distribution (see "P.max.norm" in code).
From it (with some calculus) you can get the quantile function (see "Q.max.norm").
Using "Q.max.norm" and "Q.min.norm" you can get the medians of the maximum and of the minimum, which depend on the sample size N.
Using the idea presented by Alecos Papadopoulos (in the previous answer) you can calculate sd.
Try this:
N = 100000 # the size of the sample
# Probability function given q and N
P.max.norm <- function(q, N=1, mean=0, sd=1){
pnorm(q,mean,sd)^N
}
# Quantile functions given p and N
Q.max.norm <- function(p, N=1, mean=0, sd=1){
qnorm(p^(1/N),mean,sd)
}
Q.min.norm <- function(p, N=1, mean=0, sd=1){
mean-(Q.max.norm(p, N=N, mean=mean, sd=sd)-mean)
}
### let's test it (takes some time)
Q.max.norm(0.5, N=N) # the median of the maximum
Q.min.norm(0.5, N=N) # the median of the minimum
iter = 100
median(replicate(iter, max(rnorm(N))))
median(replicate(iter, min(rnorm(N))))
# it is quite OK
### Let's try to get estimates
true_mean = -3
true_sd = 2
N = 100000
x = rnorm(N, true_mean, true_sd) # simulation
x.vec = range(x) # observations
# estimation
est_mean = mean(x.vec)
est_sd = diff(x.vec)/(Q.max.norm(0.5, N=N)-Q.min.norm(0.5, N=N))
c(true_mean, true_sd)
c(est_mean, est_sd)
# Quite good, but only for large N
# -3 2
# -3.252606 1.981593
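The same median-of-the-maximum formula, $\Phi^{-1}(0.5^{1/N})$, can be sanity-checked outside R as well; a brief standard-library Python sketch (illustrative only, not part of the original answer):

```python
import random
from statistics import NormalDist

N = 10_000
std = NormalDist()

# P(max <= q) = Phi(q)^N, so the median of the maximum solves Phi(q)^N = 0.5
q_max_median = std.inv_cdf(0.5 ** (1 / N))
q_min_median = -q_max_median  # by symmetry of the standard normal

# Simulate a handful of maxima and compare their median to the formula
random.seed(1)
sims = sorted(max(random.gauss(0, 1) for _ in range(N)) for _ in range(31))
sim_median = sims[len(sims) // 2]

print(q_max_median, sim_median)  # the two values should be close
```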
18,998 | Non-linear mixed effects regression in R | I wanted to share some of the things I learned since asking this question. nlme seems a reasonable way to model non-linear mixed effects in R. Start with a simple base model:
library(nlme)
data <- groupedData(y ~ t | UID, data=data) ## not strictly necessary
initVals <- getInitial(y ~ SSlogis(t, Asym, xmid, scal), data = data)
baseModel<- nlme(y ~ SSlogis(t, Asym, xmid, scal),
data = data,
fixed = list(Asym ~ 1, xmid ~ 1, scal ~ 1),
random = Asym + xmid + scal ~ 1|UID,
start = initVals
)
Then use update to increase model complexity. The start parameter is slightly tricky to work with; it may take some tinkering to figure out the order. Note how the new fixed effect for var1 on Asym follows the regular fixed effect for Asym.
nestedModel <- update(baseModel, fixed=list(Asym ~ var1, xmid ~ 1, scal ~ 1), start = c(fixef(baseModel)[1], 0, fixef(baseModel)[2], fixef(baseModel)[3]))
lme4 seemed more robust against the outliers in my dataset and seemed to offer more reliable convergence for the more complex models. However, the downside is that the model function (and its gradient) needs to be specified manually. The following is the logistic growth model with a fixed effect of var1 (binary) on Asym. You can add fixed effects on xmid and scal in a similar fashion. Note the unusual way of specifying the model using a double formula, as outcome ~ fixed effects ~ random effects.
library(lme4) ## careful loading nlme and lme4 concurrently
customLogitModel <- function(t, var1, Asym, AsymVar1, xmid, scal) { ## var1 must be an argument, not a free variable
(Asym+AsymVar1*var1)/(1+exp((xmid-t)/scal))
}
customLogitModelGradient <- deriv(
body(customLogitModel)[[2]],
namevec = c("Asym", "AsymVar1", "xmid", "scal"),
function.arg=customLogitModel
)
## find starting parameters
initVals <- getInitial(y ~ SSlogis(t, Asym, xmid, scal), data = data)
# Fit the model
model <- nlmer(
y ~ customLogitModelGradient(t=t, var1=var1, Asym, AsymVar1, xmid, scal) ~
# Random effects with a second ~
(Asym | UID) + (xmid | UID) + (scal | UID),
data = data,
start = c(Asym=initVals[1], AsymVar1=0, xmid=initVals[2], scal=initVals[3])
)
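For intuition about the three SSlogis parameters used throughout: the curve is Asym/(1 + exp((xmid - t)/scal)), where Asym is the upper asymptote, xmid the time of the half-maximum, and scal the growth time-scale. A tiny Python sketch (hypothetical parameter values, purely illustrative):

```python
import math

def logistic(t, Asym, xmid, scal):
    """Three-parameter logistic, as used by SSlogis in the R code above."""
    return Asym / (1.0 + math.exp((xmid - t) / scal))

Asym, xmid, scal = 10.0, 5.0, 1.5             # hypothetical values
half = logistic(xmid, Asym, xmid, scal)       # value at the inflection point: Asym/2
plateau = logistic(1000.0, Asym, xmid, scal)  # far right, approaches the asymptote

print(half, plateau)
```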
18,999 | Does an MCMC fulfilling detailed balance yield a stationary distribution? | It is not true that an MCMC fulfilling detailed balance always yields the stationary distribution. You also need the process to be ergodic. Let's see why:
Consider $x$ to be a state in the set of all possible states, and identify it by the index $i$. In a Markov process, a distribution $p_t(i)$ evolves according to
$$p_t(i) = \sum_{j} \Omega_{j \rightarrow i} p_{t-1}(j)$$
where $\Omega_{j \rightarrow i}$ is the matrix denoting the transition probabilities (your $q(x|y)$).
So, we have that
$$p_t(i) = \sum_{j} (\Omega^t)_{j \rightarrow i}\, p_{0}(j)$$
where $\Omega^t$ denotes the $t$-th matrix power.
The fact that $\Omega$ is a stochastic (transition-probability) matrix implies that all of its eigenvalues have modulus at most 1.
In order to ensure that any initial distribution $p_{0}(j)$ converges to the asymptotic one, you have to ensure that
1 The eigenvalue 1 of $\Omega$ is simple (its eigenvector is unique up to scaling), and every other eigenvalue has modulus strictly less than 1.
To ensure that $\pi$ is the asymptotic distribution, you need to ensure that
2 The eigenvector associated with eigenvalue 1 is $\pi$.
Ergodicity implies 1., detailed balance implies 2., and together they are therefore sufficient for asymptotic convergence. (Detailed balance is not necessary, though: the weaker global-balance condition also yields stationarity.)
Why detailed balance implies 2:
Starting from
$$p(i)\Omega_{ij} = \Omega_{ji} p(j)$$
and summing over $j$ in both sides, we obtain
$$p(i) = \sum_{j}\Omega_{ji} p(j)$$
because $\sum_{j} \Omega_{ij} = 1$, since from any state you always transition somewhere.
The above equation says exactly that $p$ is an eigenvector of $\Omega$ with eigenvalue 1 (easier to see if you write it in vector form):
$$ 1\cdot v = \Omega\, v$$
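To make the sum-over-$j$ argument concrete, here is a small numerical sketch (my illustration, not part of the original answer): build a reversible 3-state Metropolis chain for a chosen target $\pi$, check detailed balance entrywise, and verify that one transition step leaves $\pi$ unchanged.

```python
target = [0.2, 0.3, 0.5]          # desired stationary distribution pi
n = len(target)

# Metropolis transition matrix with a uniform proposal over all states
Omega = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            Omega[i][j] = (1.0 / n) * min(1.0, target[j] / target[i])
    Omega[i][i] = 1.0 - sum(Omega[i])  # rejection mass stays on the diagonal

# Detailed balance: pi_i * Omega_ij == pi_j * Omega_ji for every pair (i, j)
db_ok = all(
    abs(target[i] * Omega[i][j] - target[j] * Omega[j][i]) < 1e-12
    for i in range(n) for j in range(n)
)

# Stationarity: summing detailed balance over j gives pi_j = sum_i pi_i * Omega_ij
pi_next = [sum(target[i] * Omega[i][j] for i in range(n)) for j in range(n)]
print(db_ok, pi_next)
```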
19,000 | Does an MCMC fulfilling detailed balance yield a stationary distribution? | I think it does, because for an irreducible MC, if detailed balance is satisfied then it has a unique stationary distribution; but for it to be independent of the initial distribution, it also has to be aperiodic.
In the case of MCMC, we start from a data point and then propose a new point. We may or may not move to the proposed point, i.e., we have a self-loop, which makes an irreducible MC aperiodic.
Now, by virtue of satisfying DB, it also has positive recurrent states, i.e., the mean return time to each state is finite. So the chain that we construct in MCMC is irreducible, aperiodic, and positive recurrent, which means it is an ergodic chain.
We know that for an irreducible ergodic chain a stationary distribution exists which is unique and independent of initial distribution.
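The claim that an ergodic chain forgets its initial distribution can be illustrated numerically (a sketch with a hypothetical 3-state transition matrix, not part of the original answer):

```python
# An irreducible, aperiodic (all entries positive) row-stochastic matrix
Omega = [
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
]

def step(p, Omega):
    """One update: p_t(i) = sum_j p_{t-1}(j) * Omega[j][i]."""
    n = len(p)
    return [sum(p[j] * Omega[j][i] for j in range(n)) for i in range(n)]

# Two very different starting distributions
p1, p2 = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]
for _ in range(200):
    p1, p2 = step(p1, Omega), step(p2, Omega)

print(p1, p2)  # both end up at the same (unique) stationary distribution
```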