10,801
Posterior very different to prior and likelihood
Yes, this situation can arise, and it is a feature of your modeling assumptions, specifically normality in both the prior and the sampling model (likelihood). If you had instead chosen a Cauchy distribution for your prior, the posterior would look much different.
prior = function(x) dcauchy(x, 1.5, 0.4)
like  = function(x) dnorm(x, 6.1, 0.4)
# Posterior: normalize prior * likelihood numerically
propto = function(x) prior(x) * like(x)
d = integrate(propto, -Inf, Inf)
post = function(x) propto(x) / d$value
# Plot
par(mar = c(0, 0, 0, 0) + .1, lwd = 2)
curve(like, 0, 8, col = "red", axes = FALSE, frame.plot = TRUE)
curve(prior, add = TRUE, col = "blue")
curve(post, add = TRUE, col = "seagreen")
legend("bottomleft", c("Prior", "Likelihood", "Posterior"),
       col = c("blue", "red", "seagreen"), lty = 1, bg = "white")

10,802
Posterior very different to prior and likelihood
Tony O'Hagan wrote about this situation of "Bayesian surprise" in detail for the one-sample location problem in 1990. Basically, it depends on whether the prior or the likelihood has heavier tails. If the prior has heavier tails, the posterior is happy to sit way out in the tail of the prior, near the data. If the likelihood has heavier tails, the posterior is happy to sit way out in the tail of the likelihood, where the prior is concentrated. If the tails are about equally heavy, you either end up with a bimodal posterior (when both are heavy-tailed) or a posterior in the middle (when both are light-tailed), with one transitional example being the Laplace distribution, where you can get a single very wide posterior modal plateau.
I've got an app here that lets you play around with these.
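The tail-heaviness effect described above is easy to check numerically. The sketch below is my own illustration (not from the app), in Python rather than the thread's R; the distributions and numbers are assumptions borrowed from the other answers:

```python
# Numerical sketch (illustrative numbers): with a heavy-tailed Cauchy
# prior and a light-tailed normal likelihood, the posterior follows the
# data; swap the tail behaviour and it stays with the prior instead.
import numpy as np
from scipy import stats

def posterior_mean(prior_pdf, like_pdf, lo=-20.0, hi=20.0, n=200_001):
    """Posterior mean on a grid, normalizing prior(x) * like(x)."""
    x = np.linspace(lo, hi, n)
    w = prior_pdf(x) * like_pdf(x)
    w /= np.trapz(w, x)
    return np.trapz(x * w, x)

# Heavy-tailed prior, light-tailed likelihood: posterior near the data (6.1)
m1 = posterior_mean(stats.cauchy(1.5, 0.4).pdf, stats.norm(6.1, 0.4).pdf)

# Light-tailed prior, heavy-tailed likelihood: posterior near the prior (1.5)
m2 = posterior_mean(stats.norm(1.5, 0.4).pdf, stats.cauchy(6.1, 0.4).pdf)

print(round(m1, 2), round(m2, 2))
```

In both cases the light-tailed component wins: the posterior concentrates wherever the light-tailed density sits, with the heavy-tailed one acting as a mild tilt.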

10,803
Posterior very different to prior and likelihood
I somewhat disagree with the answers given so far: there is nothing weird about this situation. The likelihood is asymptotically normal anyway, and a normal prior is not uncommon at all. Put both together with the fact that the prior and the likelihood don't give the same answer, and we have the situation we are talking about here. I have depicted it below with the code by jaradniemi.
We mention in [1] that the natural conclusion from such an observation would be that either a) the model is structurally wrong, b) the data is wrong, or c) the prior is wrong. Something is wrong for sure, and you would also see this if you did some posterior-predictive checks, which you should do anyway.
[1] Hartig, F.; Dyke, J.; Hickler, T.; Higgins, S. I.; O'Hara, R. B.; Scheiter, S. & Huth, A. (2012) Connecting dynamic vegetation models to data - an inverse perspective. J. Biogeogr., 39, 2240-2252. http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2699.2012.02745.x/abstract
prior = function(x) dnorm(x, 1, .3)
like  = function(x) dnorm(x, -1, .3)
# Posterior: normalize prior * likelihood numerically
propto = function(x) prior(x) * like(x)
d = integrate(propto, -Inf, Inf)
post = function(x) propto(x) / d$value
# Plot
par(mar = c(0, 0, 0, 0) + .1, lwd = 2)
curve(like, -2, 2, col = "red", axes = FALSE, frame.plot = TRUE,
      ylim = c(0, 2))
curve(prior, add = TRUE, col = "blue")
curve(post, add = TRUE, col = "seagreen")
legend("bottomleft", c("Prior", "Likelihood", "Posterior"),
       col = c("blue", "red", "seagreen"), lty = 1, bg = "white")

10,804
Posterior very different to prior and likelihood
After thinking about this for a while, my conclusion is that with bad modelling assumptions, the posterior can be a result that accords with neither prior beliefs nor the likelihood. The natural consequence is that the posterior is not, in general, the end of the analysis. If the posterior should roughly fit the data, or should be diffuse between the prior and likelihood (as in this case), then this would have to be checked after the fact, probably with a posterior-predictive check or something similar. Incorporating this into the model itself would seem to require the ability to put probabilities on probabilistic statements, which I don't think is possible.
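A minimal sketch of the kind of posterior-predictive check meant here, under an assumed conjugate model (prior N(1.5, 0.4²), known observation sd 0.4, a single observation at 6.1 — all numbers borrowed from the thread for illustration, not the questioner's actual analysis):

```python
# Hedged sketch (assumed model): draw replicated datasets from the
# posterior predictive and ask how often they look as extreme as the
# observed data.
import numpy as np

rng = np.random.default_rng(0)

sigma = 0.4                    # known observation sd (assumed)
mu0, tau0 = 1.5, 0.4           # prior N(mu0, tau0^2) (assumed)
y = np.array([6.1])            # observed data (illustrative)

# Conjugate normal posterior for mu with known sigma
n = len(y)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + y.sum() / sigma**2)

# Replicated datasets from the posterior predictive
mu_draws = rng.normal(post_mean, np.sqrt(post_var), size=5000)
y_rep = rng.normal(mu_draws[:, None], sigma, size=(5000, n))

# Tail probability of the observed mean under the replications
p = np.mean(y_rep.mean(axis=1) >= y.mean())
print(p)  # essentially 0: the fitted model cannot reproduce the data
```

A posterior-predictive p-value this extreme is exactly the after-the-fact signal described above: the posterior may be a perfectly valid Bayesian update and still describe a model that cannot have generated the data.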

10,805
Posterior very different to prior and likelihood
I feel that the answer I was looking for with this question is best summarized by Lesaffre and Lawson in Bayesian Biostatistics:
The posterior precision is the sum of the prior and the sample precision, i.e.: $$ \frac{1}{\bar{\sigma}^2} = w_{0} + w_{1} $$ where $w_0$ is the prior precision and $w_1$ is the sample precision. This shows that the posterior is more peaked than both the prior and the likelihood function, which means that the posterior contains more information about $\mu$ than either of them. This property holds even when the likelihood and prior are in conflict (in contrast to the binomial-beta case). It may look counterintuitive since, in the presence of conflicting information, one would expect more uncertainty a posteriori rather than less. Note that this result only holds for the special, and unrealistic, case of a known $\sigma$.
What this summarizes for me, and is roughly outlined in the other answers, is that modeling a normal prior with a normal likelihood can produce a posterior that is more precise than either. This is counterintuitive, but it is a direct consequence of modeling these elements this way.
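The quoted precision-sum property is easy to verify numerically for the normal-normal model with known $\sigma$ (this is my check, not from the book; numbers borrowed from the running example in this thread):

```python
# Posterior precision = prior precision + sample precision, so the
# posterior sd is smaller than both the prior sd and the likelihood sd.
import math

tau0 = 0.4                 # prior sd (assumed from the running example)
sigma = 0.4                # known data sd, single observation
w0 = 1.0 / tau0**2         # prior precision
w1 = 1.0 / sigma**2        # sample precision
post_sd = math.sqrt(1.0 / (w0 + w1))

print(round(post_sd, 3))   # 0.283, narrower than either 0.4
```

With equal prior and sample precisions the posterior sd is exactly $0.4/\sqrt{2}$, regardless of how far apart the two means are — which is the counterintuitive part.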

10,806
Posterior very different to prior and likelihood
If your model is correct, observing a likelihood function falling this far from the prior is extremely unlikely. Imagine the situation before seeing the data $X_1$. Based on correct inferences from past data $X_0$, you believe that $\mu \sim N(1.6, 0.4^2)$. Suppose that the future set of data is a single normal variate $X_1 \sim N(\mu, 0.4^2)$. The likelihood will then have its maximum at $X_1$. It follows that the location of the likelihood function's maximum, once $X_1$ is observed, falls at a normally distributed position with expectation equal to your prior mean 1.6 and standard deviation equal to $\sqrt{0.4^2 + 0.4^2}\approx 0.56$. So the probability that it falls at 6.1 (as in your example), or even further from your prior mean, is $2\Phi(-(6.1-1.6)/0.56)=9.3\cdot 10^{-16}$. So this is not going to happen in practice, provided that your model, including past inferences about $\mu$, is correct.
For simplicity, suppose that the "past data" was also a single variate $X_0 \sim N(\mu,0.4^2)$ and that you started off with a flat prior before observing $X_0$. Put differently, in terms of $X_0$ and $X_1$, the above situation or something more extreme arises if $|X_1-X_0|>6.1-1.6$, which is extremely unlikely given that both variates, according to the model, should come from the same normal distribution with a standard deviation of 0.4.
So if you come across something like this in practice, it strongly suggests that the inferences made from $X_0$ or $X_1$ (or both) are wrong and have in some way ignored important sources of uncertainty. If you believed this to be the case but could not find any error, I would say the best you could do is adjust the standard deviations of both likelihood and prior post hoc to make them more compatible. This would indeed result in an adjusted, wider posterior more compatible with your intuition.
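The tail probability above can be checked in a few lines (a verification sketch: rounding the sd to 0.56 before computing reproduces the quoted $9.3\cdot10^{-16}$, while the unrounded sd $\sqrt{0.32}\approx0.566$ gives about $1.8\cdot10^{-15}$ — the same order of magnitude either way):

```python
# How likely is the likelihood's peak to land 4.5 or more away from
# the prior mean, when its position is N(1.6, 0.4^2 + 0.4^2)?
import math
from scipy.stats import norm

sd_exact = math.sqrt(0.4**2 + 0.4**2)            # ~ 0.566
p_exact = 2 * norm.cdf(-(6.1 - 1.6) / sd_exact)
p_rounded = 2 * norm.cdf(-(6.1 - 1.6) / 0.56)    # the answer's rounding

print(p_exact, p_rounded)
```

Either way, the event is about fifteen orders of magnitude too rare to attribute to chance under the stated model.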

10,807
Posterior very different to prior and likelihood
I think this is actually a really interesting question. Having slept on it, I think I have a stab at an answer. The key issue is as follows:
You've treated the likelihood as a Gaussian pdf. But it's not a probability distribution; it's a likelihood! What's more, you've not labelled your axes clearly. These things combined have confused everything that follows.
Let's say you're inferring the mean of a normal distribution, $\mu$. It's a one dimensional plot, so I'll assume $\sigma$ is known. In that case your prior distribution must be $P(\mu|\mu', \sigma')$, where $\mu'$ and $\sigma'$ are (fixed) hyperparameters controlling your prior's position and shape; your likelihood function is $P(X|\mu, \sigma)$, where $X$ is your observed data; and your posterior is $P(\mu|X, \sigma, \mu', \sigma')$. Given that, the only horizontal axis that makes sense to me in this diagram is one which is plotting $\mu$.
But if the horizontal axis shows values of $\mu$, why does the likelihood $P(X|\mu)$ have the same width and height as the prior? When you break it down, that's actually a really weird situation. Think about the form of the prior and the likelihood:
$$
P(\mu|\mu', \sigma') = \frac{1}{\sqrt{2 \pi \sigma'^2}} \exp\left(-\frac{(\mu-\mu')^2}{2 \sigma'^2}\right)
$$
$$
P(X|\mu,\sigma) = \prod_{i=1}^N \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left(-\frac{(x_i-\mu)^2}{2 \sigma^2}\right)
$$
The only way I can see that these can have the same width is if $\sigma'^2 = \sigma^2/N$. In other words, your prior is very informative, as its variance is going to be much lower than $\sigma^2$ for any reasonable value of $N$. It is literally as informative as the entire observed dataset $X$!
So, the prior and the likelihood are equally informative. Why isn't the posterior bimodal? This is because of your modelling assumptions. You've implicitly assumed a normal distribution in the way this is set up (normal prior, normal likelihood), and that constrains the posterior to give a unimodal answer. That's just a property of normal distributions, one you have baked into the problem by using them. A different model would not necessarily have done this. I have a feeling (though I lack a proof right now) that a Cauchy distribution can have a multimodal likelihood, and hence a multimodal posterior.
So, we have to be unimodal, and the prior is as informative as the likelihood. Under these constraints, the most sensible estimate is starting to sound like a point directly between the likelihood and prior, as we have no reasonable way to tell which to believe. But why does the posterior get tighter?
I think the confusion here comes from the fact that in this model, $\sigma$ is assumed to be known. Were it unknown, and we had a two-dimensional distribution over $\mu$ and $\sigma$, the observation of data far from the prior might make a high value of $\sigma$ more probable, and so increase the variance of the posterior distribution of the mean too (as these two are linked). But we're not in that situation: $\sigma$ is treated as known here. As such, adding more data can only make us more confident in our prediction of the position of $\mu$, and hence the posterior becomes narrower.
(A way to visualise it might be to imagine estimating the mean of a Gaussian, with known variance, using just two sample points. If the two sample points are separated by much more than the width of the Gaussian (i.e. they're out in the tails), then that's strong evidence the mean actually lies between them. Shifting the mean just slightly from this position causes an exponential drop in the probability of one sample or the other.)
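The two-point intuition in the parenthetical above can be sketched numerically (my illustration; the sample values 1.6 and 6.1 are just the prior mean and data location used elsewhere in the thread):

```python
# With sigma known, the likelihood for mu given two far-apart samples
# peaks at their midpoint, and drops off steeply away from it.
import numpy as np
from scipy.stats import norm

sigma = 0.4
x = np.array([1.6, 6.1])       # two samples far apart relative to sigma

mu_grid = np.linspace(0, 8, 4001)
loglike = norm.logpdf(x[:, None], mu_grid, sigma).sum(axis=0)

mu_hat = mu_grid[np.argmax(loglike)]
print(round(mu_hat, 2))  # 3.85, the midpoint of the two samples
```

The log-likelihood is quadratic in $\mu$ with curvature $-N/\sigma^2$, so the peak is narrow even though neither sample is anywhere near it.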
In summary, the situation you have described is a bit odd, and by using the model you have, you've built some assumptions (e.g. unimodality) into the problem that you didn't realise you had. But otherwise, the conclusion is correct.

10,808
Posterior very different to prior and likelihood
Bayes' theorem is
$$
p(A|B) = \frac{ p(B|A) \, p(A) }{ p(B) }
$$
Based on prior knowledge you can define a prior; it can be updated using the data; then, via Bayesian updating, you could use the posterior as a prior for the next update, and so on. Think of the prior as a way of augmenting your data with artificial data. In that case, there is nothing strange about the final result being an average of two different sets of data. This is a "feature" of Bayes' theorem: we want the prior to influence the result, and we want the end result to be somewhere "in between" the prior and the data, weighting each by how much information it provides.
In many cases you may want to run prior predictive checks before the analysis, i.e. simulate predictions with parameters sampled from the priors and compare them to the data. This helps you avoid surprises like this one.
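A sketch of such a prior predictive check, assuming the running example's prior N(1.5, 0.4²) and observation model N($\mu$, 0.4²) (assumptions for illustration; the thread never fixes a concrete model):

```python
# Draw mu from the prior, then fake data given mu, and ask how often
# the prior-predictive distribution produces data as extreme as 6.1.
import numpy as np

rng = np.random.default_rng(1)

mu = rng.normal(1.5, 0.4, size=100_000)   # parameter draws from the prior
y_sim = rng.normal(mu, 0.4)               # prior predictive draws

frac = np.mean(y_sim >= 6.1)
print(frac)  # essentially 0: the prior would be very surprised by 6.1
```

Seeing this before fitting tells you immediately that data near 6.1 would put you in the prior-data conflict regime the whole thread is about.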

10,809
How to deal with an SVM with categorical attributes
If you are sure the categorical attribute is actually ordinal, just treat it as a numerical attribute.
If not, use a coding trick to turn it into numerical attributes. Following the suggestion of the libsvm authors, one can simply use 1-of-K coding. For instance, suppose a one-dimensional categorical attribute takes values in $\{A,B,C\}$. Turn it into three-dimensional numbers such that $A = (1,0,0)$, $B = (0,1,0)$, $C = (0,0,1)$. This does add extra dimensions to your problem, but I think that is not a serious issue for modern SVM solvers (whether linear or kernel).
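A minimal sketch of the 1-of-K coding described above (plain Python; the category order is arbitrary, but it must be fixed before encoding so train and test data agree):

```python
def one_hot(values, categories):
    """Map each categorical value to a 0/1 indicator vector."""
    index = {c: i for i, c in enumerate(categories)}
    encoded = []
    for v in values:
        row = [0] * len(categories)
        row[index[v]] = 1
        encoded.append(row)
    return encoded

print(one_hot(["A", "C", "B"], ["A", "B", "C"]))
# [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```

In practice a library encoder (e.g. scikit-learn's OneHotEncoder) does the same thing while also handling unseen categories and sparse output.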

10,810
Comparison of ranked lists
Summary
I share my thoughts in the Details section. I think they are useful in identifying what we really want to achieve.
I think that the main problem here is that you haven't defined what rank similarity means. Therefore, no one knows which method of measuring the difference between the ranks is better.
Effectively, this leaves us choosing a method ambiguously, based on guesses.
What I really suggest is to first define a mathematical optimization objective. Only then will we be sure that we really know what we want.
Unless we do that, we really don't know what we want. We might almost know what we want, but almost knowing $\ne$ knowing.
My text in Details is essentially a step towards reaching a mathematical definition of rank similarity. Once we nail this, we can confidently move forward to choose the best method of measuring such similarity.
Details
Based on one of your comments:
"The objective is to see if the two groups' rankings differ" (Peter Flom).
To answer this while strictly interpreting the objective:
The ranks are different if there exists an item $i \in \{1,2,\ldots,25\}$ such that $a_i \ne b_i$, where $a_i$ is the rank of item $i$ by group $a$ and $b_i$ is the rank of the same item by group $b$.
Else, the ranks are not different.
But I don't think that you really want that strict interpretation. Therefore, I think what you really meant to say is:
How different are the ranks of groups $a$ and $b$?
One solution here is simply to measure the minimum edit distance, i.e. the minimum number of edits that need to be performed on the ranked list of group $a$ so that it becomes identical to that of group $b$.
An edit could be defined as swapping two items, and it costs $n$ points depending on how many hops are needed. So if item $1$ needs to be swapped with item $3$ (in order to make the ranks of groups $a$ and $b$ identical), then the cost of this edit is $3$.
But is this method suitable? To answer this, let's look at it a bit deeper:
It's not normalized. If we say that the distance between the ranks of groups $a,b$ is $3$, while the distance between the ranks of groups $c,d$ is $123$, it doesn't necessarily mean that $a,b$ are more similar to each other than $c,d$ are (it could also mean that $c,d$ ranked a much larger set of items).
It assumes that the cost of each edit is linear with respect to number of hops. Is this true for our application domain? Could it be that a logistic relationship is more suitable? Or an exponential one?
It assumes that all items are equally important. E.g. disagreement in ranking item (say) $1$ is treated identically to disagreement in ranking item (say) $5$. Is this true in your domain? For example, if we are ranking books, is disagreeing on the ranking of a famous book such as TAOCP equally important to disagreeing on the ranking of a terrible book such as TAOUP?
Once we address the points above, and reach a suitable measure of similarity between two ranks, we will then need to ask more interesting questions, such as:
What is the probability of observing such differences, or more extreme differences, if the difference between the groups $a$ and $b$ was only due to random chance?
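As a concrete starting point, here is a minimal sketch of one such edit distance. If we restrict edits to swaps of adjacent ranks (a simplification of the hop-cost scheme above, not the exact scheme itself), the minimum number of edits equals the number of item pairs the two groups order differently — the Kendall tau distance — which is both easy to compute and easy to normalize. The function names here are illustrative:

```python
from itertools import combinations

def kendall_distance(a, b):
    """Minimum number of adjacent-rank swaps needed to turn ranking a into b.

    a[i] and b[i] are the ranks that groups A and B assign to item i.
    Equals the number of item pairs the two groups order differently.
    """
    return sum(
        1
        for i, j in combinations(range(len(a)), 2)
        if (a[i] - a[j]) * (b[i] - b[j]) < 0
    )

def normalized_kendall_distance(a, b):
    """Scale to [0, 1] so rankings over different numbers of items are comparable."""
    n = len(a)
    return kendall_distance(a, b) / (n * (n - 1) / 2)
```

For identical rankings the distance is $0$; for completely reversed rankings it is $n(n-1)/2$, i.e. $1$ after normalization — which addresses the normalization concern above, though not the linear-cost or equal-importance ones.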
|
10,811
|
Comparison of ranked lists
|
Warning: it's a great question and I don't know the answer, so this is really more of a "what I would do if I had to":
In this problem there are lots of degrees of freedom and lots of comparisons one can do, but with limited data it's really a matter of aggregating data efficiently. If you don't know what test to run, you can always "invent" one using permutations:
First we define two functions:
Voting function: how to score the rankings so we can combine all the rankings of a single group. For example, you could assign 1 point to the top ranked item, and 0 to all others. You'd be losing a lot of information though, so maybe it's better to use something like: top ranked item gets 1 point, second ranked 2 points, etc.
Comparison function: How to compare two aggregated scores between two groups. Since both will be a vector, taking a suitable norm of the difference would work.
Now do the following:
First compute a test statistic: compute the average score using the voting function for each item across the two groups; this should lead to two vectors of size 25.
Then compare the two outcomes using the comparison function; this will be your test statistic.
The problem is that we don't know the distribution of the test statistic under the null that both groups are the same. But if they are the same, we could randomly shuffle observations between groups.
Thus, we can combine the data of two groups, shuffle/permute them, pick the first $n_1$ (number of observations in original group A) observations for group A and the rest for group B. Now compute the test statistic for this sample using the preceding two steps.
Repeat the process around 1000 times, and now use the permutation test statistics as an empirical null distribution. This will allow you to compute a p-value; don't forget to make a nice histogram and draw a line for your test statistic like so:
Now of course it is all about choosing the right voting and comparison functions to get good power. That really depends on your goal and intuition, but I think my second suggestion for voting function and the $l_1$ norm are good places to start. Note that these choices can and do make a big difference. The above plot was using the $l_1$ norm and this is the same data with an $l_2$ norm:
But depending on the setting, I expect there can be a lot of intrinsic randomness and you'll need a fairly large sample size for a catch-all method to work. If you have prior knowledge about specific things you think might be different between the two groups (say specific items), then use that to tailor your two functions. (Of course, the usual "do this before you run the test and don't cherry-pick designs till you get something significant" applies.)
PS shoot me a message if you are interested in my (messy) code. It's a bit too long to add here but I'd be happy to upload it.
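A minimal stdlib sketch of this recipe — not the code mentioned in the PS — assuming the second voting function (rank $=$ points) and the $l_1$ comparison; each element of a group is one respondent's ranking, stored as a list where position $i$ holds the rank given to item $i$:

```python
import random

def vote(group):
    """Voting function: average rank each group assigns to every item."""
    n_items = len(group[0])
    return [sum(ranking[i] for ranking in group) / len(group)
            for i in range(n_items)]

def l1(u, v):
    """Comparison function: l1 norm of the difference of two score vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

def permutation_pvalue(group_a, group_b, n_perm=1000, seed=0):
    """Share of label-shuffled statistics at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = l1(vote(group_a), vote(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # shuffle group labels under the null
        stat = l1(vote(pooled[:n_a]), vote(pooled[n_a:]))
        if stat >= observed:
            hits += 1
    return hits / n_perm
```

Swapping in a different voting or comparison function only means replacing `vote` or `l1`; the permutation machinery stays the same.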
|
10,812
|
Comparison of ranked lists
|
This sounds like the 'Wilcoxon signed-rank test' (wikipedia link). Assuming that the values of your ranks are from the same set (i.e. [1, 25]) then this is a paired-difference test (with the null hypothesis being that these pairs were picked randomly). NB this is a dis-similarity score!
There are both R and Python implementations linked to in that wiki page.
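For intuition, the statistic itself is simple to compute; here is a stdlib sketch that ignores tied absolute differences (which real implementations such as `scipy.stats.wilcoxon` and R's `wilcox.test` handle by averaging ranks, along with the p-value computation):

```python
def signed_rank_statistic(a, b):
    """Wilcoxon signed-rank W: rank the absolute paired differences,
    then take the smaller of the positive-rank and negative-rank sums.
    Zero differences are dropped; ties in |d| are not averaged here."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    # rank absolute differences from 1 (smallest) to n (largest)
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0] * len(diffs)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

A small $W$ means the paired differences are lopsided in one direction, i.e. the two rankings disagree systematically rather than symmetrically.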
|
10,813
|
Comparison of ranked lists
|
In "Sequential rank agreement methods for comparison of ranked lists" Ekstrøm et al. discuss this in detail (including a survey of existing techniques circa 2015) while introducing a new measure called "sequential rank agreement". It's available on arxiv at: https://arxiv.org/pdf/1508.06803.pdf. The abstract says it better than I could:
The comparison of alternative rankings of a set of items is a general and prominent task in applied statistics. Predictor variables are ranked according to magnitude of association with an outcome, prediction models rank subjects according to the personalized risk of an event, and genetic studies rank genes according to their difference in gene expression levels. This article constructs measures of the agreement of two or more ordered lists. We use the standard deviation of the ranks to define a measure of agreement that both provides an intuitive interpretation and can be applied to any number of lists even if some or all are incomplete or censored. The approach can identify change-points in the agreement of the lists and the sequential changes of agreement as a function of the depth of the lists can be compared graphically to a permutation based reference set. The usefulness of these tools are illustrated using gene rankings, and using data from two Danish ovarian cancer studies where we assess the within and between agreement of different statistical classification methods.
As stated in many of the other answers, each of these techniques will provide a different summary of those differences and the selection of which is most appropriate for your application is ... well, ... application specific.
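To make the abstract's central quantity concrete, here is a toy sketch assuming complete lists (the function name is illustrative, and this is not the paper's sequential version, which tracks the quantity as a function of list depth):

```python
from statistics import pstdev

def rank_agreement(lists):
    """Per-item standard deviation of ranks across several ranked lists.

    lists[k][i] is the rank that list k assigns to item i.
    Returns one value per item; 0 everywhere means perfect agreement.
    """
    n_items = len(lists[0])
    return [pstdev(ranking[i] for ranking in lists) for i in range(n_items)]
```

Items with a large standard deviation are exactly the ones the lists disagree about, which is what makes the measure interpretable.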
|
10,814
|
Is there a Random Forest implementation that works well with very sparse data?
|
No, there is no RF implementation for sparse data in R. Partially because RF does not fit very well on this type of problem -- bagging and suboptimal selection of splits may waste most of the model insight on zero-only areas.
Try some kernel method, or, better, think of converting your data into some richer representation with suitable descriptors (or use some dimensionality reduction method).
|
10,815
|
Is there a Random Forest implementation that works well with very sparse data?
|
Actually, yes there is.
It's xgboost, which is made for eXtreme gradient boosting. This is currently the package of choice for running models with sparse matrices in R for a lot of folks, and as the link above explains, you can use it for Random Forest by tweaking the parameters!
|
10,816
|
Is there a Random Forest implementation that works well with very sparse data?
|
The R package "Ranger" should do.
https://cran.r-project.org/web/packages/ranger/ranger.pdf
A fast implementation of Random Forests, particularly suited for high dimensional data.
Compared with randomForest, this package is probably the fastest RF implementation I have seen. It treats categorical variables in a native way.
|
10,817
|
Is there a Random Forest implementation that works well with very sparse data?
|
There is a blog called Quick-R that should help you with the basics of R.
R works with packages. Each package can do something different. There is a package called "randomForest" that should be just what you are asking for.
Be aware that sparse data will give problems no matter what method you apply. To my knowledge it is a very open problem and data mining in general is more an art than a science. Random forests do very well overall but they are not always the best method. You may want to try out a neural network with a lot of layers, that might help.
|
10,818
|
Have the reports of the death of the t-test been greatly exaggerated?
|
I wouldn't say the classic one sample (including paired) and two-sample equal variance t-tests are exactly obsolete, but there's a plethora of alternatives that have excellent properties and in many cases they should be used.
Nor would I say the ability to rapidly perform Wilcoxon-Mann-Whitney tests on large samples – or even permutation tests – is recent, I was doing both routinely more than 30 years ago as a student, and the capability to do so had been available for a long time at that point.
While it's vastly easier to code a permutation test – even from scratch – than it once was$^\dagger$, it wasn't difficult even then (if you had code to do it once, modifications to do it under different circumstances – different statistics, different data, etc – were straightforward, generally not requiring a background in programming).
So here are some alternatives, and why they can help:
Welch-Satterthwaite – when you're not confident variances will be close to equal (if sample sizes are the same, the equal variance assumption is not critical)
Wilcoxon-Mann-Whitney – Excellent if tails are normal or heavier than normal, particularly under cases that are close to symmetric. If tails tend to be close to normal a permutation test on the means will offer slightly more power.
robustified t-tests – there are a variety of these that have good power at the normal but also work well (and retain good power) under heavier tailed or somewhat skew alternatives.
GLMs – useful for counts or continuous right skew cases (e.g. gamma) for example; designed to deal with situations where variance is related to mean.
random effects or time-series models may be useful in cases where there's particular forms of dependence
Bayesian approaches, bootstrapping and a plethora of other important techniques which can offer similar advantages to the above ideas. For example, with a Bayesian approach it's quite possible to have a model that can account for a contaminating process, deal with counts or skewed data, and handle particular forms of dependence, all at the same time.
While a plethora of handy alternatives exist, the old stock standard equal variance two-sample t-test can often perform well in large, equal-size samples as long as the population isn't very far from normal (such as being very heavy tailed/skew) and we have near-independence.
The alternatives are useful in a host of situations where we might not be as confident with the plain t-test… and nevertheless generally perform well when the assumptions of the t-test are met or close to being met.
The Welch is a sensible default if the distribution tends not to stray too far from normal (with larger samples allowing more leeway).
While the permutation test is excellent, with no loss of power compared to the t-test when its assumptions hold (and the useful benefit of giving inference directly about the quantity of interest), the Wilcoxon-Mann-Whitney is arguably a better choice if tails may be heavy; with a minor additional assumption, the WMW can give conclusions that relate to mean-shift. (There are other reasons one might prefer it to the permutation test)
[If you know you're dealing with say counts, or waiting times or similar kinds of data, the GLM route is often sensible. If you know a little about potential forms of dependence, that, too is readily handled, and the potential for dependence should be considered.]
So while the t-test surely won't be a thing of the past, you can nearly always do just as well or almost as well when it applies, and potentially gain a great deal when it doesn't by enlisting one of the alternatives. Which is to say, I broadly agree with the sentiment in that post relating to the t-test… much of the time you should probably think about your assumptions before even collecting the data, and if any of them may not be really expected to hold up, with the t-test there's usually almost nothing to lose in simply not making that assumption since the alternatives usually work very well.
If one is going to the great trouble of collecting data there's certainly no reason not to invest a little time sincerely considering the best way to approach your inferences.
Note that I generally advise against explicit testing of assumptions – not only does it answer the wrong question, but doing so and then choosing an analysis based on the rejection or non-rejection of the assumption impacts the properties of both choices of test. Unless you can reasonably safely make the assumption (either because you know about the process well enough that you can assume it, or because the procedure is not sensitive to it in your circumstances), generally speaking you're better off using the procedure that doesn't assume it.
$\dagger$ Nowadays, it's so simple as to be trivial. Here's a complete-enumeration permutation test and also a test based on sampling the permutation distribution (with replacement) for a two-sample comparison of means in R:
# set up some data
x <- c(53.4, 59.0, 40.4, 51.9, 43.8, 43.0, 57.6)
y <- c(49.1, 57.9, 74.8, 46.8, 48.8, 43.7)
xyv <- stack(list(x=x,y=y))$values
nx <- length(x)
# do sample-x mean for all combinations for permutation test
permmean = combn(xyv,nx,mean)
# do the equivalent resampling for a randomization test
randmean <- replicate(100000,mean(sample(xyv,nx)))
# find p-value for permutation test
left = mean(permmean<=mean(x))
# for the other tail, "at least as extreme" being as far above as the sample
# was below
right = mean(permmean>=(mean(xyv)*2-mean(x)))
pvalue_perm = left+right
"Permutation test p-value"; pvalue_perm
# this is easier:
# pvalue = mean(abs(permmean-mean(xyv))>=abs(mean(x)-mean(xyv)))
# but I'd keep left and right above for adapting to other tests
# find p-value for randomization test
left = mean(randmean<=mean(x))
right = mean(randmean>=(mean(xyv)*2-mean(x)))
pvalue_rand = left+right
"Randomization test p-value"; pvalue_rand
(The resulting p-values are 0.538 and 0.539 respectively; the corresponding ordinary two sample t-test has a p-value of 0.504 and the Welch-Satterthwaite t-test has a p-value of 0.522.)
Note that the code for the calculations is in each case 1 line for the combinations for the permutation test and the p-value could also be done in 1 line.
Adapting this to a function which carried out a permutation test or randomization test and produced output rather like a t-test would be a trivial matter.
Here's a display of the results:
# Draw a display to show distn & p-value region for both
opar <- par(no.readonly = TRUE)  # snapshot settings so par(opar) restores cleanly
par(mfrow=c(2,1))
hist(permmean, n=100, xlim=c(45,58))
abline(v=mean(x), col=3)
abline(v=mean(xyv)*2-mean(x), col=3, lty=2)
abline(v=mean(xyv), col=4)
hist(randmean, n=100, xlim=c(45,58))
abline(v=mean(x), col=3)
abline(v=mean(xyv)*2-mean(x), col=3, lty=2)
abline(v=mean(xyv), col=4)
par(opar)
|
Note that I generally advise against explicit testing of assumptions – not only does it answer the wrong question, but doing so and then choosing an analysis based on the rejection or non-rejection of the assumption impacts the properties of both choices of test. Unless you can reasonably safely make the assumption (either because you know the process well enough to assume it, or because the procedure is not sensitive to it in your circumstances), generally speaking you're better off using the procedure that doesn't assume it.
$\dagger$ Nowadays, it's so simple as to be trivial. Here's a complete-enumeration permutation test and also a test based on sampling the permutation distribution (with replacement) for a two-sample comparison of means in R:
# set up some data
x <- c(53.4, 59.0, 40.4, 51.9, 43.8, 43.0, 57.6)
y <- c(49.1, 57.9, 74.8, 46.8, 48.8, 43.7)
xyv <- stack(list(x=x,y=y))$values
nx <- length(x)
# do sample-x mean for all combinations for permutation test
permmean = combn(xyv,nx,mean)
# do the equivalent resampling for a randomization test
randmean <- replicate(100000,mean(sample(xyv,nx)))
# find p-value for permutation test
left = mean(permmean<=mean(x))
# for the other tail, "at least as extreme" being as far above as the sample
# was below
right = mean(permmean>=(mean(xyv)*2-mean(x)))
pvalue_perm = left+right
"Permutation test p-value"; pvalue_perm
# this is easier:
# pvalue = mean(abs(permmean-mean(xyv))>=abs(mean(x)-mean(xyv)))
# but I'd keep left and right above for adapting to other tests
# find p-value for randomization test
left = mean(randmean<=mean(x))
right = mean(randmean>=(mean(xyv)*2-mean(x)))
pvalue_rand = left+right
"Randomization test p-value"; pvalue_rand
(The resulting p-values are 0.538 and 0.539 respectively; the corresponding ordinary two sample t-test has a p-value of 0.504 and the Welch-Satterthwaite t-test has a p-value of 0.522.)
Note that the code for the calculations is in each case 1 line for the combinations for the permutation test and the p-value could also be done in 1 line.
Adapting this to a function which carried out a permutation test or randomization test and produced output rather like a t-test would be a trivial matter.
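As an illustration of how trivially it adapts, here is the same complete-enumeration test packaged as a function – sketched in Python for variety, with the same data and the same left/right tail logic as the R code above:

```python
from itertools import combinations
from statistics import mean

def perm_test_means(x, y):
    """Two-sided complete-enumeration permutation test for a difference
    in means, mirroring the R code above: the left tail counts sample-x
    means at or below the observed one, the right tail counts means at
    least as far above the grand mean as the observed one was below."""
    pooled = list(x) + list(y)
    grand = mean(pooled)
    permmean = [mean(c) for c in combinations(pooled, len(x))]
    left = sum(m <= mean(x) for m in permmean) / len(permmean)
    right = sum(m >= 2 * grand - mean(x) for m in permmean) / len(permmean)
    return left + right

x = [53.4, 59.0, 40.4, 51.9, 43.8, 43.0, 57.6]
y = [49.1, 57.9, 74.8, 46.8, 48.8, 43.7]
p = perm_test_means(x, y)   # about 0.538, matching the R result
```

Since both implementations enumerate all 1716 combinations, the p-values agree up to rounding.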
Here's a display of the results:
# Draw a display to show distn & p-value region for both
opar <- par()
par(mfrow=c(2,1))
hist(permmean, n=100, xlim=c(45,58))
abline(v=mean(x), col=3)
abline(v=mean(xyv)*2-mean(x), col=3, lty=2)
abline(v=mean(xyv), col=4)
hist(randmean, n=100, xlim=c(45,58))
abline(v=mean(x), col=3)
abline(v=mean(xyv)*2-mean(x), col=3, lty=2)
abline(v=mean(xyv), col=4)
par(opar)
|
10,819
|
What is the problem with using R-squared in time series models?
|
Some aspects of the issue:
If somebody gives us a vector of numbers $\mathbf y$ and a conformable matrix of numbers $\mathbf X$, we do not need to know what is the relation between them to execute some estimation algebra, treating $y$ as the dependent variable. The algebra will go through irrespective of whether these numbers represent cross-sectional, time series, or panel data, or of whether the matrix $\mathbf X$ contains lagged values of $y$, etc.
The fundamental definition of the coefficient of determination $R^2$ is
$$R^2 = 1 - \frac {SS_{res}}{SS_{tot}}$$
where $SS_{res}$ is the sum of squared residuals from some estimation procedure, and $SS_{tot}$ is the sum of squared deviations of the dependent variable from its sample mean.
Combining, the $R^2$ will always be uniquely calculated, for a specific data sample, a specific formulation of the relation between the variables, and a specific estimation procedure, subject only to the condition that the estimation procedure is such that it provides point estimates of the unknown quantities involved (and hence point estimates of the dependent variable, and hence point estimates of the residuals). If any of these three aspects change, the arithmetic value of $R^2$ will in general change -but this holds for any type of data, not just time-series.
So the issue with $R^2$ and time-series, is not whether it is "unique" or not (since most estimation procedures for time-series data provide point estimates). The issue is whether the "usual" time series specification framework is technically friendly for the $R^2$, and whether $R^2$ provides some useful information.
The interpretation of $R^2$ as "proportion of dependent variable variance explained" depends critically on the residuals adding up to zero. In the context of linear regression (on whatever kind of data), and of Ordinary Least Squares estimation, this is guaranteed only if the specification includes a constant term in the regressor matrix (a "drift" in time-series terminology). In autoregressive time-series models, a drift is in many cases not included.
More generally, when we are faced with time-series data, "automatically" we start thinking about how the time-series will evolve into the future. So we tend to evaluate a time-series model based more on how well it predicts future values, than how well it fits past values. But the $R^2$ mainly reflects the latter, not the former. The well-known fact that $R^2$ is non-decreasing in the number of regressors means that we can obtain a perfect fit by simply adding more regressors (any regressors, i.e. any series of numbers, perhaps totally unrelated conceptually to the dependent variable). Experience shows that a perfect fit obtained thus will also give abysmal predictions outside the sample.
Intuitively, this perhaps counter-intuitive trade-off happens because by capturing the whole variability of the dependent variable into an estimated equation, we turn unsystematic variability into systematic one, as regards prediction (here, "unsystematic" should be understood relative to our knowledge -from a purely deterministic philosophical point of view, there is no such thing as "unsystematic variability". But to the degree that our limited knowledge forces us to treat some variability as "unsystematic", then the attempt to nevertheless turn it into a systematic component, brings prediction disaster).
In fact this is perhaps the most convincing way to show somebody why $R^2$ should not be the main diagnostic/evaluation tool when dealing with time series: increase the number of regressors up to a point where $R^2\approx 1$. Then take the estimated equation and try to predict the future values of the dependent variable.
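The "perfect fit from junk regressors" point is easy to demonstrate. The sketch below (plain Python, no modelling libraries; the sequential Gram-Schmidt fitting is just a hand-rolled stand-in for OLS on the regressors added so far) regresses pure noise on successively more random regressors and watches $R^2$ climb to 1:

```python
import random

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def r2_path(y, k, rng):
    """In-sample R^2 after adding each of k random regressors (plus an
    intercept). Each new regressor is orthogonalised against the earlier
    ones, so projecting the residuals onto it reproduces OLS on the span
    of all regressors added so far."""
    n = len(y)
    ybar = sum(y) / n
    resid = [v - ybar for v in y]            # residuals after the intercept
    sstot = dot(resid, resid)
    basis, path = [], []
    for _ in range(k):
        z = [rng.gauss(0, 1) for _ in range(n)]
        zbar = sum(z) / n
        z = [v - zbar for v in z]            # centre: intercept already fitted
        for b in basis:                      # orthogonalise against the others
            c = dot(z, b) / dot(b, b)
            z = [zi - c * bi for zi, bi in zip(z, b)]
        if dot(z, z) > 1e-12:
            basis.append(z)
            c = dot(resid, z) / dot(z, z)    # project residuals onto z
            resid = [ri - c * zi for ri, zi in zip(resid, z)]
        path.append(1 - dot(resid, resid) / sstot)
    return path

rng = random.Random(1)
y = [rng.gauss(50, 5) for _ in range(12)]    # 12 "observations": pure noise
path = r2_path(y, 11, rng)                   # R^2 never decreases; with
                                             # n - 1 regressors it reaches ~1
```

The regressors are totally unrelated to $y$ by construction, yet the in-sample fit becomes perfect – and, of course, such a "model" would predict nothing.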
|
10,820
|
What is the problem with using R-squared in time series models?
|
Some extra comments to the post above.
When dealing with time series, the R squared (or adjusted R^2) will generally be greater if the explanatory variables are not differenced. However, when it comes to out-of-time fit, the error will be significantly higher for the non-differenced series. This happens because of trends present in the data and is a well-known issue, but it is a good illustration of why this measure should probably be last on the list when choosing the most appropriate time-series model.
|
10,821
|
Intraclass correlation (ICC) for an interaction?
|
The R model formula
lmer(measurement ~ 1 + (1 | subject) + (1 | site), mydata)
fits the model
$$ Y_{ijk} = \beta_0 + \eta_{i} + \theta_{j} + \varepsilon_{ijk} $$
where $Y_{ijk}$ is the $k$'th measurement from subject $i$ at site $j$, $\eta_{i}$ is the subject $i$ random effect, $\theta_{j}$ is the site $j$ random effect and $\varepsilon_{ijk}$ is the leftover error. These random effects have variances $\sigma^{2}_{\eta}, \sigma^{2}_{\theta}, \sigma^{2}_{\varepsilon}$ that are estimated by the model. (Note that if subject is nested within site, you would traditionally write $\theta_{ij}$ here instead of $\theta_{j}$).
To answer your first question regarding how to calculate the ICCs: under this model, the ICCs are the proportion of the total variation explained by the respective blocking factor. In particular, the correlation between two randomly selected observations on the same subject is:
$$ {\rm ICC}({\rm Subject}) = \frac{\sigma^{2}_{\eta}}{\sigma^{2}_{\eta}+ \sigma^{2}_{\theta}+\sigma^{2}_{\varepsilon}}$$
The correlation between two randomly selected observations from the same site is:
$$ {\rm ICC}({\rm Site}) = \frac{\sigma^{2}_{\theta}}{\sigma^{2}_{\eta}+ \sigma^{2}_{\theta}+\sigma^{2}_{\varepsilon}}$$
The correlation between two randomly selected observations on the same individual, and at the same site (the so-called interaction ICC) is:
$$ {\rm ICC}({\rm Subject/Site \ Interaction}) = \frac{\sigma^{2}_{\eta}+\sigma^{2}_{\theta}}{\sigma^{2}_{\eta}+ \sigma^{2}_{\theta}+\sigma^{2}_{\varepsilon}}$$
It seems you were confused by this being referred to as an "interaction" since it's the sum of individual terms. It's an "interaction" in the sense that it estimates the ${\rm ICC}$ corresponding to the blocking factor composed of the combination of Subject and Site - it's important to note that you do not have to include some kind of "interaction" term between the factors to estimate this quantity.
Each of these quantities can be estimated by plugging in the estimates of these variances that come out of the model fitting.
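For instance, with hypothetical variance-component estimates (the numbers below are made up, standing in for what lmer would report), the plug-in calculation is just:

```python
# Hypothetical variance-component estimates, standing in for lmer output
var_subject, var_site, var_resid = 4.0, 1.0, 5.0
total = var_subject + var_site + var_resid

icc_subject = var_subject / total                    # 0.4
icc_site = var_site / total                          # 0.1
icc_interaction = (var_subject + var_site) / total   # 0.5

# The "interaction" ICC is just the sum of the other two
assert abs(icc_interaction - (icc_subject + icc_site)) < 1e-12
```

This also makes the point in the text explicit: no extra "interaction" variance component is needed to get the interaction ICC.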
Regarding your second question - as you can see here, each ${\rm ICC}$ has a fairly clear interpretation. I would argue that the interaction ${\rm ICC}$ does tell us something interesting - how "similar" are measurements that share both subject and site?
One important point to note is that if subjects are nested within sites, then the Subject ${\rm ICC}$ is not meaningful in its own right, since it's impossible to share Subject and not Site. Then $\sigma^{2}_{\eta}$ becomes only a measure of how much more similar individuals are to themselves, compared to other individuals at their site.
|
10,822
|
Why does Q-Learning use epsilon-greedy during testing?
|
In the nature paper they mention:
The trained agents were evaluated by playing each game 30 times for up
to 5 min each time with different initial random conditions
(‘noop’;see Extended Data Table 1) and an e-greedy policy with epsilon 0.05.
This procedure is adopted to minimize the possibility of overfitting
during evaluation.
I think what they mean is 'to nullify the negative effects of over / under fitting'. Using epsilon of 0 is a fully exploitative (as you point out) choice and makes a strong statement.
For instance, consider a labyrinth game where the agent’s current Q-estimates are converged to the optimal policy except for one grid, where it greedily chooses to move toward a boundary that results in it remaining in the same grid. If the agent reaches any such state, and it is choosing the Max Q action, it will be stuck there for eternity. However, keeping a vaguely explorative / stochastic element in its policy (like a tiny amount of epsilon) allows it to get out of such states.
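That stuck-grid situation is easy to simulate. In the toy sketch below (hypothetical Q-values, nothing from the paper), the greedy action keeps the agent in place forever, while a 5% exploration rate escapes quickly:

```python
import random

rng = random.Random(1)
Q = [1.0, 0.5]   # hypothetical Q-values: action 0 bumps the wall (the agent
                 # stays put) yet has the higher value; action 1 escapes

def act(epsilon):
    """Epsilon-greedy choice over the two actions."""
    if rng.random() < epsilon:
        return rng.randrange(len(Q))
    return max(range(len(Q)), key=Q.__getitem__)

def steps_to_escape(epsilon, limit=1000):
    """Steps until action 1 is first taken, or None if never taken."""
    for t in range(1, limit + 1):
        if act(epsilon) == 1:
            return t
    return None

greedy_escape = steps_to_escape(0.0)    # None: pure greed is stuck forever
eps_escape = steps_to_escape(0.05)      # finite: exploration gets it out
```

With $\epsilon = 0.05$ the per-step escape probability is only 0.025, yet the agent is all but certain to be free within a few hundred steps.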
Having said that, from the code implementations I have looked at (and coded myself) in practice performance is often times measured with greedy policy for the exact reasons you list in your question.
|
10,823
|
Why does Q-Learning use epsilon-greedy during testing?
|
The answer is there in the paper itself. They used $\epsilon = 0.05$ to avoid overfitting. This model is used as a baseline. And, as yobibyte mentioned in the comment, they do random starts for the same reason. The algorithm is then evaluated for performance against a human expert. The algorithm has no model of its opponent, hence the tiny epsilon. If you had a model of your opponent, your problem would be deterministic instead of stochastic. I hope this answers your question.
|
10,824
|
Why does Q-Learning use epsilon-greedy during testing?
|
I think the purpose of testing is to get a sense of how the system responds in real-world situations.
Option 1:
They might actually put some noise in the real world play - making truly random moves. This could make $\epsilon$-policy switching perfectly reflective of actual play.
Option 2:
If they are worried about being brittle, playing against a less "pristinely rational" player, then they might be "annealing" their training scores in order to not over-estimate them.
Option 3:
This is their magic smoke. There are going to be pieces of it they can't and don't want to share. They could be publishing this in order to obscure something proprietary or exceptionally relevant for their business that they don't want to share.
Option 4:
They could use repeated tests, and various values of epsilon, to test how much "fat" is left in the system. If they had weak randomization, or so many samples that even a fair randomization starts repeating itself, then the method could "learn" an untrue behavior due to pseudo-random bias. This might allow checking for that in the testing phase.
I'm sure there are a half-dozen other meaningful reasons, but these were what I could think of.
EDIT: note to self, I really like the "brittle" thought. I think it may be an existential weakness of first-gen intermediate AI.
|
10,825
|
Why does Q-Learning use epsilon-greedy during testing?
|
The reason for using $\epsilon$-greedy during testing is that, unlike in supervised machine learning (for example image classification), in reinforcement learning there is no unseen, held-out data set available for the test phase. This means the algorithm is tested on the very same setup that it has been trained on. Now the paper mentions (section Methods, Evaluation procedure):
The trained agents were evaluated by playing each game
30 times for up to 5 min each time with different initial random conditions (‘no-
op’; see Extended Data Table 1) and an $\epsilon$-greedy policy with $\epsilon = 0.05$. This procedure is adopted to minimize the possibility of overfitting during evaluation.
Especially since the preprocessed input contains a history of previously encountered states the concern is that, instead of generalizing to the underlying gameplay, the agent just memorizes optimal trajectories for that specific game and replays them during the testing phase; this is what is meant by "the possibility of overfitting during evaluation". For deterministic environments this is obvious but also for stochastic state transitions memorization (i.e. overfitting) can occur. Using randomization during the test phase, in form of no-op starts of random length as well as a portion of random actions during the game, forces the algorithm to deal with unforeseen states and hence requires some degree of generalization.
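The shape of that evaluation protocol can be sketched on a toy problem (everything below is illustrative: a made-up deterministic environment, not the Atari setup):

```python
import random

rng = random.Random(0)

def step(state, action):
    """Toy deterministic environment: reward 1 iff the action matches the
    state's parity; the state advances on every action, including no-ops."""
    reward = 1.0 if action == state % 2 else 0.0
    return (state + 1) % 10, reward

def evaluate_episode(policy, epsilon=0.05, max_noops=30, max_steps=500):
    """One evaluation episode: a random-length no-op start, then an
    epsilon-greedy rollout of the given greedy policy."""
    state = 0
    for _ in range(rng.randrange(max_noops + 1)):  # random no-op start
        state, _ = step(state, 0)                  # action 0 as the "no-op"
    total = 0.0
    for _ in range(max_steps):
        a = rng.randrange(2) if rng.random() < epsilon else policy(state)
        state, r = step(state, a)
        total += r
    return total

optimal = lambda s: s % 2                          # the true greedy policy
score_greedy = evaluate_episode(optimal, epsilon=0.0)   # exactly 500.0
score_eps = evaluate_episode(optimal, epsilon=0.05)     # typically a bit less
```

The small score penalty from $\epsilon = 0.05$ is the price paid for forcing the agent into states a memorised trajectory would never visit.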
On the other hand $\epsilon$-greedy is not used for potentially improving the performance of the algorithm by helping it get unstuck in poorly trained regions of the observation space. Although a given policy can always only be considered an approximation of the optimal policy (for these kinds of tasks at least), they have trained well beyond the point where the algorithm would perform nonsensical actions. Using $\epsilon = 0$ during testing would potentially improve the performance but the point here is to show the ability to generalize. Furthermore in most of the Atari games the state also evolves on a no-op and so the agent would naturally get "unstuck" if that ever happened. Considering the elsewhere-mentioned labyrinth example where the environment doesn't evolve on no-ops, the agent would quickly learn that running into a wall isn't a good idea if the reward is shaped properly (-1 for each step, for example); especially when using optimistic initial values the required exploration happens naturally. If you still find your algorithm getting stuck in some situations, this means you need to increase the training time (i.e. run more episodes), rather than introduce some auxiliary randomization with respect to the actions.
If you are however running in an environment with evolving system dynamics (that is the underlying state transitions or rewards change over time) then you must retain some degree of exploration and update your policy accordingly in order to keep up with the changes.
|
Why does Q-Learning use epsilon-greedy during testing?
|
The reason for using $\epsilon$-greedy during testing is that, unlike in supervised machine learning (for example image classification), in reinforcement learning there is no unseen, held-out data set
|
Why does Q-Learning use epsilon-greedy during testing?
The reason for using $\epsilon$-greedy during testing is that, unlike in supervised machine learning (for example image classification), in reinforcement learning there is no unseen, held-out data set available for the test phase. This means the algorithm is tested on the very same setup that it has been trained on. Now the paper mentions (section Methods, Evaluation procedure):
The trained agents were evaluated by playing each game 30 times for up to 5 min each time with different initial random conditions (‘no-op’; see Extended Data Table 1) and an $\epsilon$-greedy policy with $\epsilon = 0.05$. This procedure is adopted to minimize the possibility of overfitting during evaluation.
Especially since the preprocessed input contains a history of previously encountered states, the concern is that, instead of generalizing to the underlying gameplay, the agent just memorizes optimal trajectories for that specific game and replays them during the testing phase; this is what is meant by "the possibility of overfitting during evaluation". For deterministic environments this is obvious, but memorization (i.e. overfitting) can also occur with stochastic state transitions. Using randomization during the test phase, in the form of no-op starts of random length as well as a portion of random actions during the game, forces the algorithm to deal with unforeseen states and hence requires some degree of generalization.
On the other hand, $\epsilon$-greedy is not used for potentially improving the performance of the algorithm by helping it get unstuck in poorly trained regions of the observation space. Although a given policy can always only be considered an approximation of the optimal policy (for these kinds of tasks at least), they have trained well beyond the point where the algorithm would perform nonsensical actions. Using $\epsilon = 0$ during testing would potentially improve the performance, but the point here is to show the ability to generalize. Furthermore, in most of the Atari games the state also evolves on a no-op, so the agent would naturally get "unstuck" if that ever happened. Considering the labyrinth example mentioned elsewhere, where the environment doesn't evolve on no-ops, the agent would quickly learn that running into a wall isn't a good idea if the reward is shaped properly (-1 for each step, for example); especially when using optimistic initial values, the required exploration happens naturally. If you still find your algorithm getting stuck in some situations, this means you need to increase the training time (i.e. run more episodes) instead of introducing some auxiliary randomization with respect to the actions.
If, however, you are running in an environment with evolving system dynamics (that is, the underlying state transitions or rewards change over time), then you must retain some degree of exploration and update your policy accordingly in order to keep up with the changes.
|
Why does Q-Learning use epsilon-greedy during testing?
The reason for using $\epsilon$-greedy during testing is that, unlike in supervised machine learning (for example image classification), in reinforcement learning there is no unseen, held-out data set
|
10,826
|
Bridge penalty vs. Elastic Net regularization
|
How bridge regression and elastic net differ is a fascinating question, given their similar-looking penalties. Here's one possible approach. Suppose we solve the bridge regression problem. We can then ask how the elastic net solution would differ. Looking at the gradients of the two loss functions can tell us something about this.
Bridge regression
Say $X$ is a matrix containing values of the independent variable ($n$ points x $d$ dimensions), $y$ is a vector containing values of the dependent variable, and $w$ is the weight vector.
The loss function penalizes the $\ell_q$ norm of the weights, with magnitude $\lambda_b$:
$$
L_b(w)
= \| y - Xw\|_2^2
+ \lambda_b \|w\|_q^q
$$
The gradient of the loss function is:
$$
\nabla_w L_b(w)
= -2 X^T (y - Xw)
+ \lambda_b q |w|^{\circ(q-1)} \text{sgn}(w)
$$
$v^{\circ c}$ denotes the Hadamard (i.e. element-wise) power, which gives a vector whose $i$th element is $v_i^c$. $\text{sgn}(w)$ is the sign function (applied to each element of $w$). The gradient may be undefined at zero for some values of $q$.
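The gradient above can be sketched in a few lines of numpy (the function name and example parameters are illustrative):

```python
import numpy as np

def bridge_gradient(X, y, w, lam_b=1.0, q=1.4):
    """Gradient of L_b(w) = ||y - Xw||_2^2 + lam_b * ||w||_q^q.

    The penalty term is the element-wise (Hadamard) power |w|^(q-1)
    times sgn(w); it may be undefined at w_i = 0 for q <= 1."""
    residual_term = -2.0 * X.T @ (y - X @ w)
    penalty_term = lam_b * q * np.abs(w) ** (q - 1) * np.sign(w)
    return residual_term + penalty_term
```

A quick finite-difference check against the loss itself confirms the two terms.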
Elastic net
The loss function is:
$$
L_e(w)
= \|y - Xw\|_2^2
+ \lambda_1 \|w\|_1
+ \lambda_2 \|w\|_2^2
$$
This penalizes the $\ell_1$ norm of the weights with magnitude $\lambda_1$ and the $\ell_2$ norm with magnitude $\lambda_2$. The elastic net paper calls minimizing this loss function the 'naive elastic net' because it doubly shrinks the weights. They describe an improved procedure where the weights are later rescaled to compensate for the double shrinkage, but I'm just going to analyze the naive version. That's a caveat to keep in mind.
The gradient of the loss function is:
$$
\nabla_w L_e(w)
= -2 X^T (y - Xw)
+ \lambda_1 \text{sgn}(w)
+ 2 \lambda_2 w
$$
The gradient is undefined at zero when $\lambda_1 > 0$ because the absolute value in the $\ell_1$ penalty isn't differentiable there.
Approach
Say we select weights $w^*$ that solve the bridge regression problem. This means the bridge regression gradient is zero at this point:
$$
\nabla_w L_b(w^*)
= -2 X^T (y - Xw^*)
+ \lambda_b q |w^*|^{\circ (q-1)} \text{sgn}(w^*)
= \vec{0}
$$
Therefore:
$$
2 X^T (y - Xw^*)
= \lambda_b q |w^*|^{\circ (q-1)} \text{sgn}(w^*)
$$
We can substitute this into the elastic net gradient, to get an expression for the elastic net gradient at $w^*$. Fortunately, it no longer depends directly on the data:
$$
\nabla_w L_e(w^*)
= \lambda_1 \text{sgn}(w^*)
+ 2 \lambda_2 w^*
-\lambda_b q |w^*|^{\circ (q-1)} \text{sgn}(w^*)
$$
Looking at the elastic net gradient at $w^*$ tells us: Given that bridge regression has converged to weights $w^*$, how would the elastic net want to change these weights?
It gives us the local direction and magnitude of the desired change, because the gradient points in the direction of steepest ascent and the loss function will decrease as we move in the direction opposite to the gradient. The gradient might not point directly toward the elastic net solution. But, because the elastic net loss function is convex, the local direction/magnitude gives some information about how the elastic net solution will differ from the bridge regression solution.
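A small numpy sketch of this data-free expression (the function name is illustrative, not from the original analysis):

```python
import numpy as np

def elastic_net_gradient_at_bridge_solution(w_star, lam1, lam2, lam_b, q):
    """Elastic net gradient at a bridge solution w*, using the substitution
    2 X^T (y - X w*) = lam_b * q * |w*|^(q-1) * sgn(w*) so the data drops out."""
    w_star = np.asarray(w_star, dtype=float)
    s = np.sign(w_star)
    return lam1 * s + 2.0 * lam2 * w_star - lam_b * q * np.abs(w_star) ** (q - 1) * s
```

With the Case 1 parameters below it reduces to pure shrinkage $2 \lambda_2 w^*$, and with the Case 2 parameters it vanishes near $|w^*_j| \approx 0.25$ and $|w^*_j| \approx 1.31$, matching the crossing points discussed later.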
Case 1: Sanity check
($\lambda_b = 0, \lambda_1 = 0, \lambda_2 = 1$). Bridge regression in this case is equivalent to ordinary least squares (OLS), because the penalty magnitude is zero. The elastic net is equivalent to ridge regression, because only the $\ell_2$ norm is penalized. The following plots show different bridge regression solutions and how the elastic net gradient behaves for each.
Left plot: Elastic net gradient vs. bridge regression weight along each dimension
The x axis represents one component of a set of weights $w^*$ selected by bridge regression. The y axis represents the corresponding component of the elastic net gradient, evaluated at $w^*$. Note that the weights are multidimensional, but we're just looking at the weights/gradient along a single dimension.
Right plot: Elastic net changes to bridge regression weights (2d)
Each point represents a set of 2d weights $w^*$ selected by bridge regression. For each choice of $w^*$, a vector is plotted pointing in the direction opposite the elastic net gradient, with magnitude proportional to that of the gradient. That is, the plotted vectors show how the elastic net wants to change the bridge regression solution.
These plots show that, compared to bridge regression (OLS in this case), elastic net (ridge regression in this case) wants to shrink weights toward zero. The desired amount of shrinkage increases with the magnitude of the weights. If the weights are zero, the solutions are the same. The interpretation is that we want to move in the direction opposite to the gradient to reduce the loss function. For example, say bridge regression converged to a positive value for one of the weights. The elastic net gradient is positive at this point, so elastic net wants to decrease this weight. If using gradient descent, we'd take steps proportional in size to the gradient (of course, we can't technically use gradient descent to solve the elastic net because of the non-differentiability at zero, but subgradient descent would give numerically similar results).
Case 2: Matching bridge & elastic net
($q = 1.4, \lambda_b = 1, \lambda_1 = 0.629, \lambda_2 = 0.355$). I chose the bridge penalty parameters to match the example from the question. I chose the elastic net parameters to give the best matching elastic net penalty. Here, best-matching means, given a particular distribution of weights, we find the elastic net penalty parameters that minimize the expected squared difference between the bridge and elastic net penalties:
$$
\min_{\lambda_1, \lambda_2} \enspace
E \left [ (
\lambda_1 \|w\|_1 + \lambda_2 \|w\|_2^2
- \lambda_b \|w\|_q^q
)^2 \right ]
$$
Here, I considered weights with all entries drawn i.i.d. from the uniform distribution on $[-2, 2]$ (i.e. within a hypercube centered at the origin). The best-matching elastic net parameters were similar for 2 to 1000 dimensions. Although they don't appear to be sensitive to the dimensionality, the best-matching parameters do depend on the scale of the distribution.
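Because the elastic net penalty is linear in $\lambda_1$ and $\lambda_2$, this minimization is an ordinary least-squares problem over sampled weight vectors. A Monte Carlo sketch under the stated uniform-on-$[-2,2]$ assumption (function name and sample size are my own choices):

```python
import numpy as np

def best_matching_elastic_net(lam_b=1.0, q=1.4, d=2, n_samples=100_000, seed=0):
    """Fit (lam1, lam2) minimizing
    E[(lam1*||w||_1 + lam2*||w||_2^2 - lam_b*||w||_q^q)^2]
    for w ~ Uniform([-2, 2]^d), via linear least squares on Monte Carlo samples."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-2.0, 2.0, size=(n_samples, d))
    A = np.column_stack([np.abs(W).sum(axis=1),    # ||w||_1
                         (W ** 2).sum(axis=1)])    # ||w||_2^2
    b = lam_b * (np.abs(W) ** q).sum(axis=1)       # lam_b * ||w||_q^q
    (lam1, lam2), *_ = np.linalg.lstsq(A, b, rcond=None)
    return lam1, lam2
```

For $q = 1.4$, $\lambda_b = 1$ this reproduces values close to the $(0.629, 0.355)$ quoted above.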
Penalty surface
Here's a contour plot of the total penalty imposed by bridge regression ($q=1.4, \lambda_b=100$) and best-matching elastic net ($\lambda_1 = 0.629, \lambda_2 = 0.355$) as a function of the weights (for the 2d case):
Gradient behavior
We can see the following:
Let $w^*_j$ be the chosen bridge regression weight along dimension $j$.
If $|w^*_j|< 0.25$, elastic net wants to shrink the weight toward zero.
If $|w^*_j| \approx 0.25$, the bridge regression and elastic net solutions are the same. But, elastic net wants to move away if the weight differs even slightly.
If $0.25 < |w^*_j| < 1.31$, elastic net wants to grow the weight.
If $|w^*_j| \approx 1.31$, the bridge regression and elastic net solutions are the same. Elastic net wants to move toward this point from nearby weights.
If $|w^*_j| > 1.31$, elastic net wants to shrink the weight.
The results are qualitatively similar if we change the value of $q$ and/or $\lambda_b$ and find the corresponding best $\lambda_1, \lambda_2$. The points where the bridge and elastic net solutions coincide change slightly, but the behavior of the gradients is otherwise similar.
Case 3: Mismatched bridge & elastic net
$(q=1.8, \lambda_b=1, \lambda_1=0.765, \lambda_2 = 0.225)$. In this regime, bridge regression behaves similarly to ridge regression. I found the best-matching $\lambda_1, \lambda_2$, but then swapped them so that the elastic net behaves more like lasso ($\ell_1$ penalty greater than $\ell_2$ penalty).
Relative to bridge regression, elastic net wants to shrink small weights toward zero and increase larger weights. There's a single set of weights in each quadrant where the bridge regression and elastic net solutions coincide, but elastic net wants to move away from this point if the weights differ even slightly.
$(q=1.2, \lambda_b=1, \lambda_1=173, \lambda_2 = 0.816)$. In this regime, the bridge penalty is more similar to an $\ell_1$ penalty (although bridge regression may not produce sparse solutions with $q > 1$, as mentioned in the elastic net paper). I found the best-matching $\lambda_1, \lambda_2$, but then swapped them so that the elastic net behaves more like ridge regression ($\ell_2$ penalty greater than $\ell_1$ penalty).
Relative to bridge regression, elastic net wants to grow small weights and shrink larger weights. There's a point in each quadrant where the bridge regression and elastic net solutions coincide, and elastic net wants to move toward these weights from neighboring points.
|
Bridge penalty vs. Elastic Net regularization
|
How bridge regression and elastic net differ is a fascinating question, given their similar-looking penalties. Here's one possible approach. Suppose we solve the bridge regression problem. We can then
|
Bridge penalty vs. Elastic Net regularization
How bridge regression and elastic net differ is a fascinating question, given their similar-looking penalties. Here's one possible approach. Suppose we solve the bridge regression problem. We can then ask how the elastic net solution would differ. Looking at the gradients of the two loss functions can tell us something about this.
Bridge regression
Say $X$ is a matrix containing values of the independent variable ($n$ points x $d$ dimensions), $y$ is a vector containing values of the dependent variable, and $w$ is the weight vector.
The loss function penalizes the $\ell_q$ norm of the weights, with magnitude $\lambda_b$:
$$
L_b(w)
= \| y - Xw\|_2^2
+ \lambda_b \|w\|_q^q
$$
The gradient of the loss function is:
$$
\nabla_w L_b(w)
= -2 X^T (y - Xw)
+ \lambda_b q |w|^{\circ(q-1)} \text{sgn}(w)
$$
$v^{\circ c}$ denotes the Hadamard (i.e. element-wise) power, which gives a vector whose $i$th element is $v_i^c$. $\text{sgn}(w)$ is the sign function (applied to each element of $w$). The gradient may be undefined at zero for some values of $q$.
Elastic net
The loss function is:
$$
L_e(w)
= \|y - Xw\|_2^2
+ \lambda_1 \|w\|_1
+ \lambda_2 \|w\|_2^2
$$
This penalizes the $\ell_1$ norm of the weights with magnitude $\lambda_1$ and the $\ell_2$ norm with magnitude $\lambda_2$. The elastic net paper calls minimizing this loss function the 'naive elastic net' because it doubly shrinks the weights. They describe an improved procedure where the weights are later rescaled to compensate for the double shrinkage, but I'm just going to analyze the naive version. That's a caveat to keep in mind.
The gradient of the loss function is:
$$
\nabla_w L_e(w)
= -2 X^T (y - Xw)
+ \lambda_1 \text{sgn}(w)
+ 2 \lambda_2 w
$$
The gradient is undefined at zero when $\lambda_1 > 0$ because the absolute value in the $\ell_1$ penalty isn't differentiable there.
Approach
Say we select weights $w^*$ that solve the bridge regression problem. This means the bridge regression gradient is zero at this point:
$$
\nabla_w L_b(w^*)
= -2 X^T (y - Xw^*)
+ \lambda_b q |w^*|^{\circ (q-1)} \text{sgn}(w^*)
= \vec{0}
$$
Therefore:
$$
2 X^T (y - Xw^*)
= \lambda_b q |w^*|^{\circ (q-1)} \text{sgn}(w^*)
$$
We can substitute this into the elastic net gradient, to get an expression for the elastic net gradient at $w^*$. Fortunately, it no longer depends directly on the data:
$$
\nabla_w L_e(w^*)
= \lambda_1 \text{sgn}(w^*)
+ 2 \lambda_2 w^*
-\lambda_b q |w^*|^{\circ (q-1)} \text{sgn}(w^*)
$$
Looking at the elastic net gradient at $w^*$ tells us: Given that bridge regression has converged to weights $w^*$, how would the elastic net want to change these weights?
It gives us the local direction and magnitude of the desired change, because the gradient points in the direction of steepest ascent and the loss function will decrease as we move in the direction opposite to the gradient. The gradient might not point directly toward the elastic net solution. But, because the elastic net loss function is convex, the local direction/magnitude gives some information about how the elastic net solution will differ from the bridge regression solution.
Case 1: Sanity check
($\lambda_b = 0, \lambda_1 = 0, \lambda_2 = 1$). Bridge regression in this case is equivalent to ordinary least squares (OLS), because the penalty magnitude is zero. The elastic net is equivalent to ridge regression, because only the $\ell_2$ norm is penalized. The following plots show different bridge regression solutions and how the elastic net gradient behaves for each.
Left plot: Elastic net gradient vs. bridge regression weight along each dimension
The x axis represents one component of a set of weights $w^*$ selected by bridge regression. The y axis represents the corresponding component of the elastic net gradient, evaluated at $w^*$. Note that the weights are multidimensional, but we're just looking at the weights/gradient along a single dimension.
Right plot: Elastic net changes to bridge regression weights (2d)
Each point represents a set of 2d weights $w^*$ selected by bridge regression. For each choice of $w^*$, a vector is plotted pointing in the direction opposite the elastic net gradient, with magnitude proportional to that of the gradient. That is, the plotted vectors show how the elastic net wants to change the bridge regression solution.
These plots show that, compared to bridge regression (OLS in this case), elastic net (ridge regression in this case) wants to shrink weights toward zero. The desired amount of shrinkage increases with the magnitude of the weights. If the weights are zero, the solutions are the same. The interpretation is that we want to move in the direction opposite to the gradient to reduce the loss function. For example, say bridge regression converged to a positive value for one of the weights. The elastic net gradient is positive at this point, so elastic net wants to decrease this weight. If using gradient descent, we'd take steps proportional in size to the gradient (of course, we can't technically use gradient descent to solve the elastic net because of the non-differentiability at zero, but subgradient descent would give numerically similar results).
Case 2: Matching bridge & elastic net
($q = 1.4, \lambda_b = 1, \lambda_1 = 0.629, \lambda_2 = 0.355$). I chose the bridge penalty parameters to match the example from the question. I chose the elastic net parameters to give the best matching elastic net penalty. Here, best-matching means, given a particular distribution of weights, we find the elastic net penalty parameters that minimize the expected squared difference between the bridge and elastic net penalties:
$$
\min_{\lambda_1, \lambda_2} \enspace
E \left [ (
\lambda_1 \|w\|_1 + \lambda_2 \|w\|_2^2
- \lambda_b \|w\|_q^q
)^2 \right ]
$$
Here, I considered weights with all entries drawn i.i.d. from the uniform distribution on $[-2, 2]$ (i.e. within a hypercube centered at the origin). The best-matching elastic net parameters were similar for 2 to 1000 dimensions. Although they don't appear to be sensitive to the dimensionality, the best-matching parameters do depend on the scale of the distribution.
Penalty surface
Here's a contour plot of the total penalty imposed by bridge regression ($q=1.4, \lambda_b=100$) and best-matching elastic net ($\lambda_1 = 0.629, \lambda_2 = 0.355$) as a function of the weights (for the 2d case):
Gradient behavior
We can see the following:
Let $w^*_j$ be the chosen bridge regression weight along dimension $j$.
If $|w^*_j|< 0.25$, elastic net wants to shrink the weight toward zero.
If $|w^*_j| \approx 0.25$, the bridge regression and elastic net solutions are the same. But, elastic net wants to move away if the weight differs even slightly.
If $0.25 < |w^*_j| < 1.31$, elastic net wants to grow the weight.
If $|w^*_j| \approx 1.31$, the bridge regression and elastic net solutions are the same. Elastic net wants to move toward this point from nearby weights.
If $|w^*_j| > 1.31$, elastic net wants to shrink the weight.
The results are qualitatively similar if we change the value of $q$ and/or $\lambda_b$ and find the corresponding best $\lambda_1, \lambda_2$. The points where the bridge and elastic net solutions coincide change slightly, but the behavior of the gradients is otherwise similar.
Case 3: Mismatched bridge & elastic net
$(q=1.8, \lambda_b=1, \lambda_1=0.765, \lambda_2 = 0.225)$. In this regime, bridge regression behaves similarly to ridge regression. I found the best-matching $\lambda_1, \lambda_2$, but then swapped them so that the elastic net behaves more like lasso ($\ell_1$ penalty greater than $\ell_2$ penalty).
Relative to bridge regression, elastic net wants to shrink small weights toward zero and increase larger weights. There's a single set of weights in each quadrant where the bridge regression and elastic net solutions coincide, but elastic net wants to move away from this point if the weights differ even slightly.
$(q=1.2, \lambda_b=1, \lambda_1=173, \lambda_2 = 0.816)$. In this regime, the bridge penalty is more similar to an $\ell_1$ penalty (although bridge regression may not produce sparse solutions with $q > 1$, as mentioned in the elastic net paper). I found the best-matching $\lambda_1, \lambda_2$, but then swapped them so that the elastic net behaves more like ridge regression ($\ell_2$ penalty greater than $\ell_1$ penalty).
Relative to bridge regression, elastic net wants to grow small weights and shrink larger weights. There's a point in each quadrant where the bridge regression and elastic net solutions coincide, and elastic net wants to move toward these weights from neighboring points.
|
Bridge penalty vs. Elastic Net regularization
How bridge regression and elastic net differ is a fascinating question, given their similar-looking penalties. Here's one possible approach. Suppose we solve the bridge regression problem. We can then
|
10,827
|
Can the mean squared error be used for classification?
|
Many classifiers can predict continuous scores. Often, continuous scores are intermediate results that are only converted to class labels (usually by thresholding) as the very last step of the classification. In other cases, posterior probabilities for class membership can be calculated (e.g. in discriminant analysis or logistic regression).
You can calculate the MSE using these continuous scores rather than the class labels. The advantage of that is that you avoid the loss of information due to the dichotomization.
When the continuous score is a probability, the MSE metric is called Brier's score.
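A minimal sketch of the Brier score, i.e. MSE computed on predicted probabilities against 0/1 labels (the function name is illustrative):

```python
def brier_score(probabilities, labels):
    """Mean squared error between predicted probabilities and 0/1 class labels."""
    n = len(probabilities)
    return sum((p - y) ** 2 for p, y in zip(probabilities, labels)) / n
```

A perfectly confident and correct classifier scores 0, while predicting 0.5 everywhere scores 0.25.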
However, there are also classification problems that are rather regression problems in disguise. In my field that could e.g. be classifying cases according to whether the concentration of some substance exceeds a legal limit or not (which is a binary/discriminative two-class problem). Here, MSE is a natural choice due to the underlying regression nature of the task.
In this paper we explain it as part of a more general framework:
C. Beleites, R. Salzer and V. Sergo:
Validation of Soft Classification Models using Partial Class Memberships: An Extended Concept of Sensitivity & Co. applied to Grading of Astrocytoma Tissues
Chemom. Intell. Lab. Syst., 122 (2013), 12 - 22.
How to compute it: if you work in R, one implementation is in package "softclassval", http://softclassval.r-forge.r-project.org.
|
Can the mean squared error be used for classification?
|
Many classifiers can predict continuous scores. Often, continuous scores are intermediate results that are only converted to class labels (usually by threshold) as the very last step of the classifica
|
Can the mean squared error be used for classification?
Many classifiers can predict continuous scores. Often, continuous scores are intermediate results that are only converted to class labels (usually by thresholding) as the very last step of the classification. In other cases, posterior probabilities for class membership can be calculated (e.g. in discriminant analysis or logistic regression).
You can calculate the MSE using these continuous scores rather than the class labels. The advantage of that is that you avoid the loss of information due to the dichotomization.
When the continuous score is a probability, the MSE metric is called Brier's score.
However, there are also classification problems that are rather regression problems in disguise. In my field that could e.g. be classifying cases according to whether the concentration of some substance exceeds a legal limit or not (which is a binary/discriminative two-class problem). Here, MSE is a natural choice due to the underlying regression nature of the task.
In this paper we explain it as part of a more general framework:
C. Beleites, R. Salzer and V. Sergo:
Validation of Soft Classification Models using Partial Class Memberships: An Extended Concept of Sensitivity & Co. applied to Grading of Astrocytoma Tissues
Chemom. Intell. Lab. Syst., 122 (2013), 12 - 22.
How to compute it: if you work in R, one implementation is in package "softclassval", http://softclassval.r-forge.r-project.org.
|
Can the mean squared error be used for classification?
Many classifiers can predict continuous scores. Often, continuous scores are intermediate results that are only converted to class labels (usually by threshold) as the very last step of the classifica
|
10,828
|
Can the mean squared error be used for classification?
|
For probability estimates $\hat{\pi}$ you would want to compute not MSE (the negative log likelihood of a Normal random variable) but instead use the likelihood of a Bernoulli random variable
$L=\prod_i \hat{\pi}_i^{y_i} (1-\hat{\pi}_i)^{1-y_i}$
This likelihood is for a binary response, which is assumed to have a Bernoulli distribution.
If you take the log of $L$ and then negate, you get the logistic loss, which is sort of the analog of MSE for when you have a binary response. In particular, MSE is the negative log likelihood for a continuous response assumed to have a normal distribution.
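Negating the log of $L$ gives the logistic loss (log loss); a minimal sketch with an illustrative function name:

```python
import math

def log_loss(pi_hat, y):
    """Negative log-likelihood of Bernoulli observations y in {0, 1}
    given predicted probabilities pi_hat."""
    # In practice probabilities are clipped away from 0 and 1 to avoid log(0).
    return -sum(yi * math.log(p) + (1 - yi) * math.log(1.0 - p)
                for p, yi in zip(pi_hat, y))
```

The closer the predicted probabilities are to the observed labels, the closer the loss is to zero.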
|
Can the mean squared error be used for classification?
|
For probability estimates $\hat{\pi}$ you would want to compute not MSE (the log likelihood of a Normal random variable) but instead use likelihood of a Bernoulli random variable
$L=\prod_i \hat{\pi}_i
|
Can the mean squared error be used for classification?
For probability estimates $\hat{\pi}$ you would want to compute not MSE (the negative log likelihood of a Normal random variable) but instead use the likelihood of a Bernoulli random variable
$L=\prod_i \hat{\pi}_i^{y_i} (1-\hat{\pi}_i)^{1-y_i}$
This likelihood is for a binary response, which is assumed to have a Bernoulli distribution.
If you take the log of $L$ and then negate, you get the logistic loss, which is sort of the analog of MSE for when you have a binary response. In particular, MSE is the negative log likelihood for a continuous response assumed to have a normal distribution.
|
Can the mean squared error be used for classification?
For probability estimates $\hat{\pi}$ you would want to compute not MSE (the log likelihood of a Normal random variable) but instead use likelihood of a Bernoulli random variable
$L=\prod_i \hat{\pi}_i
|
10,829
|
Can the mean squared error be used for classification?
|
Technically you can, but MSE composed with a sigmoid output is non-convex in the model parameters for binary classification. Thus, if a binary classification model is trained with the MSE cost function, gradient-based optimization is not guaranteed to find the global minimum of the cost. Also, using MSE as a cost function corresponds to assuming Gaussian-distributed errors, which is not the case for binary labels.
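A toy numeric check of the non-convexity claim (the setup, a one-parameter logistic model on a single positive example, is my own illustration): for a convex function, the value at a midpoint must lie at or below the chord between the endpoints, and that fails here.

```python
import math

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

def mse_loss(w):
    """Squared error of a one-parameter logistic model on a single y = 1 example."""
    return (1.0 - sigmoid(w)) ** 2

# Convexity requires f((a+b)/2) <= (f(a) + f(b)) / 2 for all a, b;
# this chord condition is violated, so the loss is not convex in w.
a, b = -8.0, 0.0
assert mse_loss((a + b) / 2) > (mse_loss(a) + mse_loss(b)) / 2
```

By contrast, the log loss of the same model is convex in $w$, which is one reason it is preferred for training.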
|
Can the mean squared error be used for classification?
|
Technically you can, but the MSE function is non-convex for binary classification. Thus, if a binary classification model is trained with MSE Cost function, it is not guaranteed to minimize the Cost f
|
Can the mean squared error be used for classification?
Technically you can, but MSE composed with a sigmoid output is non-convex in the model parameters for binary classification. Thus, if a binary classification model is trained with the MSE cost function, gradient-based optimization is not guaranteed to find the global minimum of the cost. Also, using MSE as a cost function corresponds to assuming Gaussian-distributed errors, which is not the case for binary labels.
|
Can the mean squared error be used for classification?
Technically you can, but the MSE function is non-convex for binary classification. Thus, if a binary classification model is trained with MSE Cost function, it is not guaranteed to minimize the Cost f
|
10,830
|
Can the mean squared error be used for classification?
|
I don't quite see how... successful classification is a binary variable (correct or not), so it is difficult to see what you would square.
Generally classifications are measured on indicators such as percentage correct, when a classification rule that has been estimated from a training set is applied to a testing set that was set aside earlier.
Mean square error can certainly be (and is) calculated for forecasts or predicted values of continuous variables, but I think not for classifications.
|
Can the mean squared error be used for classification?
|
I don't quite see how... successful classification is a binary variable (correct or not), so it is difficult to see what you would square.
Generally classifications are measured on indicators such as
|
Can the mean squared error be used for classification?
I don't quite see how... successful classification is a binary variable (correct or not), so it is difficult to see what you would square.
Generally classifications are measured on indicators such as percentage correct, when a classification rule that has been estimated from a training set is applied to a testing set that was set aside earlier.
Mean square error can certainly be (and is) calculated for forecasts or predicted values of continuous variables, but I think not for classifications.
|
Can the mean squared error be used for classification?
I don't quite see how... successful classification is a binary variable (correct or not), so it is difficult to see what you would square.
Generally classifications are measured on indicators such as
|
10,831
|
What is the distribution of the ratio of two Poisson random variables?
|
I think you're going to have a problem with that. Because variable Y will have zeros with positive probability, X/Y will have some undefined values, so you won't get a distribution.
|
What is the distribution of the ratio of two Poisson random variables?
|
I think you're going to have a problem with that. Because variable Y will have zeros with positive probability, X/Y will have some undefined values, so you won't get a distribution.
|
What is the distribution of the ratio of two Poisson random variables?
I think you're going to have a problem with that. Because variable Y will have zeros with positive probability, X/Y will have some undefined values, so you won't get a distribution.
|
What is the distribution of the ratio of two Poisson random variables?
I think you're going to have a problem with that. Because variable Y will have zeros with positive probability, X/Y will have some undefined values, so you won't get a distribution.
|
10,832
|
What is the distribution of the ratio of two Poisson random variables?
|
Since the ratio $X/Y$ is not a well-defined random variable (the event $Y = 0$ has positive probability), we instead define its distribution through a properly measurable event:
$$
\mathbb{P}\left[\frac{X}{Y} \leq r \right] := \mathbb{P}\left[X \leq r Y\right]\\
= \sum_{y = 0}^\infty \sum_{x=0}^{\left\lfloor ry \right\rfloor} \frac{\lambda_{2}^y }{y!}e^{-\lambda_2} \frac{\lambda_{1}^x }{x!}e^{-\lambda_1}
$$
where the summation holds as long as $r > 0$, and $X$ and $Y$ are independent Poisson variables. The density follows from the Radon-Nikodym theorem.
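The double sum can be evaluated numerically; a small Python sketch (the function names and the truncation parameter `y_max` are illustrative choices, not part of the derivation):

```python
import math

def poisson_pmf(k, lam):
    # Evaluate in log space to stay stable for large k.
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def ratio_cdf(r, lam1, lam2, y_max=100):
    """P[X <= r*Y] for independent X ~ Pois(lam1), Y ~ Pois(lam2),
    truncating the outer sum at y_max."""
    total = 0.0
    for y in range(y_max + 1):
        inner = sum(poisson_pmf(x, lam1) for x in range(math.floor(r * y) + 1))
        total += poisson_pmf(y, lam2) * inner
    return total
```

Note that at $r = 0$ the definition reduces to $\mathbb{P}[X = 0] = e^{-\lambda_1}$, which makes the handling of the $Y = 0$ issue explicit.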
|
What is the distribution of the ratio of two Poisson random variables?
|
By realizing that the ratio is in fact not a well defined measurable set, we redefine the ratio as a properly measurable set
$$
\mathbb{P}\left[\frac{X}{Y} \leq r \right] := \mathbb{P}\left[X \leq r Y
|
What is the distribution of the ratio of two Poisson random variables?
Since the ratio $X/Y$ is not a well-defined random variable (the event $Y = 0$ has positive probability), we instead define its distribution through a properly measurable event:
$$
\mathbb{P}\left[\frac{X}{Y} \leq r \right] := \mathbb{P}\left[X \leq r Y\right]\\
= \sum_{y = 0}^\infty \sum_{x=0}^{\left\lfloor ry \right\rfloor} \frac{\lambda_{2}^y }{y!}e^{-\lambda_2} \frac{\lambda_{1}^x }{x!}e^{-\lambda_1}
$$
where the summation holds as long as $r > 0$, and $X$ and $Y$ are independent Poisson variables. The density follows from the Radon-Nikodym theorem.
|
What is the distribution of the ratio of two Poisson random variables?
By realizing that the ratio is in fact not a well defined measurable set, we redefine the ratio as a properly measurable set
$$
\mathbb{P}\left[\frac{X}{Y} \leq r \right] := \mathbb{P}\left[X \leq r Y
|
10,833
|
t-SNE versus MDS
|
PCA selects influential dimensions by eigenanalysis of the N data points themselves, while MDS selects influential dimensions by eigenanalysis of the $N^2$ entries of a pairwise distance matrix. This has the effect of highlighting the deviations from uniformity in the distribution. Considering the distance matrix as analogous to a stress tensor, MDS may be deemed a "force-directed" layout algorithm, the execution complexity of which is $\mathcal O(dN^a)$ where $3 < a \leq 4$.
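The "eigenanalysis of the pairwise distance matrix" can be made concrete with classical (Torgerson) MDS; a minimal numpy sketch with illustrative names:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS from an n x n distance matrix D:
    eigen-decompose the double-centered matrix B = -1/2 * J D^2 J,
    with J = I - 11^T/n, and keep the top-k eigenpairs as coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    eigvals, eigvecs = np.linalg.eigh(B)    # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:k]     # pick the k largest
    scales = np.sqrt(np.clip(eigvals[idx], 0.0, None))
    return eigvecs[:, idx] * scales         # n x k embedding
```

When D comes from points in a k-dimensional Euclidean space, this recovers the configuration exactly, up to rotation and reflection.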
t-SNE, on the other hand, uses a field approximation to execute a somewhat different form of force-directed layout, typically via Barnes-Hut, which reduces a $\mathcal O(dN^2)$ gradient-based complexity to $\mathcal O(dN\cdot \log(N))$, but the convergence properties are less well-understood for this iterative stochastic approximation method (to the best of my knowledge), and for $2 \leq d \leq 4$ the typical observed run-times are generally longer than those of other dimension-reduction methods. The results are often more visually interpretable than naive eigenanalysis, and depending on the distribution, often more intuitive than MDS results, which tend to preserve global structure at the expense of local structure retained by t-SNE.
MDS is already a simplification of kernel PCA, and should be extensible with alternate kernels, while kernel t-SNE is described in work by Gilbrecht, Hammer, Schulz, Mokbel, Lueks et al. I am not practically familiar with it, but perhaps another respondent may be.
I tend to select between MDS and t-SNE on the basis of contextual goals. Whichever elucidates the structure which I am interested in highlighting, whichever structure has the greater explanatory power, that is the algorithm I use. This can be considered a pitfall, as it is a form of researcher degree-of-freedom. But freedom used wisely is not such a bad thing.
|
t-SNE versus MDS
|
PCA selects influential dimensions by eigenanalysis of the N data points themselves, while MDS selects influential dimensions by eigenanalysis of the $N^2$ data points of a pairwise distance matrix. T
|
t-SNE versus MDS
PCA selects influential dimensions by eigenanalysis of the $N$ data points themselves, while MDS selects influential dimensions by eigenanalysis of the $N^2$ entries of a pairwise distance matrix. This has the effect of highlighting the deviations from uniformity in the distribution. Considering the distance matrix as analogous to a stress tensor, MDS may be deemed a "force-directed" layout algorithm, the execution complexity of which is $\mathcal O(dN^a)$ where $3 < a \leq 4$.
t-SNE, on the other hand, uses a field approximation to execute a somewhat different form of force-directed layout, typically via Barnes-Hut, which reduces a $\mathcal O(dN^2)$ gradient-based complexity to $\mathcal O(dN\cdot \log(N))$, but the convergence properties are less well-understood for this iterative stochastic approximation method (to the best of my knowledge), and for $2 \leq d \leq 4$ the typical observed run-times are generally longer than those of other dimension-reduction methods. The results are often more visually interpretable than naive eigenanalysis, and depending on the distribution, often more intuitive than MDS results, which tend to preserve global structure at the expense of local structure retained by t-SNE.
MDS is already a simplification of kernel PCA, and should be extensible with alternate kernels, while kernel t-SNE is described in work by Gilbrecht, Hammer, Schulz, Mokbel, Lueks et al. I am not practically familiar with it, but perhaps another respondent may be.
I tend to select between MDS and t-SNE on the basis of contextual goals. Whichever elucidates the structure which I am interested in highlighting, whichever structure has the greater explanatory power, that is the algorithm I use. This can be considered a pitfall, as it is a form of researcher degree-of-freedom. But freedom used wisely is not such a bad thing.
|
t-SNE versus MDS
PCA selects influential dimensions by eigenanalysis of the N data points themselves, while MDS selects influential dimensions by eigenanalysis of the $N^2$ data points of a pairwise distance matrix. T
|
10,834
|
Estimating the most important features in a k-means cluster partition
|
One way to quantify the usefulness of each feature (= variable = dimension), from the book
Burns, Robert P., and Richard Burns. Business research methods and statistics using SPSS. Sage, 2008. (mirror), usefulness being defined by the features' discriminative power to tell clusters apart.
We usually examine the means for each cluster on each dimension using
ANOVA to assess how distinct our clusters are. Ideally, we would
obtain significantly different means for most, if not all dimensions,
used in the analysis. The magnitude of the F values performed on each
dimension is an indication of how well the respective dimension
discriminates between clusters.
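The F-value screening described above is easy to compute once cluster labels are available. A minimal Python/numpy sketch (the two-cluster toy data, feature names, and separation are made up for illustration):

```python
import numpy as np

def anova_f(x, labels):
    # one-way ANOVA F statistic of a single feature x across cluster labels
    groups = [x[labels == g] for g in np.unique(labels)]
    n, k = len(x), len(groups)
    grand_mean = x.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 50)
x_informative = np.where(labels == 0, 0.0, 3.0) + rng.normal(0, 1, 100)
x_noise = rng.normal(0, 1, 100)  # no relation to the clusters
# a larger F means the feature discriminates between the clusters better
```

Ranking the features by their F values then gives the discriminative ordering the quoted passage describes.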
Another way would be to remove a specific feature and see how this impacts internal quality indices. Unlike the first solution, you would have to redo the clustering for each feature (or set of features) you want to analyze.
FYI:
Can a useless feature negatively impact the clustering?
Can the choice of the measurement units of the features impact the clustering?
Why vector normalization can improve the accuracy of clustering and classification?
What are the most commonly used ways to perform feature selection for k-means clustering?
|
Estimating the most important features in a k-means cluster partition
|
One way to quantify the usefulness of each feature (= variable = dimension), from the book
Burns, Robert P., and Richard Burns. Business research methods and statistics using SPSS. Sage, 2008. (mirror
|
Estimating the most important features in a k-means cluster partition
One way to quantify the usefulness of each feature (= variable = dimension), from the book
Burns, Robert P., and Richard Burns. Business research methods and statistics using SPSS. Sage, 2008. (mirror), usefulness being defined by the features' discriminative power to tell clusters apart.
We usually examine the means for each cluster on each dimension using
ANOVA to assess how distinct our clusters are. Ideally, we would
obtain significantly different means for most, if not all dimensions,
used in the analysis. The magnitude of the F values performed on each
dimension is an indication of how well the respective dimension
discriminates between clusters.
Another way would be to remove a specific feature and see how this impacts internal quality indices. Unlike the first solution, you would have to redo the clustering for each feature (or set of features) you want to analyze.
FYI:
Can a useless feature negatively impact the clustering?
Can the choice of the measurement units of the features impact the clustering?
Why vector normalization can improve the accuracy of clustering and classification?
What are the most commonly used ways to perform feature selection for k-means clustering?
|
Estimating the most important features in a k-means cluster partition
One way to quantify the usefulness of each feature (= variable = dimension), from the book
Burns, Robert P., and Richard Burns. Business research methods and statistics using SPSS. Sage, 2008. (mirror
|
10,835
|
Estimating the most important features in a k-means cluster partition
|
I can think of two other possibilities that focus more on which variables are important to which clusters.
Multi-class classification. Consider the objects that belong to cluster x to be members of the same class (e.g., class 1) and the objects that belong to other clusters to be members of a second class (e.g., class 2). Train a classifier to predict class membership (e.g., class 1 vs. class 2). The classifier's variable coefficients can serve to estimate the importance of each variable in clustering objects to cluster x. Repeat this approach for all other clusters.
Intra-cluster variable similarity. For every variable, calculate the average similarity of each object to its centroid. A variable that has high similarity between a centroid and its objects is likely more important to the clustering process than a variable that has low similarity. Of course, similarity magnitude is relative, but now variables can be ranked by the degree to which they help to cluster the objects in each cluster.
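The second idea can be sketched numerically, using mean squared distance to the centroid as an inverse similarity (a Python/numpy sketch with made-up two-cluster data; a sketch under those assumptions, not a definitive implementation):

```python
import numpy as np

def per_feature_spread(X, labels):
    # for each variable: mean squared distance of objects to their cluster centroid;
    # a smaller value means higher object-centroid similarity on that variable
    spread = np.zeros(X.shape[1])
    for g in np.unique(labels):
        Xg = X[labels == g]
        spread += ((Xg - Xg.mean(axis=0)) ** 2).sum(axis=0)
    return spread / len(X)

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
tight = np.where(labels == 0, -2.0, 2.0) + rng.normal(0, 0.1, 200)  # hugs its centroid
loose = rng.normal(0, 2.0, 200)                                     # scattered
X = np.column_stack([tight, loose])
# np.argsort(per_feature_spread(X, labels)) ranks "tight" ahead of "loose"
```

As the answer notes, the magnitude is relative (and scale-dependent), so the values are only meaningful for ranking variables against each other.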
|
Estimating the most important features in a k-means cluster partition
|
I can think of two other possibilities that focus more on which variables are important to which clusters.
Multi-class classification. Consider the objects that belong to cluster x members of the sam
|
Estimating the most important features in a k-means cluster partition
I can think of two other possibilities that focus more on which variables are important to which clusters.
Multi-class classification. Consider the objects that belong to cluster x to be members of the same class (e.g., class 1) and the objects that belong to other clusters to be members of a second class (e.g., class 2). Train a classifier to predict class membership (e.g., class 1 vs. class 2). The classifier's variable coefficients can serve to estimate the importance of each variable in clustering objects to cluster x. Repeat this approach for all other clusters.
Intra-cluster variable similarity. For every variable, calculate the average similarity of each object to its centroid. A variable that has high similarity between a centroid and its objects is likely more important to the clustering process than a variable that has low similarity. Of course, similarity magnitude is relative, but now variables can be ranked by the degree to which they help to cluster the objects in each cluster.
|
Estimating the most important features in a k-means cluster partition
I can think of two other possibilities that focus more on which variables are important to which clusters.
Multi-class classification. Consider the objects that belong to cluster x members of the sam
|
10,836
|
Estimating the most important features in a k-means cluster partition
|
I faced this problem before and developed two possible methods to find the most important features responsible for each K-Means cluster sub-optimal solution.
Focusing on each centroid’s position and the dimensions responsible for the highest Within-Cluster Sum of Squares minimization
Converting the problem into classification settings (Inspired by the paper: "A Supervised Methodology to Measure the Variables Contribution to a Clustering").
I have written a detailed article here: Interpretable K-Means: Clusters Feature Importances. A GitHub link is included as well if you want to try it. Hope this helps!
|
Estimating the most important features in a k-means cluster partition
|
I faced this problem before and developed two possible methods to find the most important features responsible for each K-Means cluster sub-optimal solution.
Focusing on each centroid’s position and
|
Estimating the most important features in a k-means cluster partition
I faced this problem before and developed two possible methods to find the most important features responsible for each K-Means cluster sub-optimal solution.
Focusing on each centroid’s position and the dimensions responsible for the highest Within-Cluster Sum of Squares minimization
Converting the problem into classification settings (Inspired by the paper: "A Supervised Methodology to Measure the Variables Contribution to a Clustering").
I have written a detailed article here: Interpretable K-Means: Clusters Feature Importances. A GitHub link is included as well if you want to try it. Hope this helps!
|
Estimating the most important features in a k-means cluster partition
I faced this problem before and developed two possible methods to find the most important features responsible for each K-Means cluster sub-optimal solution.
Focusing on each centroid’s position and
|
10,837
|
Estimating the most important features in a k-means cluster partition
|
Here is a very simple method. Note that the squared Euclidean distance between two cluster centers is a sum of squared differences between the individual features. We can then just use each feature's squared difference as its weight.
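In code this is a one-liner (Python/numpy sketch with two made-up centroids):

```python
import numpy as np

center_a = np.array([0.0, 2.0, 1.0])   # hypothetical cluster centroids
center_b = np.array([0.1, 5.0, 1.0])

weights = (center_a - center_b) ** 2   # per-feature contribution
# the weights sum exactly to the squared Euclidean distance between the centers
squared_dist = np.sum((center_a - center_b) ** 2)
```

Here the second feature dominates the separation between the two centers, so it receives by far the largest weight.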
|
Estimating the most important features in a k-means cluster partition
|
Here is a very simple method. Note that the Euclidean distance between two cluster centers is a sum of square difference between individual features. We can then just use the square difference as the
|
Estimating the most important features in a k-means cluster partition
Here is a very simple method. Note that the squared Euclidean distance between two cluster centers is a sum of squared differences between the individual features. We can then just use each feature's squared difference as its weight.
|
Estimating the most important features in a k-means cluster partition
Here is a very simple method. Note that the Euclidean distance between two cluster centers is a sum of square difference between individual features. We can then just use the square difference as the
|
10,838
|
sum of noncentral Chi-square random variables
|
As Glen_b noted in the comments, if the variances are all the same you end up with a scaled noncentral chi-squared.
If not, there is a concept of a generalized chi-squared distribution, i.e. $x^T A x$ for $x \sim N(\mu, \Sigma)$ and $A$ fixed. In this case, you have the special case of diagonal $\Sigma$ ($\Sigma_{ii} = \sigma_i^2$), and $A = I$.
There has been some work on computing things with this distribution:
Imhof (1961) and Davies (1980) numerically invert the characteristic function.
Sheil and O'Muircheartaigh (1977) write the distribution as an infinite sum of central chi-squared variables.
Kuonen (1999) gives a saddlepoint approximation to the pdf/cdf.
Liu, Tang and Zhang (2009) approximate it with a noncentral chi-squared distribution based on cumulant matching.
You can also write it as a linear combination of independent noncentral chi-squared variables $Y = \sum_{i=1}^n \sigma_i^2 \left( \frac{X_i^2}{\sigma_i^2} \right)$, in which case:
Castaño-Martínez and López-Blázquez (2005) give a Laguerre expansion for the pdf/cdf.
Bausch (2013) gives a more computationally efficient algorithm for the linear combination of central chi-squareds; his work might be extensible to noncentral chi-squareds, and you might find some interesting pointers in the related work section.
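If only moments or a quick numerical check are needed, the distribution of $Y = \sum_i X_i^2$ is also straightforward to simulate. A Python/numpy sketch (the means and standard deviations are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -0.5, 2.0])      # noncentrality comes from these means
sigma = np.array([1.0, 2.0, 0.5])    # unequal variances -> generalized chi-squared

# Y = sum_i X_i^2 with X_i ~ N(mu_i, sigma_i^2), simulated directly
samples = (rng.normal(mu, sigma, size=(200_000, 3)) ** 2).sum(axis=1)

# closed-form mean for comparison: E[Y] = sum_i (mu_i^2 + sigma_i^2)
expected_mean = (mu ** 2 + sigma ** 2).sum()
```

Such a Monte Carlo reference is also a convenient way to validate an implementation of any of the approximations cited above.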
|
sum of noncentral Chi-square random variables
|
As Glen_b noted in the comments, if the variances are all the same you end up with a scaled noncentral chi-squared.
If not, there is a concept of a generalized chi-squared distribution, i.e. $x^T A x$
|
sum of noncentral Chi-square random variables
As Glen_b noted in the comments, if the variances are all the same you end up with a scaled noncentral chi-squared.
If not, there is a concept of a generalized chi-squared distribution, i.e. $x^T A x$ for $x \sim N(\mu, \Sigma)$ and $A$ fixed. In this case, you have the special case of diagonal $\Sigma$ ($\Sigma_{ii} = \sigma_i^2$), and $A = I$.
There has been some work on computing things with this distribution:
Imhof (1961) and Davies (1980) numerically invert the characteristic function.
Sheil and O'Muircheartaigh (1977) write the distribution as an infinite sum of central chi-squared variables.
Kuonen (1999) gives a saddlepoint approximation to the pdf/cdf.
Liu, Tang and Zhang (2009) approximate it with a noncentral chi-squared distribution based on cumulant matching.
You can also write it as a linear combination of independent noncentral chi-squared variables $Y = \sum_{i=1}^n \sigma_i^2 \left( \frac{X_i^2}{\sigma_i^2} \right)$, in which case:
Castaño-Martínez and López-Blázquez (2005) give a Laguerre expansion for the pdf/cdf.
Bausch (2013) gives a more computationally efficient algorithm for the linear combination of central chi-squareds; his work might be extensible to noncentral chi-squareds, and you might find some interesting pointers in the related work section.
|
sum of noncentral Chi-square random variables
As Glen_b noted in the comments, if the variances are all the same you end up with a scaled noncentral chi-squared.
If not, there is a concept of a generalized chi-squared distribution, i.e. $x^T A x$
|
10,839
|
How to calculate p-value for multivariate linear regression
|
t-test
With a t-test you standardize the estimated parameters by dividing them by their standard deviation. If the variance is an estimate then this standardized value will be distributed according to the t-distribution (otherwise, if the variance of the distribution of the errors is known, then you have a z-distribution).
Say your measurement is:
$$y_{obs} = X\beta + \epsilon \quad \text{with} \quad \epsilon \sim N(0,\sigma^2 I)$$
Then your estimate $\hat\beta$ is:
$$\begin{array}\\
\hat\beta & = & (X^TX)^{-1}X^T y_{obs} \\
& = &(X^TX)^{-1}X^T (X\beta + \epsilon) \\
& = & \beta + (X^TX)^{-1}X^T \epsilon
\end{array}$$
So your estimate $\hat\beta$ will be the true vector $\beta$ plus a term based on the error $\epsilon$. If $\epsilon \sim N(0,\sigma^2I)$ then
$$\hat\beta \sim N(\beta,(X^TX)^{-1}\sigma^2)$$
Note: I cannot make the change of the $(X^TX)^{-1}X^T$ term into $(X^TX)^{-1}$ intuitive, but to derive this you would express $\text{Var}(\hat\beta) = \text{Var}((X^TX)^{-1}X^T\epsilon) = (X^TX)^{-1}X^T \, \sigma^2I \, ((X^TX)^{-1}X^T)^T$ and eliminate some of those terms.
The unknown $\sigma^2$ will be estimated from the sum of squared residuals divided by the residual degrees of freedom $n-p$ (in a similar fashion as Bessel's correction in the corrected sample variance).
Then from this point you can pick up the expression of p-values for single hypotheses $H_j: \beta_j = 0$ as standard t-tests (although due to the possible correlation in the distribution of the different $\beta_j$, different, more powerful tests than individual t-tests could be done).
F-test
With the F-test you use the F-distribution, which describes the ratio of two chi-squared distributed variables (each scaled by its degrees of freedom). This works as a hypothesis test when we compare the variance of a model and residuals (both are chi-square distributed when we assume that a certain model parameter $\beta_j$ has no effect).
The residual term of a model has $n-p$ degrees of freedom, with $n$ the number of observations/errors and $p$ the number of parameters that are used to fit the model. You could see this intuitively as the residuals being obtained from the errors by projecting the errors onto the space orthogonal to the columns of the model matrix $X$ (this space has dimension $n-p$). A projection of a multivariate normal distributed variable is itself multivariate normal, but of lower dimension. So while you may have $n$ residuals, they are effectively $n-p$ independent residuals embedded in an $n$-dimensional space.
Now when you consider adding an extra variable to model 1, to obtain model 2, then you could analyze this by considering the errors being projected onto a smaller space. If model 2 has no effect (i.e. the added columns that make model $X_2$ from $X_1$ are just random) then one could state a null hypothesis that the reduced sums of squared residuals for model 1 and model 2 are equal. This is what is tested in an F-test (using the ratio of those reduced residuals) to obtain a p-value for the effect of changing model 1 into 2 (and you could do this for every variable $\beta_i$, where the way that you do this changes a bit; see for instance How to interpret type I, type II, and type III ANOVA and MANOVA?).
So you split the sum of squared residuals of a simple model $RSS_{simple}$ into two projections (representing independent variables if the null hypothesis is true). One part is a projection onto the (smaller) space of a full model $RSS_{full}$ and the other part is the projection onto the space spanned by the model (which can be expressed by the difference) $RSS_{simple}-RSS_{full}$. And the ratio used in the F-test is
$$F = \frac{\left(\frac{RSS_{simple}-RSS_{full}}{p_{full}-p_{simple}}\right)}{\left(\frac{RSS_{full}}{n-p_{full}}\right)}$$
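Putting the pieces together, the F statistic for adding one variable can be computed directly from the two residual sums of squares. A Python/numpy sketch with simulated data (the coefficients and sample size are made up; here $x_2$ truly has no effect, so $F$ should be unremarkable under the null):

```python
import numpy as np

def rss(X, y):
    # residual sum of squares of the least-squares fit of y on X
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(((y - X @ beta) ** 2).sum())

rng = np.random.default_rng(0)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)        # x2 plays no role in the truth

X_simple = np.column_stack([np.ones(n), x1])       # p_simple = 2
X_full = np.column_stack([np.ones(n), x1, x2])     # p_full = 3
rss_s, rss_f = rss(X_simple, y), rss(X_full, y)

F = ((rss_s - rss_f) / (3 - 2)) / (rss_f / (n - 3))
# compare F against the F(1, n - 3) distribution to obtain the p-value
```

Because the models are nested, $RSS_{full} \leq RSS_{simple}$ always holds, so $F \geq 0$ by construction.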
|
How to calculate p-value for multivariate linear regression
|
t-test
With a t-test you standardize the measured parameters by dividing by them by the variance. If the variance is an estimate then this standardized value will be distributed according to the t-dis
|
How to calculate p-value for multivariate linear regression
t-test
With a t-test you standardize the estimated parameters by dividing them by their standard deviation. If the variance is an estimate then this standardized value will be distributed according to the t-distribution (otherwise, if the variance of the distribution of the errors is known, then you have a z-distribution).
Say your measurement is:
$$y_{obs} = X\beta + \epsilon \quad \text{with} \quad \epsilon \sim N(0,\sigma^2 I)$$
Then your estimate $\hat\beta$ is:
$$\begin{array}\\
\hat\beta & = & (X^TX)^{-1}X^T y_{obs} \\
& = &(X^TX)^{-1}X^T (X\beta + \epsilon) \\
& = & \beta + (X^TX)^{-1}X^T \epsilon
\end{array}$$
So your estimate $\hat\beta$ will be the true vector $\beta$ plus a term based on the error $\epsilon$. If $\epsilon \sim N(0,\sigma^2I)$ then
$$\hat\beta \sim N(\beta,(X^TX)^{-1}\sigma^2)$$
Note: I cannot make the change of the $(X^TX)^{-1}X^T$ term into $(X^TX)^{-1}$ intuitive, but to derive this you would express $\text{Var}(\hat\beta) = \text{Var}((X^TX)^{-1}X^T\epsilon) = (X^TX)^{-1}X^T \, \sigma^2I \, ((X^TX)^{-1}X^T)^T$ and eliminate some of those terms.
The unknown $\sigma^2$ will be estimated from the sum of squared residuals divided by the residual degrees of freedom $n-p$ (in a similar fashion as Bessel's correction in the corrected sample variance).
Then from this point you can pick up the expression of p-values for single hypotheses $H_j: \beta_j = 0$ as standard t-tests (although due to the possible correlation in the distribution of the different $\beta_j$, different, more powerful tests than individual t-tests could be done).
F-test
With the F-test you use the F-distribution, which describes the ratio of two chi-squared distributed variables (each scaled by its degrees of freedom). This works as a hypothesis test when we compare the variance of a model and residuals (both are chi-square distributed when we assume that a certain model parameter $\beta_j$ has no effect).
The residual term of a model has $n-p$ degrees of freedom, with $n$ the number of observations/errors and $p$ the number of parameters that are used to fit the model. You could see this intuitively as the residuals being obtained from the errors by projecting the errors onto the space orthogonal to the columns of the model matrix $X$ (this space has dimension $n-p$). A projection of a multivariate normal distributed variable is itself multivariate normal, but of lower dimension. So while you may have $n$ residuals, they are effectively $n-p$ independent residuals embedded in an $n$-dimensional space.
Now when you consider adding an extra variable to model 1, to obtain model 2, then you could analyze this by considering the errors being projected onto a smaller space. If model 2 has no effect (i.e. the added columns that make model $X_2$ from $X_1$ are just random) then one could state a null hypothesis that the reduced sums of squared residuals for model 1 and model 2 are equal. This is what is tested in an F-test (using the ratio of those reduced residuals) to obtain a p-value for the effect of changing model 1 into 2 (and you could do this for every variable $\beta_i$, where the way that you do this changes a bit; see for instance How to interpret type I, type II, and type III ANOVA and MANOVA?).
So you split the sum of squared residuals of a simple model $RSS_{simple}$ into two projections (representing independent variables if the null hypothesis is true). One part is a projection onto the (smaller) space of a full model $RSS_{full}$ and the other part is the projection onto the space spanned by the model (which can be expressed by the difference) $RSS_{simple}-RSS_{full}$. And the ratio used in the F-test is
$$F = \frac{\left(\frac{RSS_{simple}-RSS_{full}}{p_{full}-p_{simple}}\right)}{\left(\frac{RSS_{full}}{n-p_{full}}\right)}$$
|
How to calculate p-value for multivariate linear regression
t-test
With a t-test you standardize the measured parameters by dividing by them by the variance. If the variance is an estimate then this standardized value will be distributed according to the t-dis
|
10,840
|
Why isn't Akaike information criterion used more in machine learning?
|
AIC and BIC are used, e.g. in stepwise regression. They are actually part of a larger class of "heuristics", which are also used. For example the DIC (Deviance Information Criterion) is often used in Bayesian Model selection.
However, they are basically "heuristics". While it can be shown that both the AIC and BIC converge asymptotically towards cross-validation approaches (I think AIC goes towards leave-one-out CV, and BIC towards some other approach, but I am not sure), they are known to under-penalize and over-penalize respectively. I.e. using AIC you will often get a model which is more complicated than it should be, whereas with BIC you often get a model which is too simplistic.
Since both are related to CV, and CV does not suffer from these problems, CV is often a better choice.
Then finally there is the issue of the # of parameters which are required for BIC and AIC. With general function approximators (e.g. KNNs) on real-valued inputs, it is possible to "hide" parameters, i.e. to construct a real number which contains the same information as two real numbers (think e.g. of interleaving the digits). In that case, what is the actual number of parameters? On the other hand, with more complicated models, you may have constraints on your parameters, say you can only fit parameters such that $\theta_1 > \theta_2$ (see e.g. here). Or you may have non-identifiability, in which case multiple values of the parameters actually give the same model. In all these cases, simply counting parameters does not give a suitable estimate.
Since many contemporary machine-learning algorithms show these properties (i.e. universal approximation, unclear number of parameters, non-identifiability), AIC and BIC are less useful for these models than they may seem at first glance.
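For concreteness, when the parameter count is well defined the criteria themselves are cheap to compute: under Gaussian errors they reduce (up to an additive constant) to $\mathrm{AIC} = n\ln(RSS/n) + 2k$ and $\mathrm{BIC} = n\ln(RSS/n) + k\ln n$. A Python sketch with made-up polynomial fits:

```python
import numpy as np

def aic_bic(rss, n, k):
    # Gaussian-likelihood forms (up to an additive constant):
    # AIC = n*ln(RSS/n) + 2k,  BIC = n*ln(RSS/n) + k*ln(n)
    base = n * np.log(rss / n)
    return base + 2 * k, base + k * np.log(n)

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)   # the truth is a straight line

for k in (2, 3, 6):                      # polynomial fits of growing size
    X = np.vander(x, k)                  # columns x^(k-1), ..., x, 1
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ beta) ** 2).sum())
    aic, bic = aic_bic(rss, n, k)
    # here ln(200) > 2, so BIC penalizes the extra coefficients more heavily
```

This also makes the differing penalties visible: for any $n > e^2 \approx 7.4$ the BIC penalty per parameter exceeds the AIC penalty, which is the mechanism behind the over-/under-penalization noted above.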
EDIT:
Some more points that could be clarified:
It seems I was wrong to consider the mapping by interleaving digits a bijection $\mathbb{R}\rightarrow\mathbb{R}^N$ (see here), and the details of why this isn't a bijection are a bit hard to understand. However, we don't actually need a bijection for this idea to work (a surjection is enough).
According to a proof by Cantor (1877) there must be a bijection $\mathbb{R}\rightarrow\mathbb{R}^N$. Although this bijection cannot be defined explicitly, its existence can be proven (but this requires the unproven axiom of choice). This bijection can still be used in a theoretical model (it may not be possible to actually implement this model in a computer), to unpack a single parameter into an arbitrary number of parameters.
We don't actually need the mapping $\mathbb{R}\rightarrow\mathbb{R}^N$ to be a bijection. Any surjective function $\mathbb{R}\rightarrow\mathbb{R}^N$ is enough to unpack multiple parameters from a single one. Such surjections can be shown to exist as limits of a sequence of other functions (so-called space-filling curves, e.g. the Peano curve).
Because neither the proof by Cantor is constructive (it simply proves the existence of the bijection without giving an example), nor the space-filling curves (because they only exist as limits of constructive objects and therefore are not constructive themselves), the argument I made is only a theoretical proof. In theory, we could just keep adding parameters to a model to reduce the BIC below any desired value (on the training set). However, in an actual model implementation we have to approximate the space-filling curve, so approximation error may prohibit us from actually doing so (I have not actually tested this).
Because all this requires the axiom of choice, the proof becomes invalid if you don't accept this axiom (although most mathematicians do so). That means, in constructive math this may not be possible, but I don't know what role constructive math plays for statistics.
Identifiability is intrinsically linked to functional complexity. If one simply takes an identifiable $N$-parameter model and adds a superfluous parameter (e.g. not used anywhere), then the new model becomes non-identifiable. Essentially, one is using a model that has the complexity of $\mathbb{R}^{N+1}$ to solve a problem that has complexity $\mathbb{R}^N$. Similarly with other forms of non-identifiability. Take for example the case of non-identifiable parameter permutations. In that case, one is using a model that has the complexity of $\mathbb{R}^N$; however, the actual problem only has the complexity of a set of equivalence classes over $\mathbb{R}^N$. However, this is only an informal argument; I don't know of any formal treatment of this notion of "complexity".
|
Why isn't Akaike information criterion used more in machine learning?
|
AIC and BIC are used, e.g. in stepwise regression. They are actually part of a larger class of "heuristics", which are also used. For example the DIC (Deviance Information Criterion) is often used in
|
Why isn't Akaike information criterion used more in machine learning?
AIC and BIC are used, e.g. in stepwise regression. They are actually part of a larger class of "heuristics", which are also used. For example the DIC (Deviance Information Criterion) is often used in Bayesian Model selection.
However, they are basically "heuristics". While it can be shown that both the AIC and BIC converge asymptotically towards cross-validation approaches (I think AIC goes towards leave-one-out CV, and BIC towards some other approach, but I am not sure), they are known to under-penalize and over-penalize respectively. I.e. using AIC you will often get a model which is more complicated than it should be, whereas with BIC you often get a model which is too simplistic.
Since both are related to CV, and CV does not suffer from these problems, CV is often a better choice.
Then finally there is the issue of the # of parameters which are required for BIC and AIC. With general function approximators (e.g. KNNs) on real-valued inputs, it is possible to "hide" parameters, i.e. to construct a real number which contains the same information as two real numbers (think e.g. of interleaving the digits). In that case, what is the actual number of parameters? On the other hand, with more complicated models, you may have constraints on your parameters, say you can only fit parameters such that $\theta_1 > \theta_2$ (see e.g. here). Or you may have non-identifiability, in which case multiple values of the parameters actually give the same model. In all these cases, simply counting parameters does not give a suitable estimate.
Since many contemporary machine-learning algorithms show these properties (i.e. universal approximation, unclear number of parameters, non-identifiability), AIC and BIC are less useful for these models than they may seem at first glance.
EDIT:
Some more points that could be clarified:
It seems I was wrong to consider the mapping by interleaving digits a bijection $\mathbb{R}\rightarrow\mathbb{R}^N$ (see here), and the details of why this isn't a bijection are a bit hard to understand. However, we don't actually need a bijection for this idea to work (a surjection is enough).
According to a proof by Cantor (1877) there must be a bijection $\mathbb{R}\rightarrow\mathbb{R}^N$. Although this bijection cannot be defined explicitly, its existence can be proven (but this requires the unproven axiom of choice). This bijection can still be used in a theoretical model (it may not be possible to actually implement this model in a computer), to unpack a single parameter into an arbitrary number of parameters.
We don't actually need the mapping $\mathbb{R}\rightarrow\mathbb{R}^N$ to be a bijection. Any surjective function $\mathbb{R}\rightarrow\mathbb{R}^N$ is enough to unpack multiple parameters from a single one. Such surjections can be shown to exist as limits of a sequence of other functions (so-called space-filling curves, e.g. the Peano curve).
Because neither the proof by Cantor is constructive (it simply proves the existence of the bijection without giving an example), nor the space-filling curves (because they only exist as limits of constructive objects and therefore are not constructive themselves), the argument I made is only a theoretical proof. In theory, we could just keep adding parameters to a model to reduce the BIC below any desired value (on the training set). However, in an actual model implementation we have to approximate the space-filling curve, so approximation error may prohibit us from actually doing so (I have not actually tested this).
Because all this requires the axiom of choice, the proof becomes invalid if you don't accept this axiom (although most mathematicians do so). That means, in constructive math this may not be possible, but I don't know what role constructive math plays for statistics.
Identifiability is intrinsically linked to functional complexity. If one simply takes an identifiable $N$-parameter model and adds a superfluous parameter (e.g. not used anywhere), then the new model becomes non-identifiably. Essentially, one is using a model that has the complexity of the $\mathbb{R}^{N+1}$ to solve a problem that has complexity $\mathbb{R}^N$. Similarly, with other forms of non-identifiability. Take for example the case of non-identifiable parameter permutations. In that case, one is using a model that has the complexity of the $\mathbb{R}^N$, however, the actual problem only has the complexity of a set of equivalence classes over the $\mathbb{R}^N$. However, this is only an informal argument, I don't know of any formal treatment of this notion of "complexity".
|
10,841
|
Logistic Regression - Multicollinearity Concerns/Pitfalls
|
All of the same principles concerning multicollinearity apply to logistic regression as they do to OLS. The same diagnostics for assessing multicollinearity can be used (e.g. VIF, condition number, auxiliary regressions), and the same dimension-reduction techniques can be used (such as combining variables via principal components analysis).
This answer by chl will lead you to some resources and R packages for fitting penalized logistic models (as well as a good discussion on these types of penalized regression procedures). But some of your comments about "solutions" to multicollinearity are a bit disconcerting to me. If you only care about estimating relationships for variables that are not collinear, these "solutions" may be fine, but if you're interested in estimating coefficients of variables that are collinear, these techniques do not solve your problem. Although the problem of multicollinearity is technical, in that your matrix of predictor variables cannot be inverted, it has a logical analog in that your predictors are not independent and their effects cannot be uniquely identified.
|
10,842
|
Is there a decision-tree-like algorithm for unsupervised clustering?
|
You may want to consider the following approach:
Use any clustering algorithm that is adequate for your data
Assume the resulting clusters are classes
Train a decision tree on the clusters
This will allow you to try different clustering algorithms, but you will get a decision tree approximation for each of them.
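A minimal sketch of this pipeline on 1-D toy data (illustrative Python; a tiny 2-means stands in for "any clustering algorithm", and in 1-D a single threshold plays the role of the decision tree):

```python
import random

random.seed(0)
# Toy 1-D data with two well-separated groups
data = [random.gauss(0, 1) for _ in range(50)] + [random.gauss(10, 1) for _ in range(50)]

# Step 1: any clustering algorithm -- here a tiny 2-means (Lloyd's algorithm)
c1, c2 = min(data), max(data)
for _ in range(20):
    g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)

# Step 2: treat the cluster assignments as class labels
labels = [0 if abs(x - c1) <= abs(x - c2) else 1 for x in data]

# Step 3: train a "decision tree" on the labels -- in 1-D a single split
# (a depth-1 tree) is enough to approximate the clustering
threshold = (c1 + c2) / 2
predictions = [0 if x <= threshold else 1 for x in data]
accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(data)
```

In practice you would substitute real implementations for steps 1 and 3, e.g. scikit-learn's KMeans and DecisionTreeClassifier.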
|
10,843
|
Is there a decision-tree-like algorithm for unsupervised clustering?
|
The first paper that comes to mind is this:
Clustering Via Decision Tree Construction
https://pdfs.semanticscholar.org/8996/148e8f0b34308e2d22f78ff89bf1f038d1d6.pdf
As another answer mentioned, "hierarchical divisive" (top down) and "hierarchical agglomerative" (bottom up) are both well-known techniques that use trees to do clustering. SciPy has this.
If you are OK with custom code (I don't know of any library for this), there are two techniques I can recommend. Be warned that these are not technically clustering because of the mechanics they rely on. You might call this pseudo clustering.
1) Supervised: This is somewhat similar to the paper (worth reading). Build a single decision tree model to learn some target (you decide what makes sense). The target could be a randomly generated column (requires repeating and evaluating what iteration was best, see below). Define each full path of the tree as a "cluster" since points that fall through that series of branches are technically similar in regards to the target. This only works well on some problems, but it's efficient at large scale. You end up with K clusters (see below).
2) Semi-supervised (sort of unsupervised, but mechanically supervised), using #1: you can try building trees to predict columns in a leave-one-out pattern, i.e. if the schema is [A,B,C], build 3 models: [A,B] -> C, [A,C] -> B, [B,C] -> A. You get K*N clusters (see below), where N = len(schema). If some of these features are not interesting or too imbalanced (in the case of categories), don't use them as targets.
Summary: The model will select features in order based on information or purity and clusters will be based on just a few features rather than all. There is no concept of distance in these clusters, but you could certainly devise one based on the centers.
Pros: easy to understand and explain, quick training and inference, works well with few strong features, works with categories. When your features are in essence heterogeneous and you have many features, you don't have to spend as much time deciding which to use in the distance function.
Cons: not standard, must be written, naive bias, collinearity with target causes bad results, having 1000 equally important features will not work well (KMeans with Euclidean distance is better here).
How many clusters do you get? You must, absolutely must restrict the DT model to not grow too much. e.g. Set min samples per leaf, max leaf nodes (preferred), or max depth. Optionally, set purity or entropy constraints. You must check how many clusters this gave you and evaluate if this method is better than real clustering.
Did the techniques and parameters work well for you? Which was best? To find out, you need to do cluster evaluation: Performance metrics to evaluate unsupervised learning
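As a rough sketch of technique 2 on toy data (illustrative Python, hand-rolled rather than a library tree): fit a depth-1 regression stump predicting one column from another, then read each leaf as a cluster.

```python
import random

random.seed(1)
# Two latent groups; both observed features depend on the group
groups = [random.randint(0, 1) for _ in range(200)]
x1 = [3.0 * g + random.gauss(0, 0.5) for g in groups]
x2 = [5.0 * g + random.gauss(0, 0.5) for g in groups]

def sse(vals):
    """Sum of squared deviations from the mean (tree impurity for regression)."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

# Depth-1 regression tree predicting x2 from x1: try every threshold on x1
best_t, best_cost = None, float("inf")
for t in sorted(set(x1))[1:-1]:
    left = [y for v, y in zip(x1, x2) if v < t]
    right = [y for v, y in zip(x1, x2) if v >= t]
    cost = sse(left) + sse(right)
    if cost < best_cost:
        best_t, best_cost = t, cost

# Each leaf is a pseudo-cluster (K = 2 leaves here)
clusters = [0 if v < best_t else 1 for v in x1]
agreement = sum(c == g for c, g in zip(clusters, groups)) / len(groups)
```

Here the leaves recover the latent groups because both features carry the group signal; with unrelated columns the leaves would not be meaningful clusters.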
|
10,844
|
Is there a decision-tree-like algorithm for unsupervised clustering?
|
What you're looking for is a divisive clustering algorithm.
Most common algorithms are agglomerative, which cluster the data in a bottom up manner - each observation starts as its own cluster and clusters get merged. Divisive clustering is top down - observations start in one cluster which is gradually divided.
The desire to look like a decision tree limits the choices as most algorithms operate on distances within the complete data space rather than splitting one variable at a time.
DIANA is the only divisive clustering algorithm I know of, and I think it is structured like a decision tree. I would be amazed if there aren't others out there.
You could use a standard decision tree algorithm if you modify the splitting rule to a metric that does not consider a defined dependent variable, but rather uses a cluster goodness metric.
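A sketch of that last suggestion (illustrative Python, not DIANA itself): one divisive, tree-style split where the splitting rule is a cluster-goodness metric (the reduction in within-cluster sum of squares) rather than anything involving a dependent variable.

```python
import random

random.seed(2)
# Toy 1-D data with two groups
data = [random.gauss(0, 1) for _ in range(40)] + [random.gauss(8, 1) for _ in range(40)]

def within_sse(vals):
    """Within-cluster sum of squares -- our 'cluster goodness' impurity."""
    if not vals:
        return 0.0
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

# One divisive, decision-tree-style split: choose the threshold that most
# reduces total within-cluster SSE (no dependent variable involved)
candidates = sorted(data)[1:-1]
best_t = max(
    candidates,
    key=lambda t: within_sse(data)
    - (within_sse([v for v in data if v < t]) + within_sse([v for v in data if v >= t])),
)
left = [v for v in data if v < best_t]
right = [v for v in data if v >= best_t]
```

Applying the same rule recursively to each side gives a divisive hierarchy with decision-tree-style, single-variable splits.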
|
10,845
|
Is there a decision-tree-like algorithm for unsupervised clustering?
|
One idea to consider: suppose you have k features and n points. You can build random trees using (k-1) features as predictors and 1 feature as the dependent variable Y. You can select a height h after which you will have data points in the leaves. You can then take a vote across the different trees. Just a thought.
|
10,846
|
What's a good tool to create Sankey diagrams?
|
Have you seen this list? And there is also a function in R available. I personally would start with the path geom and size aesthetic in ggplot2 and see where that got me.
I haven't tested any of these. If you find a preferred option perhaps you could let us all know as they are rather cool graphics.
|
10,847
|
What's a good tool to create Sankey diagrams?
|
If you are looking for client side (JavaScript library) you can try:
http://tamc.github.com/Sankey/
You can also see a related question on StackOverFlow: https://stackoverflow.com/q/4545254/179529
|
10,848
|
What's a good tool to create Sankey diagrams?
|
Check out my HTML5 D3 Sankey Diagram Generator - complete with self-loops and all :) http://sankey.csaladen.es
|
10,849
|
What's a good tool to create Sankey diagrams?
|
Are you looking for a library or a web app? If the latter, you may try this Sankey Builder. It has a visual interface and drag & drop UI. It is just a beta; at this point it only saves data to its URL and is not optimised for mobile.
http://wikibudgets.org/sankey/
disclaimer: I work for wikiBudgets
|
10,850
|
What's a good tool to create Sankey diagrams?
|
Our Sankey Diagram app is for Apple iOS. It uses the touch interface to provide intuitive creation of flow diagrams. Just search for "Sankey Diagram" in the Apple iTunes app store. Our web site is squishLogic Sankey Diagram
|
10,851
|
What's a good tool to create Sankey diagrams?
|
I just uploaded a brand new online Sankey Builder. You can upload data, configure via a wide range of tools and save. It allows you to drag and drop fields from your data to customize the Sankey Flow and dynamically add filters on any field in the diagram to squeeze the data. It has automatic highlights of bands across the diagram to highlight data relationships, and you can even mix and match colors. It features automatic paging for large data sets! A sort feature operates by value or field in ascending order.
All settings can be saved for a future visit. The tool includes a share feature which allows you to enable or disable any of the settings. The unique URL created by the tool allows you to distribute the Sankey Diagram to visitors for a read-only interactive version of Sankey Builder! Check out the demo (see below). You can even build a free Sankey diagram which is hosted for free. Simply visit http://SankeyBuilder.com and signup (all you need is an email address). That will get you access to SankeyBuilder for free. Demo: http://sankeybuilder.com/sankeybuilder.aspx?url=bbe34d97f8 and Tutorials are at http://sankeybuilder.com.
Hope this helps.
|
10,852
|
What's a good tool to create Sankey diagrams?
|
Draw Sankey diagrams directly in your browser with "Sankey Flow Show - Attractive flow diagrams made in minutes!"
http://www.sankeyflowshow.com
|
10,853
|
What's a good tool to create Sankey diagrams?
|
Like everything, LaTeX is the way!
If you do not know LaTeX at all you might not want this solution, but if you know even just a little, you just have to change parameters according to what you want, and it will be more precise and flexible than many other possibilities.
Note that you will have to enter the values manually.
You can use the TikZ library to produce that
Here is the code
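A minimal illustrative sketch (not the original answer's listing) that draws each flow with line width proportional to its value, entered by hand:

```latex
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}[flow/.style={gray!60, line cap=round}]
  % Source and sink labels
  \node (A) at (0,2) {Source A};
  \node (B) at (0,0) {Source B};
  \node (C) at (5,1) {Sink};
  % Line width is set manually in proportion to each flow's value
  \draw[flow, line width=8pt] (A.east) to[out=0, in=160] (C.west); % value 8
  \draw[flow, line width=4pt] (B.east) to[out=0, in=200] (C.west); % value 4
\end{tikzpicture}
\end{document}
```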
|
10,854
|
What's a good tool to create Sankey diagrams?
|
We built on Denes's online (web app) Sankey Diagram Generator with some more functions and the ability to copy data in from CSV, Excel pivot tables, and JSON.
Sankey Diagram Generator
|
10,855
|
Does the distribution $\log(1 + x^{-2}) / 2\pi$ have a name?
|
Indeed, even the first moment does not exist. The CDF of this distribution is given by
$$F(x) = 1/2 + \left(\arctan(x) - x \log(\sin(\arctan(x)))\right)/\pi$$
for $x \ge 0$ and, by symmetry, $F(x) = 1 - F(|x|)$ for $x \lt 0$. Neither this nor any of the obvious transforms looks familiar to me. (The fact that we can obtain a closed form for the CDF in terms of elementary functions already severely limits the possibilities, but the somewhat obscure and complicated nature of this closed form quickly rules out standard distributions or power/log/exponential/trig transformations of them. The arctangent is, of course, the CDF of a Cauchy (Student $t_1$) distribution, exhibiting this CDF as a (substantially) perturbed version of the Cauchy distribution, shown as red dashes.)
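A quick numerical sanity check of this closed form (an illustrative Python sketch; the midpoint rule copes with the integrable logarithmic singularity at the origin):

```python
import math

def pdf(x):
    """Density log(1 + x^-2) / (2*pi)."""
    return math.log(1 + x ** -2) / (2 * math.pi)

def cdf(x):
    """Closed-form CDF for x > 0."""
    t = math.atan(x)
    return 0.5 + (t - x * math.log(math.sin(t))) / math.pi

def cdf_numeric(x, n=100_000):
    """0.5 plus a midpoint-rule integral of the density over (0, x)."""
    h = x / n
    return 0.5 + sum(pdf((i + 0.5) * h) for i in range(n)) * h

for x in (0.5, 1.0, 3.0):
    assert abs(cdf(x) - cdf_numeric(x)) < 1e-4
```

At $x = 1$, for instance, both give $1/2 + (\pi/4 + \tfrac12\log 2)/\pi \approx 0.8603$.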
|
10,856
|
Does the distribution $\log(1 + x^{-2}) / 2\pi$ have a name?
|
Perhaps not.
I could not find it in this fairly extensive list of distributions:
Leemis, L. M., and McQueston, J. T. (2008). Univariate Distribution Relationships. The American Statistician 62(1): 45–53.
|
10,857
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
|
While your question is similar to a number of other questions on site, aspects of this question (such as your emphasis on consistency) make me think they're not sufficiently close to being duplicates.
Why not choose some other objective function to minimize?
Why not, indeed? If your objective is different from least squares, you should address your objective instead!
Nevertheless, least squares has a number of nice properties (not least, an intimate connection to estimating means, which many people want, and a simplicity which makes it an obvious first choice when teaching or trying to implement new ideas).
Further, in many cases people don't have a clear objective function, so there's an advantage to choosing what's readily available and widely understood.
That said, least squares also has some less-nice properties (sensitivity to outliers, for example) -- so sometimes people prefer a more robust criterion.
minimize the sum of square error will give you CONSISTENT estimator of your model parameters
Least squares is not a requirement for consistency. Consistency isn't a very high hurdle -- plenty of estimators will be consistent. Almost all estimators people use in practice are consistent.
and by Gauss-Markov theorem, this estimator is BLUE.
But in situations where all linear estimators are bad (as would be the case under extreme heavy-tails, say), there's not much advantage in the best one.
if you choose to minimize some other objective function that is not the SSE, then there is no guarantee that you will get consistent estimator of your model parameter. Is my understanding correct?
it's not hard to find consistent estimators, so no that's not an especially good justification of least squares
why when we try to compare different models using cross validation, we again, use the SSE as the judgment criterion? [...] Why not other criterion?
If your objective is better reflected by something else, why not indeed?
There is no lack of people using other objective functions than least squares. It comes up in M-estimation, in least-trimmed estimators, in quantile regression, and when people use LINEX loss functions, just to name a few.
was thinking that when you have a dataset, you first set up your model, i.e. make a set of functional or distributional assumptions. In your model, there are some parameters (assume it is a parametric model),
Presumably the parameters of the functional assumptions are what you're trying to estimate - in which case, the functional assumptions are what you do least squares (or whatever else) around; they don't determine the criterion, they're what the criterion is estimating.
On the other hand, if you have a distributional assumption, then you have a lot of information about a more suitable objective function -- presumably, for example, you'll want to get efficient estimates of your parameters -- which in large samples will tend to lead you toward MLE, (though possibly in some cases embedded in a robustified framework).
then you need to find a way to consistently estimate these parameters. Whether you minimize the SSE or LAD or some other objective function,
LAD is a quantile estimator. It's a consistent estimator of the parameter it should estimate in the conditions in which it should be expected to be, in the same way that least squares is. (If you look at what you show consistency for with least squares, there's corresponding results for many other common estimators. People rarely use inconsistent estimators, so if you see an estimator being widely discussed, unless they're talking about its inconsistency, it's almost certainly consistent.*)
* That said, consistency isn't necessarily an essential property. After all, for my sample, I have some particular sample size, not a sequence of sample sizes tending to infinity. What matters are the properties at the $n$ I have, not some infinitely larger $n$ that I don't have and will never see. But much more care is required when we have inconsistency - we may have a good estimator at $n$=20, but it may be terrible at $n$=2000; there's more effort required, in some sense, if we want to use inconsistent estimators as a matter of course.
If you use LAD to estimate the mean of an exponential, it won't be consistent for that (though a trivial scaling of its estimate would be) -- but by the same token if you use least squares to estimate the median of an exponential, it won't be consistent for that (and again, a trivial rescaling fixes that).
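As a quick simulated check of that last point (a Python/NumPy sketch; the exponential mean, sample size, and seed are invented for illustration): with a constant model, least squares lands on the sample mean while LAD lands on the sample median, so for exponential data LAD converges to $\theta\log 2$ rather than the mean $\theta$, until trivially rescaled.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                          # true mean of the exponential
x = rng.exponential(theta, size=200_000)

# argmin_c sum (x_i - c)^2 is the sample mean: least squares targets the mean.
ls_est = x.mean()

# argmin_c sum |x_i - c| is the sample median: LAD targets the median,
# which for an exponential is theta * log(2), not theta.
lad_est = np.median(x)

print(ls_est)                # near theta = 2.0
print(lad_est)               # near theta * log(2), about 1.386
print(lad_est / np.log(2))   # the trivial rescaling: near 2.0 again
```

Neither estimator is "inconsistent" here; each is consistent for its own target, which is exactly the point made above.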
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
|
While your question is similar to a number of other questions on site, aspects of this question (such as your emphasis on consistency) make me think they're not sufficiently close to being duplicates.
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
While your question is similar to a number of other questions on site, aspects of this question (such as your emphasis on consistency) make me think they're not sufficiently close to being duplicates.
Why not choose some other objective function to minimize?
Why not, indeed? If your objective is different from least squares, you should address your objective instead!
Nevertheless, least squares has a number of nice properties (not least, an intimate connection to estimating means, which many people want, and a simplicity which makes it an obvious first choice when teaching or trying to implement new ideas).
Further, in many cases people don't have a clear objective function, so there's an advantage to choosing what's readily available and widely understood.
That said, least squares also has some less-nice properties (sensitivity to outliers, for example) -- so sometimes people prefer a more robust criterion.
minimize the sum of square error will give you CONSISTENT estimator of your model parameters
Least squares is not a requirement for consistency. Consistency isn't a very high hurdle -- plenty of estimators will be consistent. Almost all estimators people use in practice are consistent.
and by Gauss-Markov theorem, this estimator is BLUE.
But in situations where all linear estimators are bad (as would be the case under extreme heavy-tails, say), there's not much advantage in the best one.
if you choose to minimize some other objective function that is not the SSE, then there is no guarantee that you will get consistent estimator of your model parameter. Is my understanding correct?
it's not hard to find consistent estimators, so no that's not an especially good justification of least squares
why when we try to compare different models using cross validation, we again, use the SSE as the judgment criterion? [...] Why not other criterion?
If your objective is better reflected by something else, why not indeed?
There is no lack of people using other objective functions than least squares. It comes up in M-estimation, in least-trimmed estimators, in quantile regression, and when people use LINEX loss functions, just to name a few.
was thinking that when you have a dataset, you first set up your model, i.e. make a set of functional or distributional assumptions. In your model, there are some parameters (assume it is a parametric model),
Presumably the parameters of the functional assumptions are what you're trying to estimate - in which case, the functional assumptions are what you do least squares (or whatever else) around; they don't determine the criterion, they're what the criterion is estimating.
On the other hand, if you have a distributional assumption, then you have a lot of information about a more suitable objective function -- presumably, for example, you'll want to get efficient estimates of your parameters -- which in large samples will tend to lead you toward MLE, (though possibly in some cases embedded in a robustified framework).
then you need to find a way to consistently estimate these parameters. Whether you minimize the SSE or LAD or some other objective function,
LAD is a quantile estimator. It's a consistent estimator of the parameter it should estimate in the conditions in which it should be expected to be, in the same way that least squares is. (If you look at what you show consistency for with least squares, there's corresponding results for many other common estimators. People rarely use inconsistent estimators, so if you see an estimator being widely discussed, unless they're talking about its inconsistency, it's almost certainly consistent.*)
* That said, consistency isn't necessarily an essential property. After all, for my sample, I have some particular sample size, not a sequence of sample sizes tending to infinity. What matters are the properties at the $n$ I have, not some infinitely larger $n$ that I don't have and will never see. But much more care is required when we have inconsistency - we may have a good estimator at $n$=20, but it may be terrible at $n$=2000; there's more effort required, in some sense, if we want to use inconsistent estimators as a matter of course.
If you use LAD to estimate the mean of an exponential, it won't be consistent for that (though a trivial scaling of its estimate would be) -- but by the same token if you use least squares to estimate the median of an exponential, it won't be consistent for that (and again, a trivial rescaling fixes that).
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
While your question is similar to a number of other questions on site, aspects of this question (such as your emphasis on consistency) make me think they're not sufficiently close to being duplicates.
|
10,858
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
|
You asked a statistics question, and I hope that my control system engineer answer is a stab at it from enough of a different direction to be enlightening.
Here is a "canonical" information-flow form for control system engineering:
The "r" is for reference value. It is summed with an "F" transform of the output "y" to produce an error "e". This error is the input for a controller, transformed by the control transfer function "C" into a control input for the plant "P". It is meant to be general enough to apply to arbitrary plants. The "plant" could be a car engine for cruise control, or the angle of input of an inverted pendulum.
Let's say you have a plant with a known transfer function with phenomenology suitable to the following discussion, a current state, and a desired end state. (table 2.1 pp68) There are an infinite number of unique paths that the system, with different inputs, could traverse to get from the initial to final state. The textbook controls engineer "optimal approaches" include time optimal (shortest time/bang-bang), distance optimal (shortest path), force optimal (lowest maximum input magnitude), and energy optimal (minimum total energy input).
Just like there are an infinite number of paths, there are an infinite number of "optimals" - each of which selects one of those paths. If you pick one path and say it is best then you are implicitly picking a "measure of goodness" or "measure of optimality".
In my personal opinion, I think folks like L-2 norm (aka energy optimal, aka least squared error) because it is simple, easy to explain, easy to execute, has the property of doing more work against bigger errors than smaller ones, and leaves with zero bias. Consider h-infinity norms where the variance is minimized and bias is constrained but not zero. They can be quite useful, but they are more complex to describe, and more complex to code.
I think the L2-norm, aka the energy-minimizing optimal path, aka least squared error fit, is easy and in a lazy sense fits the heuristic that "bigger errors are more bad, and smaller errors are less bad". There are literally an infinite number of algorithmic ways to formulate this, but squared error is one of the most convenient. It requires only algebra, so more people can understand it. It works in the (popular) polynomial space. Energy-optimal is consistent with much of the physics that comprise our perceived world, so it "feels familiar". It is decently fast to compute and not too horrible on memory.
If I get more time I would like to put pictures, codes, or bibliographic references.
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
|
You asked a statistics question, and I hope that my control system engineer answer is a stab at it from enough of a different direction to be enlightening.
Here is a "canonical" information-flow form
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
You asked a statistics question, and I hope that my control system engineer answer is a stab at it from enough of a different direction to be enlightening.
Here is a "canonical" information-flow form for control system engineering:
The "r" is for reference value. It is summed with an "F" transform of the output "y" to produce an error "e". This error is the input for a controller, transformed by the control transfer function "C" into a control input for the plant "P". It is meant to be general enough to apply to arbitrary plants. The "plant" could be a car engine for cruise control, or the angle of input of an inverted pendulum.
Let's say you have a plant with a known transfer function with phenomenology suitable to the following discussion, a current state, and a desired end state. (table 2.1 pp68) There are an infinite number of unique paths that the system, with different inputs, could traverse to get from the initial to final state. The textbook controls engineer "optimal approaches" include time optimal (shortest time/bang-bang), distance optimal (shortest path), force optimal (lowest maximum input magnitude), and energy optimal (minimum total energy input).
Just like there are an infinite number of paths, there are an infinite number of "optimals" - each of which selects one of those paths. If you pick one path and say it is best then you are implicitly picking a "measure of goodness" or "measure of optimality".
In my personal opinion, I think folks like L-2 norm (aka energy optimal, aka least squared error) because it is simple, easy to explain, easy to execute, has the property of doing more work against bigger errors than smaller ones, and leaves with zero bias. Consider h-infinity norms where the variance is minimized and bias is constrained but not zero. They can be quite useful, but they are more complex to describe, and more complex to code.
I think the L2-norm, aka the energy-minimizing optimal path, aka least squared error fit, is easy and in a lazy sense fits the heuristic that "bigger errors are more bad, and smaller errors are less bad". There are literally an infinite number of algorithmic ways to formulate this, but squared error is one of the most convenient. It requires only algebra, so more people can understand it. It works in the (popular) polynomial space. Energy-optimal is consistent with much of the physics that comprise our perceived world, so it "feels familiar". It is decently fast to compute and not too horrible on memory.
If I get more time I would like to put pictures, codes, or bibliographic references.
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
You asked a statistics question, and I hope that my control system engineer answer is a stab at it from enough of a different direction to be enlightening.
Here is a "canonical" information-flow form
|
10,859
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
|
I think that, when fitting models, we usually choose to minimize the sum of squared errors ($SSE$) due to the fact that $SSE$ has a direct (negative) relation with $R^2$, a major goodness-of-fit (GoF) statistic for a model, as follows ($SST$ is sum of squares total):
$$ R^2 = 1 - \frac{SSE}{SST} $$
Omitting the discussion of why an adjusted $R^2$ is a better (unbiased) GoF statistic due to its correction for sample size and number of coefficients (see this or this), it seems to me that this connection is important, as the $R^2$ family of statistics represents relative measures of fit, versus absolute measures such as root mean squared error ($RMSE$).
Moreover, the fact that $R^2$ is equal to the percentage of the variance in the dependent variable that can be explained by all of the independent variables taken together makes $R^2$ and, thus, indirectly, $SSE$, measures of the explanatory power (or predictive power) of a model. In fact, for predictive models, some people recommend using an $SSE$-like statistic, the predicted residual sum of squares ($PRESS$). For details, see this post and this post, which are relevant to the question at the end of your post.
Concluding and answering your main question, I think that we usually minimize $SSE$, because it is equivalent to maximizing explanatory or predictive power of a statistical model in question.
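To make the relation $R^2 = 1 - SSE/SST$ concrete, here is a minimal sketch (Python/NumPy; the small dataset is invented) that fits a least-squares line and computes $R^2$ directly from $SSE$ and $SST$:

```python
import numpy as np

# Invented data lying close to the line y = x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

slope, intercept = np.polyfit(x, y, 1)   # ordinary least-squares line
y_hat = intercept + slope * x

sse = np.sum((y - y_hat) ** 2)           # sum of squared errors
sst = np.sum((y - y.mean()) ** 2)        # total sum of squares
r2 = 1 - sse / sst

# For simple linear regression this equals the squared correlation.
print(round(r2, 4))                      # 0.9929 for these data
```

Minimizing $SSE$ with $SST$ fixed is visibly the same thing as maximizing $R^2$.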
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
|
I think that, when fitting models, we usually choose to minimize the sum of squared errors ($SSE$) due to the fact that $SSE$ has a direct (negative) relation with $R^2$, a major goodness-of-fit (GoF)
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
I think that, when fitting models, we usually choose to minimize the sum of squared errors ($SSE$) due to the fact that $SSE$ has a direct (negative) relation with $R^2$, a major goodness-of-fit (GoF) statistic for a model, as follows ($SST$ is sum of squares total):
$$ R^2 = 1 - \frac{SSE}{SST} $$
Omitting the discussion of why an adjusted $R^2$ is a better (unbiased) GoF statistic due to its correction for sample size and number of coefficients (see this or this), it seems to me that this connection is important, as the $R^2$ family of statistics represents relative measures of fit, versus absolute measures such as root mean squared error ($RMSE$).
Moreover, the fact that $R^2$ is equal to the percentage of the variance in the dependent variable that can be explained by all of the independent variables taken together makes $R^2$ and, thus, indirectly, $SSE$, measures of the explanatory power (or predictive power) of a model. In fact, for predictive models, some people recommend using an $SSE$-like statistic, the predicted residual sum of squares ($PRESS$). For details, see this post and this post, which are relevant to the question at the end of your post.
Concluding and answering your main question, I think that we usually minimize $SSE$, because it is equivalent to maximizing explanatory or predictive power of a statistical model in question.
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
I think that, when fitting models, we usually choose to minimize the sum of squared errors ($SSE$) due to the fact that $SSE$ has a direct (negative) relation with $R^2$, a major goodness-of-fit (GoF)
|
10,860
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
|
You might also look at minimizing the maximum error instead of least squares fitting. There is an ample literature on the subject. For a search word, try "Tchebycheff", also spelled "Chebyshev", polynomials.
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
|
You might also look at minimizing the maximum error instead of least squares fitting. There is an ample literature on the subject. For a search word, try "Tchebycheff", also spelled "Chebyshev", polynomi
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
You might also look at minimizing the maximum error instead of least squares fitting. There is an ample literature on the subject. For a search word, try "Tchebycheff", also spelled "Chebyshev", polynomials.
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
You might also look at minimizing the maximum error instead of least squares fitting. There is an ample literature on the subject. For a search word, try "Tchebycheff", also spelled "Chebyshev", polynomi
|
10,861
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
|
It looks like people use squares because it allows them to stay within the realm of linear algebra and avoid more complicated machinery such as convex optimization, which is more powerful but leads to using solvers without nice closed-form solutions.
Also, ideas from this realm of math, known as convex optimization, have not spread widely.
"...Why do we care about square of items. To be honest because we can analyze it...If you say that it correspond to Energy and they buy it then move on quickly...." -- https://youtu.be/l1X4tOoIHYo?t=1416, EE263, L8, 23:36.
Also, here Stephen P. Boyd describes in 2008 how people reach for a familiar hammer and proceed ad hoc:
L20, 01:05:15 -- https://youtu.be/qoCa7kMLXNg?t=3916
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
|
It looks like people use squares because it allows them to stay within the realm of linear algebra and avoid more complicated machinery such as convex optimization, which is more powerful but leads to using solvers
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
It looks like people use squares because it allows them to stay within the realm of linear algebra and avoid more complicated machinery such as convex optimization, which is more powerful but leads to using solvers without nice closed-form solutions.
Also, ideas from this realm of math, known as convex optimization, have not spread widely.
"...Why do we care about square of items. To be honest because we can analyze it...If you say that it correspond to Energy and they buy it then move on quickly...." -- https://youtu.be/l1X4tOoIHYo?t=1416, EE263, L8, 23:36.
Also, here Stephen P. Boyd describes in 2008 how people reach for a familiar hammer and proceed ad hoc:
L20, 01:05:15 -- https://youtu.be/qoCa7kMLXNg?t=3916
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
It looks like people use squares because it allows them to stay within the realm of linear algebra and avoid more complicated machinery such as convex optimization, which is more powerful but leads to using solvers
|
10,862
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
|
On a side note:
When factoring in uncertainty over the values of our target variable $t$, we can express the probability distribution of $t$ as $$p(t|x,\textbf{w},\beta) = \mathcal{N}(t|y(x,\textbf{w}),\beta^{-1}),$$
assuming $t$ follows a Gaussian conditioned on the polynomial $y$. Using training data $\{\textbf{x}, \textbf{t}\}$, the likelihood for the model parameters $\textbf{w}$ is given by $$ p(\textbf{t}|\textbf{x}, \textbf{w}, \beta) = \prod_{n=1}^{N}\mathcal{N}(t_n|y(x_n, \textbf{w}),\beta^{-1}).$$
Maximizing the log likelihood $$-\frac{\beta}{2}\sum_{n=1}^{N}\{y(x_n, \textbf{w})-t_n\}^2 + \frac{N}{2}\ln\beta-\frac{N}{2}\ln(2\pi)$$
is the same as minimizing the negative log likelihood. We can drop the second and third terms since they are constant with respect to $\textbf{w}$. Also, the scaling factor $\beta$ in the first term can be dropped, since a constant factor does not change the location of the maximum/minimum, leaving us with
$$-\frac{1}{2}\sum_{n=1}^{N}\{y(x_n, \textbf{w})-t_n\}^2.$$
Thus the SSE has arisen as a consequence of maximizing likelihood under the assumption of a Gaussian noise distribution.
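A quick numeric check of this equivalence (a Python sketch with NumPy/SciPy; the straight-line model and simulated data are invented for illustration): minimizing the negative log likelihood, with the constants dropped as above, recovers exactly the ordinary least-squares fit.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
t = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, x.size)   # line plus Gaussian noise

# Negative log likelihood with beta and the constant terms dropped:
# exactly one half of the SSE for the model y(x, w) = w0 + w1 * x.
def nll(w):
    return 0.5 * np.sum((w[0] + w[1] * x - t) ** 2)

w_ml = minimize(nll, x0=[0.0, 0.0]).x
slope, intercept = np.polyfit(x, t, 1)             # direct least squares

print(w_ml)                  # maximum likelihood estimates (intercept, slope)
print(intercept, slope)      # agree up to optimizer tolerance
```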
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
|
On a side note:
When factoring in uncertainty over the values of our target variable $t$, we can express the probability distribution of $t$ as $$p(t|x,\textbf{w},\beta) = \mathcal{N}(t|y(x,\textbf{w}),\beta^{-1})$$
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
On a side note:
When factoring in uncertainty over the values of our target variable $t$, we can express the probability distribution of $t$ as $$p(t|x,\textbf{w},\beta) = \mathcal{N}(t|y(x,\textbf{w}),\beta^{-1}),$$
assuming $t$ follows a Gaussian conditioned on the polynomial $y$. Using training data $\{\textbf{x}, \textbf{t}\}$, the likelihood for the model parameters $\textbf{w}$ is given by $$ p(\textbf{t}|\textbf{x}, \textbf{w}, \beta) = \prod_{n=1}^{N}\mathcal{N}(t_n|y(x_n, \textbf{w}),\beta^{-1}).$$
Maximizing the log likelihood $$-\frac{\beta}{2}\sum_{n=1}^{N}\{y(x_n, \textbf{w})-t_n\}^2 + \frac{N}{2}\ln\beta-\frac{N}{2}\ln(2\pi)$$
is the same as minimizing the negative log likelihood. We can drop the second and third terms since they are constant with respect to $\textbf{w}$. Also, the scaling factor $\beta$ in the first term can be dropped, since a constant factor does not change the location of the maximum/minimum, leaving us with
$$-\frac{1}{2}\sum_{n=1}^{N}\{y(x_n, \textbf{w})-t_n\}^2.$$
Thus the SSE has arisen as a consequence of maximizing likelihood under the assumption of a Gaussian noise distribution.
|
Why do we usually choose to minimize the sum of square errors (SSE) when fitting a model?
On a side note:
When factoring in uncertainty over the values of our target variable $t$, we can express the probability distribution of $t$ as $$p(t|x,\textbf{w},\beta) = \mathcal{N}(t|y(x,\textbf{w}),\beta^{-1})$$
|
10,863
|
Standard deviation of binned observations
|
This reply presents two solutions: Sheppard's corrections and a maximum likelihood estimate. Both closely agree on an estimate of the standard deviation: $7.70$ for the first and $7.69$ for the second (when adjusted to be comparable to the usual "unbiased" estimator).
Sheppard's corrections
"Sheppard's corrections" are formulas that adjust moments computed from binned data (like these) where
the data are assumed to be governed by a distribution supported on a finite interval $[a,b]$
that interval is divided sequentially into equal bins of common width $h$ that is relatively small (no bin contains a large proportion of all the data)
the distribution has a continuous density function.
They are derived from the Euler-Maclaurin sum formula, which approximates integrals in terms of linear combinations of values of the integrand at regularly spaced points, and are therefore generally applicable (and not just to Normal distributions).
Although strictly speaking a Normal distribution is not supported on a finite interval, to an extremely close approximation it is. Essentially all its probability is contained within seven standard deviations of the mean. Therefore Sheppard's corrections are applicable to data assumed to come from a Normal distribution.
The first two Sheppard's corrections are
Use the mean of the binned data for the mean of the data (that is, no correction is needed for the mean).
Subtract $h^2/12$ from the variance of the binned data to obtain the (approximate) variance of the data.
Where does $h^2/12$ come from? This equals the variance of a uniform variate distributed over an interval of length $h$. Intuitively, then, Sheppard's correction for the second moment suggests that binning the data--effectively replacing them by the midpoint of each bin--appears to add an approximately uniformly distributed value ranging between $-h/2$ and $h/2$, whence it inflates the variance by $h^2/12$.
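The $h^2/12$ figure itself is easy to verify by simulation (a quick Python/NumPy sketch with an arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(2)
h = 5.0
# A uniform variate on an interval of width h, centered at zero.
u = rng.uniform(-h / 2, h / 2, size=1_000_000)

print(u.var())      # close to h**2 / 12 = 25/12, about 2.083
```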
Let's do the calculations. I use R to illustrate them, beginning by specifying the counts and the bins:
counts <- c(1,2,3,4,1)
bin.lower <- c(40, 45, 50, 55, 70)
bin.upper <- c(45, 50, 55, 60, 75)
The proper formula to use for the counts comes from replicating the bin midpoints by the amounts given by the counts; that is, the binned data are equivalent to
42.5, 47.5, 47.5, 52.5, 52.5, 57.5, 57.5, 57.5, 57.5, 72.5
Their number, mean, and variance can be directly computed without having to expand the data in this way, though: when a bin has midpoint $x$ and a count of $k$, then its contribution to the sum of squares is $kx^2$. This leads to the second of the Wikipedia formulas cited in the question.
bin.mid <- (bin.upper + bin.lower)/2
n <- sum(counts)
mu <- sum(bin.mid * counts) / n
sigma2 <- (sum(bin.mid^2 * counts) - n * mu^2) / (n-1)
The mean (mu) is $1195/22 \approx 54.32$ (needing no correction) and the variance (sigma2) is $675/11 \approx 61.36$. (Its square root is $7.83$ as stated in the question.) Because the common bin width is $h=5$, we subtract $h^2/12 = 25/12 \approx 2.08$ from the variance and take its square root, obtaining $\sqrt{675/11 - 5^2/12} \approx 7.70$ for the standard deviation.
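The same arithmetic can be cross-checked in Python (a NumPy sketch mirroring the R computation above):

```python
import numpy as np

counts = np.array([1, 2, 3, 4, 1])
mids = np.array([42.5, 47.5, 52.5, 57.5, 72.5])   # bin midpoints
h = 5.0                                            # common bin width

n = counts.sum()
mu = (mids * counts).sum() / n                              # 1195/22
s2 = ((mids ** 2 * counts).sum() - n * mu ** 2) / (n - 1)   # 675/11

print(np.sqrt(s2))               # about 7.83: SD of the binned data
print(np.sqrt(s2 - h**2 / 12))   # about 7.70: after Sheppard's correction
```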
Maximum Likelihood Estimates
An alternative method is to apply a maximum likelihood estimate. When the assumed underlying distribution has a distribution function $F_\theta$ (depending on parameters $\theta$ to be estimated) and the bin $(x_0, x_1]$ contains $k$ values out of a set of independent, identically distributed values from $F_\theta$, then the (additive) contribution to the log likelihood of this bin is
$$\log \prod_{i=1}^k \left(F_\theta(x_1) - F_\theta(x_0)\right) =
k\log\left(F_\theta(x_1) - F_\theta(x_0)\right)$$
(see MLE/Likelihood of lognormally distributed interval).
Summing over all bins gives the log likelihood $\Lambda(\theta)$ for the dataset. As usual, we find an estimate $\hat\theta$ which minimizes $-\Lambda(\theta)$. This requires numerical optimization and that is expedited by supplying good starting values for $\theta$. The following R code does the work for a Normal distribution:
sigma <- sqrt(sigma2) # Crude starting estimate for the SD
likelihood.log <- function(theta, counts, bin.lower, bin.upper) {
mu <- theta[1]; sigma <- theta[2]
-sum(sapply(1:length(counts), function(i) {
counts[i] *
log(pnorm(bin.upper[i], mu, sigma) - pnorm(bin.lower[i], mu, sigma))
}))
}
coefficients <- optim(c(mu, sigma), function(theta)
likelihood.log(theta, counts, bin.lower, bin.upper))$par
The resulting coefficients are $(\hat\mu, \hat\sigma) = (54.32, 7.33)$.
Remember, though, that for Normal distributions the maximum likelihood estimate of $\sigma$ (when the data are given exactly and not binned) is the population SD of the data, not the more conventional "bias corrected" estimate in which the variance is multiplied by $n/(n-1)$. Let us then (for comparison) correct the MLE of $\sigma$, finding $\sqrt{n/(n-1)} \hat\sigma = \sqrt{11/10}\times 7.33 = 7.69$. This compares favorably with the result of Sheppard's correction, which was $7.70$.
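The same maximum-likelihood fit can also be sketched in Python (NumPy/SciPy assumed), which should reproduce estimates close to $(\hat\mu, \hat\sigma) = (54.32, 7.33)$:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

counts = np.array([1, 2, 3, 4, 1])
lower = np.array([40.0, 45.0, 50.0, 55.0, 70.0])
upper = np.array([45.0, 50.0, 55.0, 60.0, 75.0])

# Negative log likelihood of the binned counts under Normal(mu, sigma):
# each bin contributes count * log(P(lower < X <= upper)).
def neg_log_lik(theta):
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    p = norm.cdf(upper, mu, sigma) - norm.cdf(lower, mu, sigma)
    return -np.sum(counts * np.log(p))

fit = minimize(neg_log_lik, x0=[54.3, 7.8], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
print(mu_hat, sigma_hat)   # near 54.32 and 7.33
```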
Verifying the Assumptions
To visualize these results we can plot the fitted Normal density over a histogram:
breaks <- sort(unique(c(bin.lower, bin.upper)))  # bin edges for the histogram
hist(unlist(mapply(function(x,y) rep(x,y), bin.mid, counts)),
     breaks = breaks, xlab="Values", main="Data and Normal Fit")
curve(dnorm(x, coefficients[1], coefficients[2]), 
      from=min(bin.lower), to=max(bin.upper), 
      add=TRUE, col="Blue", lwd=2)
To some this might not look like a good fit. However, because the dataset is small (only $11$ values), surprisingly large deviations between the distribution of the observations and the true underlying distribution can occur.
Let's more formally check the assumption (made by the MLE) that the data are governed by a Normal distribution. An approximate goodness of fit test can be obtained from a $\chi^2$ test: the estimated parameters indicate the expected amount of data in each bin; the $\chi^2$ statistic compares the observed counts to the expected counts. Here is a test in R:
breaks <- sort(unique(c(bin.lower, bin.upper)))
fit <- mapply(function(l, u) exp(-likelihood.log(coefficients, 1, l, u)),
c(-Inf, breaks), c(breaks, Inf))
observed <- sapply(breaks[-length(breaks)], function(x) sum((counts)[bin.lower <= x])) -
sapply(breaks[-1], function(x) sum((counts)[bin.upper < x]))
chisq.test(c(0, observed, 0), p=fit, simulate.p.value=TRUE)
The output is
Chi-squared test for given probabilities with simulated p-value (based on 2000 replicates)
data: c(0, observed, 0)
X-squared = 7.9581, df = NA, p-value = 0.2449
The software has performed a permutation test (which is needed because the test statistic does not follow a chi-squared distribution exactly: see my analysis at How to Understand Degrees of Freedom). Its p-value of $0.245$, which is not small, shows very little evidence of departure from normality: we have reason to trust the maximum likelihood results.
|
Standard deviation of binned observations
|
This reply presents two solutions: Sheppard's corrections and a maximum likelihood estimate. Both closely agree on an estimate of the standard deviation: $7.70$ for the first and $7.69$ for the secon
|
Standard deviation of binned observations
This reply presents two solutions: Sheppard's corrections and a maximum likelihood estimate. Both closely agree on an estimate of the standard deviation: $7.70$ for the first and $7.69$ for the second (when adjusted to be comparable to the usual "unbiased" estimator).
Sheppard's corrections
"Sheppard's corrections" are formulas that adjust moments computed from binned data (like these) where
the data are assumed to be governed by a distribution supported on a finite interval $[a,b]$
that interval is divided sequentially into equal bins of common width $h$ that is relatively small (no bin contains a large proportion of all the data)
the distribution has a continuous density function.
They are derived from the Euler-Maclaurin sum formula, which approximates integrals in terms of linear combinations of values of the integrand at regularly spaced points, and are therefore generally applicable (and not just to Normal distributions).
Although strictly speaking a Normal distribution is not supported on a finite interval, to an extremely close approximation it is. Essentially all its probability is contained within seven standard deviations of the mean. Therefore Sheppard's corrections are applicable to data assumed to come from a Normal distribution.
The first two Sheppard's corrections are
Use the mean of the binned data for the mean of the data (that is, no correction is needed for the mean).
Subtract $h^2/12$ from the variance of the binned data to obtain the (approximate) variance of the data.
Where does $h^2/12$ come from? This equals the variance of a uniform variate distributed over an interval of length $h$. Intuitively, then, Sheppard's correction for the second moment suggests that binning the data--effectively replacing them by the midpoint of each bin--appears to add an approximately uniformly distributed value ranging between $-h/2$ and $h/2$, whence it inflates the variance by $h^2/12$.
Let's do the calculations. I use R to illustrate them, beginning by specifying the counts and the bins:
counts <- c(1,2,3,4,1)
bin.lower <- c(40, 45, 50, 55, 70)
bin.upper <- c(45, 50, 55, 60, 75)
The proper formula to use for the counts comes from replicating the bin widths by the amounts given by the counts; that is, the binned data are equivalent to
42.5, 47.5, 47.5, 52.5, 52.5, 57.5, 57.5, 57.5, 57.5, 72.5
Their number, mean, and variance can be directly computed without having to expand the data in this way, though: when a bin has midpoint $x$ and a count of $k$, then its contribution to the sum of squares is $kx^2$. This leads to the second of the Wikipedia formulas cited in the question.
bin.mid <- (bin.upper + bin.lower)/2
n <- sum(counts)
mu <- sum(bin.mid * counts) / n
sigma2 <- (sum(bin.mid^2 * counts) - n * mu^2) / (n-1)
The mean (mu) is $1195/22 \approx 54.32$ (needing no correction) and the variance (sigma2) is $675/11 \approx 61.36$. (Its square root is $7.83$ as stated in the question.) Because the common bin width is $h=5$, we subtract $h^2/12 = 25/12 \approx 2.08$ from the variance and take its square root, obtaining $\sqrt{675/11 - 5^2/12} \approx 7.70$ for the standard deviation.
Maximum Likelihood Estimates
An alternative method is to apply a maximum likelihood estimate. When the assumed underlying distribution has a distribution function $F_\theta$ (depending on parameters $\theta$ to be estimated) and the bin $(x_0, x_1]$ contains $k$ values out of a set of independent, identically distributed values from $F_\theta$, then the (additive) contribution to the log likelihood of this bin is
$$\log \prod_{i=1}^k \left(F_\theta(x_1) - F_\theta(x_0)\right) =
k\log\left(F_\theta(x_1) - F_\theta(x_0)\right)$$
(see MLE/Likelihood of lognormally distributed interval).
Summing over all bins gives the log likelihood $\Lambda(\theta)$ for the dataset. As usual, we find an estimate $\hat\theta$ which minimizes $-\Lambda(\theta)$. This requires numerical optimization and that is expedited by supplying good starting values for $\theta$. The following R code does the work for a Normal distribution:
sigma <- sqrt(sigma2) # Crude starting estimate for the SD
likelihood.log <- function(theta, counts, bin.lower, bin.upper) {
mu <- theta[1]; sigma <- theta[2]
-sum(sapply(1:length(counts), function(i) {
counts[i] *
log(pnorm(bin.upper[i], mu, sigma) - pnorm(bin.lower[i], mu, sigma))
}))
}
coefficients <- optim(c(mu, sigma), function(theta)
likelihood.log(theta, counts, bin.lower, bin.upper))$par
The resulting coefficients are $(\hat\mu, \hat\sigma) = (54.32, 7.33)$.
Remember, though, that for Normal distributions the maximum likelihood estimate of $\sigma$ (when the data are given exactly and not binned) is the population SD of the data, not the more conventional "bias corrected" estimate in which the variance is multiplied by $n/(n-1)$. Let us then (for comparison) correct the MLE of $\sigma$, finding $\sqrt{n/(n-1)} \hat\sigma = \sqrt{11/10}\times 7.33 = 7.69$. This compares favorably with the result of Sheppard's correction, which was $7.70$.
Verifying the Assumptions
To visualize these results we can plot the fitted Normal density over a histogram:
hist(unlist(mapply(function(x,y) rep(x,y), bin.mid, counts)),
breaks = breaks, xlab="Values", main="Data and Normal Fit")
curve(dnorm(x, coefficients[1], coefficients[2]),
from=min(bin.lower), to=max(bin.upper),
add=TRUE, col="Blue", lwd=2)
To some this might not look like a good fit. However, because the dataset is small (only $11$ values), surprisingly large deviations between the distribution of the observations and the true underlying distribution can occur.
Let's more formally check the assumption (made by the MLE) that the data are governed by a Normal distribution. An approximate goodness of fit test can be obtained from a $\chi^2$ test: the estimated parameters indicate the expected amount of data in each bin; the $\chi^2$ statistic compares the observed counts to the expected counts. Here is a test in R:
breaks <- sort(unique(c(bin.lower, bin.upper)))
fit <- mapply(function(l, u) exp(-likelihood.log(coefficients, 1, l, u)),
c(-Inf, breaks), c(breaks, Inf))
observed <- sapply(breaks[-length(breaks)], function(x) sum((counts)[bin.lower <= x])) -
sapply(breaks[-1], function(x) sum((counts)[bin.upper < x]))
chisq.test(c(0, observed, 0), p=fit, simulate.p.value=TRUE)
The output is
Chi-squared test for given probabilities with simulated p-value (based on 2000 replicates)
data: c(0, observed, 0)
X-squared = 7.9581, df = NA, p-value = 0.2449
The software has performed a permutation test (which is needed because the test statistic does not follow a chi-squared distribution exactly: see my analysis at How to Understand Degrees of Freedom). Its p-value of $0.245$, which is not small, shows very little evidence of departure from normality: we have reason to trust the maximum likelihood results.
|
Standard deviation of binned observations
This reply presents two solutions: Sheppard's corrections and a maximum likelihood estimate. Both closely agree on an estimate of the standard deviation: $7.70$ for the first and $7.69$ for the secon
|
10,864
|
EM maximum likelihood estimation for Weibull distribution
|
I think the answer is yes, if I have understood the question correctly.
Write $z_i = x_i^k$. Then an EM algorithm type of iteration, starting with for example $\hat k = 1$, is
E step: ${\hat z}_i = x_i^{\hat k}$
M step: $\hat k = \frac{n}{\left[\sum({\hat z}_i - 1)\log x_i\right]}$
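As a quick numerical check (a sketch with made-up data, assuming the one-parameter Weibull density $f(x) = k\,x^{k-1}e^{-x^k}$): the M step above is exactly the fixed-point form of the shape score equation, so solving that equation directly must reproduce it.

```python
import math

# Hypothetical sample for the one-parameter Weibull f(x) = k x^(k-1) exp(-x^k)
x = [0.5, 1.2, 0.9, 1.8, 0.7, 1.1]
n = len(x)
logs = [math.log(xi) for xi in x]

def score(k):
    # Derivative of the log-likelihood: n/k + sum(log x) - sum(x^k log x)
    return n / k + sum(logs) - sum(xi ** k * li for xi, li in zip(x, logs))

# The score is strictly decreasing in k, so bisection finds the unique MLE
lo, hi = 1e-3, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if score(mid) > 0:
        lo = mid
    else:
        hi = mid
k_hat = 0.5 * (lo + hi)

# At the MLE, the M-step formula is a fixed point: k = n / sum((z_i - 1) log x_i)
z = [xi ** k_hat for xi in x]
m_step = n / sum((zi - 1) * li for zi, li in zip(z, logs))
print(k_hat, m_step)  # the two coincide
```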
This is a special case (the case with no censoring and no covariates) of the iteration suggested for Weibull proportional hazards models by Aitkin and Clayton (1980). It can also be found in Section 6.11 of Aitkin et al (1989).
Aitkin, M. and Clayton, D., 1980. The fitting of exponential, Weibull and extreme value distributions to complex censored survival data using GLIM. Applied Statistics, pp.156-163.
Aitkin, M., Anderson, D., Francis, B. and Hinde, J., 1989. Statistical Modelling in GLIM. Oxford University Press. New York.
|
10,865
|
EM maximum likelihood estimation for Weibull distribution
|
The Weibull MLE is solvable only numerically:
Let
$$
f_{\lambda,\beta}(x) = \begin{cases} \frac{\beta}{\lambda}\left(\frac{x}{\lambda}\right)^{\beta-1}e^{-\left(\frac{x}{\lambda}\right)^{\beta}} & ,\,x\geq0 \\ 0 &,\, x<0 \end{cases}
$$
with $\beta,\,\lambda>0$.
1) Likelihood function:
$$
\mathcal{L}_{\hat{x}}(\lambda, \beta)
=\prod_{i=1}^N f_{\lambda,\beta}(x_i)
=\prod_{i=1}^N \frac{\beta}{\lambda}\left(\frac{x_i}{\lambda}\right)^{\beta-1}e^{-\left(\frac{x_i}{\lambda}\right)^{\beta}}
= \frac{\beta^N}{\lambda^{N \beta}} e^{-\sum_{i=1}^N\left(\frac{x_i}{\lambda}\right)^{\beta}} \prod_{i=1}^N x_i^{\beta-1}
$$
Log-likelihood function:
$$
\ell_{\hat{x}}(\lambda, \beta):= \ln \mathcal{L}_{\hat{x}}(\lambda, \beta)=N\ln \beta-N\beta\ln \lambda-\sum_{i=1}^N \left(\frac{x_i}{\lambda}\right)^\beta+(\beta-1)\sum_{i=1}^N \ln x_i
$$
2) MLE-Problem:
\begin{equation*}
\begin{aligned}
& & \underset{(\lambda,\beta) \in \mathbb{R}^2}{\text{max}}\,\,\,\,\,\,
& \ell_{\hat{x}}(\lambda, \beta) \\
& & \text{s.t.} \,\,\, \lambda>0\\
& & \beta > 0
\end{aligned}
\end{equation*}
3) Maximization by $0$-gradients:
\begin{align*}
\frac{\partial l}{\partial \lambda}&=-N\beta\frac{1}{\lambda}+\beta\sum_{i=1}^N x_i^\beta\frac{1}{\lambda^{\beta+1}}&\stackrel{!}{=} 0\\
\frac{\partial l}{\partial \beta}&=\frac{N}{\beta}-N\ln\lambda-\sum_{i=1}^N \ln\left(\frac{x_i}{\lambda}\right)e^{\beta \ln\left(\frac{x_i}{\lambda}\right)}+\sum_{i=1}^N \ln x_i&\stackrel{!}{=}0
\end{align*}
It follows:
\begin{align*}
-N\beta\frac{1}{\lambda}+\beta\sum_{i=1}^N x_i^\beta\frac{1}{\lambda^{\beta+1}} &= 0\\\\
-\beta\frac{1}{\lambda}N
+\beta\frac{1}{\lambda}\sum_{i=1}^N x_i^\beta\frac{1}{\lambda^{\beta}} &= 0\\\\
-1+\frac{1}{N}\sum_{i=1}^N x_i^\beta\frac{1}{\lambda^{\beta}}&=0\\\\
\frac{1}{N}\sum_{i=1}^N x_i^\beta&=\lambda^\beta
\end{align*}
$$\Rightarrow\lambda^*=\left(\frac{1}{N}\sum_{i=1}^N x_i^{\beta^*}\right)^\frac{1}{\beta^*}$$
Plugging $\lambda^*$ into the second 0-gradient condition:
\begin{align*}
\Rightarrow \beta^*=\left[\frac{\sum_{i=1}^N x_i^{\beta^*}\ln x_i}{\sum_{i=1}^N x_i^{\beta^*}}-\overline{\ln x}\right]^{-1}
\end{align*}
This equation is solvable only numerically, e.g. with the Newton-Raphson algorithm. The solution $\beta^*$ can then be plugged into $\lambda^*$ to complete the ML estimator for the Weibull distribution.
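A minimal numerical sketch (made-up data; bisection instead of Newton-Raphson, which works because the profiled condition for $\beta$ is monotone):

```python
import math

# Hypothetical sample data
x = [0.8, 1.3, 2.1, 0.9, 1.7, 1.1, 2.4, 1.5]
n = len(x)
logs = [math.log(xi) for xi in x]
mean_log = sum(logs) / n

def g(beta):
    # Stationarity condition for beta after profiling out lambda:
    # sum(x^b log x) / sum(x^b) - mean(log x) - 1/beta = 0
    xb = [xi ** beta for xi in x]
    return sum(w * l for w, l in zip(xb, logs)) / sum(xb) - mean_log - 1 / beta

# g is strictly increasing in beta, so bisection finds the unique root beta*
lo, hi = 1e-3, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
lam = (sum(xi ** beta for xi in x) / n) ** (1 / beta)  # lambda* from the first condition
print(beta, lam)
```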
|
10,866
|
EM maximum likelihood estimation for Weibull distribution
|
Though this is an old question, it looks like there is an answer in a paper published here: http://home.iitk.ac.in/~kundu/interval-censoring-REVISED-2.pdf
In this work the analysis of interval-censored data, with Weibull distribution as
the underlying lifetime distribution has been considered. It is assumed that censoring
mechanism is independent and non-informative. As expected, the maximum likelihood
estimators cannot be obtained in closed form. In our simulation experiments it is
observed that the Newton-Raphson method may not converge many times. An expectation
maximization algorithm has been suggested to compute the maximum likelihood
estimators, and it converges almost all the times.
|
10,867
|
EM maximum likelihood estimation for Weibull distribution
|
In this case the MLE and EM estimators are equivalent, since the MLE estimator is actually just a special case of the EM estimator. (I am assuming a frequentist framework in my answer; this isn't true for EM in a Bayesian context in which we're talking about MAPs.) Since there is no missing data (just an unknown parameter), the E step simply returns the log likelihood, regardless of your choice of $k^{(t)}$. The M step then maximizes the log likelihood, yielding the MLE.
EM would be applicable, for example, if you had observed data from a mixture of two Weibull distributions with parameters $k_1$ and $k_2$, but you didn't know which of these two distributions each observation came from.
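To make that concrete, here is a deliberately simple sketch (hypothetical data; the two shapes $k_1, k_2$ are treated as known and EM estimates only the mixing weight, so the E and M steps stay one line each):

```python
import math
import random

def weib_pdf(x, k):
    # One-parameter Weibull density f(x) = k x^(k-1) exp(-x^k)
    return k * x ** (k - 1) * math.exp(-(x ** k))

# Toy data: a 30/70 mixture of two Weibulls with known shapes
k1, k2 = 0.8, 3.0
random.seed(0)
data = [random.weibullvariate(1.0, k1) if random.random() < 0.3
        else random.weibullvariate(1.0, k2) for _ in range(500)]

w = 0.5  # initial guess for the mixing weight
for _ in range(200):
    # E step: responsibility of component 1 for each observation
    r = [w * weib_pdf(xi, k1) /
         (w * weib_pdf(xi, k1) + (1 - w) * weib_pdf(xi, k2)) for xi in data]
    # M step: the weight is the average responsibility
    w = sum(r) / len(data)
print(w)  # roughly the true mixing weight 0.3
```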
|
10,868
|
Why LKJcorr is a good prior for correlation matrix?
|
The LKJ distribution is an extension of the work of H. Joe (1). Joe proposed a procedure to generate correlation matrices uniformly over the space of all positive definite correlation matrices. The contribution of (2) is that it extends Joe's work to show that there is a more efficient manner of generating such samples.
The parameterization commonly used in software such as Stan allows you to control how closely the sampled matrices resemble the identity matrix. This means you can move smoothly from sampling matrices that are all very nearly $I$ to matrices which are more-or-less uniform over PD matrices.
An alternative manner of sampling from correlation matrices, called the "onion" method, is found in (3). (No relation to the satirical news magazine -- probably.)
Another alternative is to sample from a Wishart distribution, whose draws are positive semi-definite, and then divide out the variances to leave a correlation matrix. Some downsides to the Wishart/Inverse-Wishart procedure are discussed in Downsides of inverse Wishart prior in hierarchical models.
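For concreteness, the Wishart route takes only a few lines of NumPy (the dimension and degrees of freedom below are arbitrary):

```python
import numpy as np

# Draw W ~ Wishart(df, I) as G^T G with G a (df x p) matrix of standard normals,
# then divide out the variances to obtain a correlation matrix.
rng = np.random.default_rng(1)
p, df = 4, 10                  # dimension and degrees of freedom (df >= p)
G = rng.standard_normal((df, p))
W = G.T @ G                    # Wishart-distributed "covariance" matrix
d = np.sqrt(np.diag(W))
R = W / np.outer(d, d)         # unit diagonal, positive definite
print(np.round(R, 2))
```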
(1) H. Joe. "Generating random correlation matrices based on partial correlations." Journal of Multivariate Analysis, 97 (2006), pp. 2177-2189
(2) Daniel Lewandowski, Dorota Kurowicka, Harry Joe. "Generating random correlation matrices based on vines and extended onion method." Journal of Multivariate Analysis, Volume 100, Issue 9, 2009, Pages 1989-2001
(3) S. Ghosh, S.G. Henderson. "Behavior of the norta method for correlated random vector generation as the dimension increases." ACM Transactions on Modeling and Computer Simulation (TOMACS), 13 (3) (2003), pp. 276-294
|
10,869
|
What are the classical notations in statistics, linear algebra and machine learning? And what are the connections between these notations?
|
Perhaps a related question is, "What are words used in different languages, and what are the connections between these words?"
Notation is in some sense like language:
Some words have region specific meanings; some words are broadly understood.
Like powerful nations spread their language, successful fields and influential researchers spread their notation.
Language evolves over time: language has a mix of historical origins and modern influence.
Your specific question...
I would disagree with your contention that the two follow "completely different notation." Both $X\boldsymbol{\beta} = \boldsymbol{y}$ and $A\mathbf{x} = \mathbf{b}$ use capital letters to denote matrices. They're not that different.
Machine learning is highly related to statistics, a large and mature field. Using $X$ to represent the data matrix is almost certainly the most readable, most standard convention to follow. While $A\mathbf{x} = \mathbf{b}$ is standard for solving linear systems, that's not how people doing statistics write the normal equations. You'll find your audience more confused if you try to do that. When in Rome...
In some sense, the heart of your revised question is, "What are the historical origins of statistics using the letter $x$ to represent data and the letter $\beta$ to represent the unknown variable to solve for?"
This is a question for the statistical historians! Briefly searching, I see the influential British statistician and Cambridge academic Udny Yule used $x$ to represent data in his Introduction to the Theory of Statistics (1911). He wrote a regression equation as $x_1 = a + bx_2$, with the least squares objective as minimizing $\sum\left( x_1 - a - bx_2\right)^2$, and with solution $b_{12} = \frac{\sum x_1x_2}{\sum x_2^2}$. It at least goes back to then...
The even more influential R.A. Fisher used $y$ for the dependent variable and $x$ for the independent variable in his 1925 book Statistical Methods for Research Workers. (Hat tip to @Nick Cox for providing link with info.)
Good notation is like good language. Avoid field specific jargon whenever possible. Write in the math equivalent of high BBC English, language that is understandable to most anyone that speaks English. One should write, whenever possible, using notation that is clear and that is broadly understood.
|
10,870
|
Machine Learning to Predict Class Probabilities
|
SVM is closely related to logistic regression, and can be used to predict probabilities as well, based on the distance to the hyperplane (the score of each point). You do this by constructing a score-to-probability mapping in some way, which is relatively easy because the problem is one-dimensional. One way is to fit an S-curve (e.g. the logistic curve, or its slope) to the data. Another way is to use isotonic regression to fit a more general cumulative distribution function to the data.
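The isotonic option can be sketched with the classic pool-adjacent-violators algorithm (a bare-bones version with made-up data; in practice one would reach for a library implementation such as scikit-learn's IsotonicRegression):

```python
def isotonic_fit(scores, labels):
    """Pool-adjacent-violators: fit a nondecreasing map from classifier
    scores to probabilities (the core of isotonic calibration)."""
    pairs = sorted(zip(scores, labels))
    merged = []  # blocks of [weighted mean, weight]
    for _, y in pairs:
        merged.append([float(y), 1.0])
        # Merge backwards while the monotonicity constraint is violated
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, w2 = merged.pop()
            m1, w1 = merged.pop()
            merged.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    fitted = []  # expand block means back to one probability per point
    for mean, w in merged:
        fitted.extend([mean] * int(w))
    return [s for s, _ in pairs], fitted

xs, probs = isotonic_fit([0.1, 0.4, 0.35, 0.8, 0.9, 0.2], [0, 0, 1, 1, 1, 0])
```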
Other than SVM, you can use a suitable loss function for any method which you can fit using gradient-based methods, such as deep networks.
Predicting probabilities is not something taken into consideration these days when designing classifiers. It's an extra which distracts from the classification performance, so it's discarded. You can, however, use any binary classifier to learn a fixed set of classification probabilities (e.g. "p in [0, 1/4], or [1/4, 1/2], or ...") with the "probing" reduction of Langford and Zadrozny.
|
10,871
|
Machine Learning to Predict Class Probabilities
|
Another possibility is neural networks, if you use the cross-entropy as the cost function with sigmoidal output units. That will provide you with the estimates you are looking for.
Neural networks, as well as logistic regression, are discriminative classifiers, meaning that they attempt to maximize the conditional distribution on the training data. Asymptotically, in the limit of infinite samples, both estimates approach the same limit.
You will find a detailed analysis of this very question in this paper. The takeaway idea is that even though the generative model has a higher asymptotic error, it may approach this asymptotic error much faster than the discriminative model. Hence, which one to take depends on your problem, the data at hand and your particular requirements.
Lastly, treating the estimated conditional probabilities as an absolute score on which to base decisions (if that is what you are after) does not make much sense in general. What matters is to consider, for a concrete sample, the best candidate classes output by the classifier and compare the associated probabilities. If the difference between the two best scores is high, it means that the classifier is very confident about its answer (though not necessarily right).
|
10,872
|
Machine Learning to Predict Class Probabilities
|
There are many - and what works best depends on the data. There are also many ways to cheat - for example, you can perform probability calibration on the outputs of any classifier that gives some semblance of a score (i.e.: a dot product between the weight vector and the input). The most common example of this is called Platt's scaling.
There is also the matter of the shape of the underlying model. If you have polynomial interactions with your data, then vanilla logistic regression will not be able to model it well. But you could use a kerneled version of logistic regression so that the model fits the data better. This usually increases the "goodness" of the probability outputs since you are also improving the accuracy of the classifier.
Most models that do give probabilities use a logistic function, so it can be hard to compare; it just tends to work well in practice. Bayesian networks are an alternative. Naive Bayes makes too simplistic an assumption for its probabilities to be any good - and that is easily observed on any reasonably sized data set.
In the end, it's usually easier to increase the quality of your probability estimates by picking the model that can represent the data better. In this sense, it doesn't matter too much how you get the probabilities. If you can get 70% accuracy with logistic regression and 98% with an SVM - then just giving a "full confidence" probability alone will make your results "better" by most scoring methods, even though they aren't really probabilities (and then you can do the calibration I mentioned before, making them actually better).
The same question in the context of being unable to get an accurate classifier is more interesting, but I'm not sure anyone has studied or compared the methods in such a scenario.
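The probability calibration mentioned above can be done in a few lines with scikit-learn; here is a sketch (the dataset and hyperparameters are placeholders):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# LinearSVC only emits decision scores; Platt's scaling (method="sigmoid")
# fits a logistic curve to held-out scores to turn them into probabilities.
calibrated = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5)
calibrated.fit(X_tr, y_tr)
proba = calibrated.predict_proba(X_te)    # rows sum to 1
```

This is exactly the "cheat" described above: the SVM supplies the score, the sigmoid maps it to a probability.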
|
10,873
|
Does it make sense for a fixed effect to be nested within a random one, or how to code repeated measures in R (aov and lmer)?
|
In mixed models the treatment of factors as either fixed or random, particularly in conjunction with whether they are crossed, partially crossed or nested, can lead to a lot of confusion. Also, there appear to be differences in terminology between what is meant by nesting in the anova/designed-experiments world and the mixed/multilevel-models world.
I don't profess to know all the answers, and my answer won't be complete (and may produce further questions) but I will try to address some of the issues here:
Does it make sense for a fixed effect to be nested within a random one, or how to code repeated measures in R (aov and lmer)?
(the question title)
No, I don't believe this makes sense. When we are dealing with repeated measures, then usually whatever the thing is that the measures are repeated on will be random, let's just call it Subject, and in lme4 we will want to include Subject on the right side of one or more | in the random part of the formula. If we have other random effects, then these are either crossed, partially crossed or nested - and my answer to this question addresses that.
The issue with these anova-type designed experiments seems to be how to deal with factors that would normally be thought of as fixed, in a repeated measures situation, and the questions in the body of the OP speak to this:
Why Error(subject/A) and not Error(subject)?
I do not usually use aov() so I could be missing something but, for me the Error(subject/A) is very misleading in the case of the linked question. Error(subject) in fact leads to exactly the same results.
Is it (1|subject) or (1|subject)+(1|A:subject) or simply (1|A:subject)?
This relates to this question. In this case, all the following random effects formulations lead to exactly the same result:
(1|subject)
(1|A:subject)
(1|subject) + (1|A:subject)
(1|subject) + (1|A:subject) + (1|B:subject)
However, this is because the simulated dataset in the question has no variation within anything; it is just created with Y = rnorm(48). If we take a real dataset such as the cake dataset in the lme4 package, we find that this will not generally be the case. From the documentation, here is the experimental setup:
Data on the breakage angle of chocolate cakes made with three different recipes and baked at six different temperatures. This is a split-plot design with the recipes being whole-units and the different temperatures being applied to sub-units (within replicates). The experimental notes suggest that
the replicate numbering represents temporal ordering.
A data frame with 270 observations on the following 5 variables.
replicate a factor with levels 1 to 15
recipe a factor with levels A, B and C
temperature an ordered factor with levels 175 < 185 < 195 < 205 < 215 < 225
temp numeric value of the baking temperature (degrees F).
angle a numeric vector giving the angle at which the cake broke.
So, we have repeated measures within replicate, and we are also interested in the fixed factors recipe and temperature (we can ignore temp since this is just a different coding of temperature), and we can visualise the situation using xtabs:
> xtabs(~recipe+replicate,data=cake)
replicate
recipe 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
A 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
B 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
C 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
If recipe were a random effect we would say that these are crossed random effects. In no way does recipe A belong to replicate 1 or any other replicate.
> xtabs(~temp+replicate,data=cake)
replicate
temp 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
175 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
185 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
195 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
205 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
215 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
225 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
Similarly for temp.
So the first model we might fit is:
> lmm1 <- lmer(angle ~ recipe * temperature + (1|replicate), cake, REML= FALSE)
This will treat each replicate as the only source of random variation (other than the residual, of course). But there could be random differences between recipes. We might be tempted to include recipe as another (crossed) random effect, but that would be ill-advised because we have only 3 levels of recipe, so we can't expect the model to estimate the variance component well. Instead we can use replicate:recipe as the grouping variable, which will enable us to treat each combination of replicate and recipe as a separate grouping factor. So whereas with the above model we would have 15 random intercepts for the levels of replicate, we will now have 45 random intercepts, one for each of the separate combinations:
lmm3 <- lmer(angle ~ recipe * temperature + (1|replicate:recipe) , cake, REML= FALSE)
Note that we now have (very slightly) different results indicating that there is some random variability due to recipe, but not a great deal.
We could likewise do the same thing with temperature.
Now, going back to your question, you also ask
Why (1|subject) + (1|A:subject) and not (1|subject) + (0+A|subject) or even simply (A|subject)?
I'm not entirely sure where this (using random slopes) comes from - it doesn't seem to arise in the 2 linked questions - but my problem with (1|subject) + (1|A:subject) is that this is exactly the same as (1|subject/A), which means that A is nested within subject, which in turn means (to me) that each level of A occurs in one and only one level of subject, which clearly is not the case here.
I will probably add to and/or edit this answer after I've thought about it some more, but I wanted to get my initial thoughts down.
|
10,874
|
Does it make sense for a fixed effect to be nested within a random one, or how to code repeated measures in R (aov and lmer)?
|
Ooooops. Alert commenters have spotted that my post was full of nonsense. I was confusing nested designs and repeated measures designs.
This site gives a useful breakdown of the difference between nested and repeated measures designs. Interestingly, the author shows expected mean squares for fixed within fixed, random within fixed and random within random -- but not fixed within random. It's hard to imagine what that would mean - if the levels of factor A are chosen at random, then randomness now governs the selection of the levels of factor B. If 5 schools are chosen at random from a school board, and then 3 teachers are chosen from each school (teachers nested in schools), the levels of the "teacher" factor are now a random selection of teachers from the school board by virtue of the random selection of schools. I can't "fix" the teachers I will have in the experiment.
|
10,875
|
How to use scikit-learn's cross validation functions on multi-label classifiers
|
Stratified sampling means that the class membership distribution is preserved in your KFold sampling. This doesn't make a lot of sense in the multilabel case where your target vector might have more than one label per observation.
There are two possible interpretations of stratified in this sense.
For $n$ labels where at least one of them is filled, that gives you $2^n - 1$ unique labelsets. You could perform stratified sampling on each of the unique labelset bins.
The other option is to try and segment the training data s.t. the probability mass of the distribution of the label vectors is approximately the same over the folds. E.g.
import numpy as np

np.random.seed(1)
y = np.random.randint(0, 2, (5000, 5))
y = y[np.where(y.sum(axis=1) != 0)[0]]

def proba_mass_split(y, folds=7):
    obs, classes = y.shape
    dist = y.sum(axis=0).astype('float')
    dist /= dist.sum()
    index_list = []
    fold_dist = np.zeros((folds, classes), dtype='float')
    for _ in range(folds):
        index_list.append([])
    for i in range(obs):
        if i < folds:
            target_fold = i
        else:
            normed_folds = fold_dist.T / fold_dist.sum(axis=1)
            how_off = normed_folds.T - dist
            target_fold = np.argmin(np.dot((y[i] - .5).reshape(1, -1), how_off.T))
        fold_dist[target_fold] += y[i]
        index_list[target_fold].append(i)
    print("Fold distributions are")
    print(fold_dist)
    return index_list

if __name__ == '__main__':
    proba_mass_split(y)
To get the normal (training, testing) index pairs that KFold produces, you want to rewrite that so it returns the np.setdiff1d of each fold's indices with np.arange(y.shape[0]), then wrap that in a class with an __iter__ method.
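That rewrite might look roughly like this (the class and names are mine, just a sketch):

```python
import numpy as np

class ProbaMassKFold:
    """KFold-style iterator over the index lists returned by a splitter
    such as proba_mass_split above: each fold in turn becomes the test
    set, and its complement becomes the training set."""

    def __init__(self, index_lists, n_obs):
        self.index_lists = [np.asarray(ix) for ix in index_lists]
        self.n_obs = n_obs

    def __iter__(self):
        everything = np.arange(self.n_obs)
        for test_idx in self.index_lists:
            yield np.setdiff1d(everything, test_idx), test_idx
```

This mirrors scikit-learn's KFold iteration protocol, so it can be dropped into code that loops over (train, test) splits.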
|
10,876
|
How to use scikit-learn's cross validation functions on multi-label classifiers
|
You might want to check: On the stratification of multi-label data.
Here the authors first describe the simple idea of sampling from unique labelsets and then introduce a new approach, iterative stratification, for multi-label datasets.
The approach of iterative stratification is greedy.
For a quick overview, here is what the iterative stratification does:
First they find out how many examples should go into each of the k-folds.
Find the desired number of examples per fold $i$ per label $j$, $c_i^j$ .
From the examples which are yet to be distributed into the k folds, the label $l$ with the fewest remaining examples, $D^l$, is identified.
Then for each datapoint in $D^l$, find the fold $k$ for which $c_k^l$ is maximized (breaking ties as described in the paper). In other words: find the fold with the maximum remaining demand for label $l$, i.e. the fold most imbalanced with respect to label $l$.
Add the current datapoint to the fold $k$ found in the above step, remove the datapoint from the original dataset, adjust the counts $c$, and continue until all the datapoints are distributed into folds.
The main idea is to first focus on the labels which are rare, this idea comes from the hypothesis that
"if rare labels are not examined in priority, then they may be
distributed in an undesired way, and this cannot be repaired
subsequently"
To understand how ties are broken and other details, I recommend reading the paper. Also, from the experiments section, what I understand is that depending on the labelsets/examples ratio one might want to use the unique-labelset-based method or this proposed iterative stratification method. For lower values of this ratio, the label distributions across the folds are close, or in a few cases better, with iterative stratification. For higher values of this ratio, iterative stratification is shown to maintain better distributions in the folds.
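The steps above can be sketched roughly as follows (my own simplified reading of the algorithm, not the authors' code; tie-breaking here is by fold size and then at random, which only approximates the paper's procedure):

```python
import numpy as np

def iterative_stratification(y, k=3, seed=0):
    """Greedy iterative stratification of a binary label matrix y (n x L):
    repeatedly take the rarest remaining label and hand its examples to
    the fold with the greatest remaining demand for that label."""
    rng = np.random.default_rng(seed)
    n, _ = y.shape
    remaining = set(range(n))
    fold_sizes = np.full(k, n / k)                           # step 1: examples per fold
    desired = np.full(k, 1.0 / k)[:, None] * y.sum(axis=0)   # step 2: c_i^j
    folds = [[] for _ in range(k)]

    while remaining:
        idx = np.fromiter(remaining, dtype=int)
        counts = y[idx].sum(axis=0).astype(float)
        counts[counts == 0] = np.inf                 # ignore exhausted labels
        if np.isinf(counts).all():                   # only label-free rows left:
            for i in idx:                            # balance fold sizes
                f = int(np.argmax(fold_sizes))
                folds[f].append(int(i))
                fold_sizes[f] -= 1
            break
        label = int(np.argmin(counts))               # step 3: rarest label
        for i in idx[y[idx, label] == 1]:
            demand = desired[:, label]               # step 4: fold with max demand
            best = np.flatnonzero(demand == demand.max())
            if len(best) > 1:                        # tie-break: fold size, then random
                best = best[fold_sizes[best] == fold_sizes[best].max()]
            f = int(rng.choice(best))
            folds[f].append(int(i))                  # step 5: assign and update counts
            desired[f] -= y[i]
            fold_sizes[f] -= 1
            remaining.discard(int(i))
    return folds
```

The rare-label-first ordering is the point of the method: frequent labels are flexible enough to be balanced later, rare ones are not.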
|
10,877
|
What is the use of the line produced by qqline() in R?
|
As you can see in the picture,
obtained by
> y <- rnorm(2000)*4-4
> qqnorm(y); qqline(y, col = 2,lwd=2,lty=2)
the diagonal would not make sense because the first axis is scaled in terms of the theoretical quantiles of a $\mathcal{N}(0,1)$ distribution. I think using the first and third quartiles to set the line gives a robust approach for estimating the parameters of the normal distribution, when compared with using the empirical mean and variance, say. Departures from the line (except in the tails) are indicative of a lack of normality.
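The quartile-based line qqline() draws is easy to reproduce outside R; here is a sketch in Python/NumPy of the same fit (the function name is mine):

```python
import numpy as np
from scipy.stats import norm

def qqline_params(y):
    """Slope and intercept of the line through the first- and third-quartile
    points of a normal Q-Q plot, mimicking what R's qqline() draws."""
    q1, q3 = np.quantile(y, [0.25, 0.75])   # sample quartiles
    z1, z3 = norm.ppf([0.25, 0.75])         # N(0,1) theoretical quartiles
    slope = (q3 - q1) / (z3 - z1)           # robust scale estimate
    intercept = q1 - slope * z1             # robust location estimate
    return slope, intercept

y = np.random.default_rng(0).normal(size=2000) * 4 - 4   # same y as the R example
slope, intercept = qqline_params(y)
# slope should land near 4 (the sd) and intercept near -4 (the mean)
```

Because quartiles are insensitive to the tails, this estimate is much more robust to outliers than the sample mean and standard deviation would be.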
|
10,878
|
How does a causal tree optimize for heterogenous treatment effects?
|
Your understanding is correct; the core notion of the paper is that sample-splitting is essential for empirical work and that it allows us to have an unbiased estimate of the treatment effect.
To tackle your main question: The criteria of choice are $\hat{EMSE}_\tau$ and $\hat{EMSE}_\mu$. Both penalise variance as well as encourage heterogeneity. For starters, I will focus on the estimated expected MSE of the treatment effect $\hat{EMSE}_\tau$. For a given tree/partition $\Pi$ when using a training sample $\mathcal{S}^{tr}$ and an estimation sample of size $N^{est}$, the estimator for the otherwise "infeasible criterion" $-\hat{EMSE}_\tau ( \mathcal{S}^{tr},N^{est},\Pi)$ is by definition the variance of the estimated treatment effect across leaves (the term denoted as: $\frac{1}{N^{tr}} \Sigma_{i \in \mathcal{S}^{tr}} \hat{\tau}^2 (X_i; \mathcal{S}^{tr}, \Pi)$) minus the uncertainty about these treatment effects (the variance estimator terms $S^2_{S^{tr}_{treat}}$ and $S^2_{S^{tr}_{control}}$, which are also inversely proportional to the sample sizes $N^{tr}$ and $N^{est}$). Therefore the goodness of fit is not a "vanilla" MSE but rather a variance-penalised one. The stronger the heterogeneity in our estimate, the better our $EMSE_\tau$; similarly, the higher the variance of our estimates, the worse our $EMSE_\tau$. Note also that the estimated average causal effect $\hat{\tau}(x; \mathcal{S}, \Pi)$ is equal to $\hat{\mu}(1,x; \mathcal{S}, \Pi ) - \hat{\mu}(0,x; \mathcal{S}, \Pi )$
i.e. we will reward the heterogeneity indirectly during the estimation of $\hat{\mu}$ too.
More generally, the basic idea with sample-splitting is that we are getting our estimates for a tree by using a separate sample from the sample that was used to construct the tree (i.e. a partition of the existing sample space $\mathcal{S}$) and thus we can focus mostly on the variance rather than on the bias-variance trade-off. This is the gist of the section Honest Splitting where we can see that the criteria of choice will penalise small leaf size exactly because they will be associated with high variance $S^2$ of the estimated effects.
In conclusion, the task of making a RF consistent is attacked from two sides:
The sample is split into training and estimation sets.
The criterion for splitting is such that the tree leaves are "big".
As mentioned throughout the paper, this will induce a hit in terms of MSE of the treatment effects, but that comes with an increase in the nominal coverage of their confidence intervals.
I think Prof. Athey's quote from her 2016 presentation on Solving Heterogeneous Estimating Equations Using Forest Based Algorithms (21:25 to 22:02) captures the essence of this work nicely: "... people have said, if you're going to do hypothesis testing on treatment effects within leaves, shouldn't your objective function somehow anticipate you wanted to construct a confidence interval. (...) So we basically, instead of doing nearest neighbors like this" (using an adaptive $k$-NN estimator), "we're going to have tree based neighborhoods that basically slice up the covariate space according to where we see heterogeneity in the tree building sample. And then in the estimation sample, we'll come back and estimate treatment effects in that partition."
|
How does a causal tree optimize for heterogenous treatment effects?
|
10,879
|
When to use Poisson vs. geometric vs. negative binomial GLMs for count data?
|
Both the Poisson distribution and the geometric distribution are special cases of the negative binomial (NB) distribution. One common notation is that the variance of the NB is $\mu + 1/\theta \cdot \mu^2$ where $\mu$ is the expectation and $\theta$ is responsible for the amount of (over-)dispersion. Sometimes $\alpha = 1/\theta$ is also used. The Poisson model has $\theta = \infty$, i.e., equidispersion, and the geometric has $\theta = 1$.
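To see the nesting concretely, here is a small Python check; `nb2_pmf` is my own illustrative implementation of the NB pmf in the $(\mu, \theta)$ parameterisation used above, not a library function:

```python
import math

def nb2_pmf(y, mu, theta):
    """NB2 pmf with mean mu and variance mu + mu**2 / theta."""
    log_p = (math.lgamma(y + theta) - math.lgamma(theta) - math.lgamma(y + 1)
             + theta * math.log(theta / (theta + mu))
             + y * math.log(mu / (theta + mu)))
    return math.exp(log_p)

mu = 2.5
# theta = 1 recovers the geometric pmf with p = 1/(1 + mu)
p = 1.0 / (1.0 + mu)
assert all(abs(nb2_pmf(y, mu, 1.0) - p * (1 - p) ** y) < 1e-12 for y in range(8))
# theta -> infinity approaches the Poisson pmf (equidispersion)
poisson_3 = math.exp(-mu) * mu ** 3 / math.factorial(3)
assert abs(nb2_pmf(3, mu, 1e7) - poisson_3) < 1e-6
```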
So in case of doubt between these three models, I would recommend estimating the NB: the worst case is that you lose a little bit of efficiency by estimating one parameter too many. But, of course, there are also formal tests for assessing whether a certain value of $\theta$ (e.g., 1 or $\infty$) is sufficient, or you can use information criteria, etc.
Of course, there are also loads of other single- or multi-parameter count data distributions (including the compound Poisson you mentioned) which sometimes may or may not lead to significantly better fits.
As for excess zeros: The two standard strategies are to either use a zero-inflated count data distribution or a hurdle model consisting of a binary model for zero versus greater than zero plus a zero-truncated count data model. As you mention, excess zeros and overdispersion may be confounded, but often considerable overdispersion remains even after adjusting the model for excess zeros. Again, in case of doubt, I would recommend an NB-based zero-inflation or hurdle model by the same logic as above.
Disclaimer: This is a very brief and simple overview. When applying the models in practice, I would recommend consulting a textbook on the topic. Personally, I like the count data books by Winkelmann and the one by Cameron & Trivedi. But there are other good ones as well. For an R-based discussion you might also like our paper in JSS (http://www.jstatsoft.org/v27/i08/).
|
10,880
|
Why is the posterior distribution in Bayesian Inference often intractable?
|
Why can one not simply calculate the posterior distribution as the numerator of the right-hand side and then infer this normalization constant by requiring that the integral over the posterior distribution has to be 1?
This is precisely what is being done. The posterior distribution is
$$P(\theta|D) = \dfrac{P(D|\theta) \, P(\theta)}{P(D)}. $$
The numerator on the right hand side is $P(D|\theta)P(\theta)$. This is a function over $\theta$ and to be a probability distribution, it has to integrate to 1. Thus we need to find the constant $c$, such that
\begin{align*}
&\int_{\theta} cP(D|\theta) \, P(\theta)\, d\theta = 1\\
\Rightarrow & \int_{\theta} cP(D, \theta) \, d\theta = 1\\
\Rightarrow & cP(D) = 1\\
\Rightarrow& c = \dfrac{1}{P(D)}.
\end{align*}
Thus, the normalizing constant is $P(D)$, which is often intractable or overly complicated.
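For a one-dimensional $\theta$ this constant can of course be computed numerically, which is exactly why the problem only bites in higher dimensions or with awkward likelihoods. A quick sketch (the normal prior and likelihood are chosen arbitrarily for illustration):

```python
import math

def prior(t):       # N(0, 1) prior, illustrative
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def likelihood(t):  # N(3, 1) likelihood as a function of t, illustrative
    return math.exp(-(t - 3) ** 2 / 2) / math.sqrt(2 * math.pi)

# P(D) = integral of likelihood * prior over theta (trapezoid rule)
h = 0.001
grid = [i * h for i in range(-10000, 10001)]
vals = [likelihood(t) * prior(t) for t in grid]
p_d = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# dividing by P(D) makes the posterior integrate to 1
posterior = [v / p_d for v in vals]
mass = h * sum(posterior)
assert abs(mass - 1.0) < 1e-6
```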
|
10,881
|
Why is the posterior distribution in Bayesian Inference often intractable?
|
I had the same question. This great post explains it really well.
In a nutshell: it is intractable because the denominator has to evaluate the probability for ALL possible values of 𝜃, and in most interesting cases ALL is a large amount, whereas the numerator is evaluated only for a single realization of 𝜃.
See Eqs. 4-8 in the post.
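The "ALL is a large amount" point is easy to quantify: a brute-force grid with $K$ points per dimension needs $K^d$ likelihood evaluations for a $d$-dimensional 𝜃 (the numbers below are just illustrative):

```python
K = 100  # grid points per parameter dimension
# evaluating P(D) by brute force: K**d likelihood evaluations
costs = {d: K ** d for d in (1, 2, 5, 10)}
assert costs[1] == 100          # trivial in one dimension
assert costs[10] == 10 ** 20    # hopeless for ten parameters
```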
|
10,882
|
Different probability density transformations due to Jacobian factor
|
I suggest you reading the solution of Question 1.4 which provides a good intuition.
In a nutshell, if you have an arbitrary function $f(x)$ and two variables $x$ and $y$ which are related to each other by the function $x = g(y)$, then you can find the maximum of the function either by directly analyzing $f(x)$: $\hat{x} = \operatorname{argmax}_x f(x)$, or via the transformed function $f(g(y))$: $\hat{y} = \operatorname{argmax}_y f(g(y))$. Not surprisingly, $\hat{x}$ and $\hat{y}$ will be related to each other as $\hat{x} = g(\hat{y})$ (here I assumed that $\forall y: g^\prime(y)\neq 0$).
This is not the case for probability distributions. If you have a probability density $p_x(x)$ and two random variables which are related to each other by $x=g(y)$, then there is in general no such direct relation between $\hat{x} = \operatorname{argmax}_x p_x(x)$ and $\hat{y}=\operatorname{argmax}_y p_y(y)$. This happens because of the Jacobian factor, a factor that captures how the volume is changed by a function such as $g(\cdot)$.
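A standard numerical illustration of this (my own, not from the book): take $Y \sim N(0,1)$ and $X = g(Y) = e^Y$. The mode of $p_y$ is $y=0$, so $g(\hat{y}) = 1$, yet the Jacobian factor $1/x$ shifts the mode of $p_x$ to $e^{-1}$:

```python
import math

def p_y(y):                      # standard normal density
    return math.exp(-y * y / 2) / math.sqrt(2 * math.pi)

def p_x(x):                      # change of variables: p_y(log x) * |dy/dx|
    return p_y(math.log(x)) / x  # Jacobian factor 1/x

# locate the mode of p_x on a fine grid
xs = [i / 10000 for i in range(1, 50000)]
x_mode = max(xs, key=p_x)
assert abs(x_mode - math.exp(-1)) < 1e-3   # mode of p_x is e^-1 ~ 0.368
assert abs(x_mode - 1.0) > 0.5             # ... not g(mode of p_y) = 1
```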
|
10,883
|
Bayesian thinking about overfitting
|
I might start by saying that a Bayesian model cannot systematically overfit (or underfit) data that are drawn from the prior predictive distribution, which is the basis for a procedure to validate that Bayesian software is working correctly before it is applied to data collected from the world.
But it can overfit a single dataset drawn from the prior predictive distribution or a single dataset collected from the world in the sense that the various predictive measures applied to the data that you conditioned on look better than those same predictive measures applied to future data that are generated by the same process. Chapter 6 of Richard McElreath's Bayesian book is devoted to overfitting.
The severity and frequency of overfitting can be lessened by good priors, particularly those that are informative about the scale of an effect. By putting vanishing prior probability on implausibly large values, you discourage the posterior distribution from getting overly excited by some idiosyncratic aspect of the data that you condition on that may suggest an implausibly large effect.
The best ways of detecting overfitting involve leave-one-out cross-validation, which can be approximated from a posterior distribution that does not actually leave any observations out of the conditioning set. There is an assumption that no individual "observation" [*] that you condition on has an overly large effect on the posterior distribution, but that assumption is checkable by evaluating the size of the estimate of the shape parameter in a Generalized Pareto distribution that is fit to the importance sampling weights (which are derived from the log-likelihood of an observation evaluated over every draw from the posterior distribution). If this assumption is satisfied, then you can obtain predictive measures for each observation that are as if that observation had been omitted, the posterior had been drawn conditional on the remaining observations, and the posterior predictive distribution had been constructed for the omitted observation. If your predictions of left-out observations suffer, then your model was overfitting to begin with. These ideas are implemented in the loo package for R, which includes the relevant citations.
As far as distilling to a single number goes, I like to calculate the proportion of observations that fall within 50% predictive intervals. To the extent that this proportion is greater than one half, the model is overfitting, although you need more than a handful of observations in order to cut through the noise in the inclusion indicator function. For comparing different models (that may overfit), the expected log predictive density (which is calculated by the loo function in the loo package) is a good measure (proposed by I.J. Good) because it takes into account the possibility that a more flexible model may fit the available data better than a less flexible model but is expected to predict future data worse. But these ideas can be applied to the expectation of any predictive measure (that may be more intuitive to practitioners); see the E_loo function in the loo package.
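The 50% interval check can be sketched with a toy simulation (this is illustrative only, not the loo package): for a calibrated model, roughly half of the observations should fall inside their 50% posterior predictive intervals.

```python
import random

random.seed(1)
z50 = 0.6745  # 25th/75th percentiles of a standard normal are -/+ z50

# toy check: data truly ~ N(0, 1) and the model's 50% predictive
# interval for every observation is [-z50, z50]; a calibrated model
# should cover about half of the observations -- noticeably more than
# half would suggest overfitting
n = 20000
covered = sum(abs(random.gauss(0, 1)) < z50 for _ in range(n)) / n
assert 0.47 < covered < 0.53
```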
[*] You do have to choose what constitutes an observation in a hierarchical model. For example, are you interested in predicting a new patient or a new time point for an existing patient? You can do it either way, but the former requires that you (re)write the likelihood function to integrate out the patient-specific parameters.
|
10,884
|
Bayesian thinking about overfitting
|
Overfitting means the model works well on the training set but performs poorly on the test set. IMHO, it comes from two sources: the data and the model we use (or our subjectivity).
Data is probably the more important factor. With whatever models/approaches we use, we implicitly assume that our data are representative enough, i.e. that what we obtain from our (training) data can also be generalized to the population. In practice this is often not the case. If the data are not iid, then standard $k$-fold CV makes no sense for avoiding overfitting.
As a result, if we are frequentist then the source of overfitting comes from MLE. If we are Bayesian then it comes from the (subjective) choice of prior distribution (and of course the choice of likelihood). So even if you use the posterior distribution/mean/median, you have already overfitted from the beginning and this overfitting is carried along. The proper choice of prior distribution and likelihood will help, but they are still models; you can never avoid overfitting completely.
|
10,885
|
Detecting patterns of cheating on a multi-question exam
|
Ad hoc approach
I'd assume that $\beta_i$ is reasonably reliable because it was estimated on many students, most of whom did not cheat on question $i$. For each student $j$, sort the questions in order of increasing difficulty, compute $\beta_i + q_j$ (note that $q_j$ is just a constant offset) and threshold it at some reasonable place (e.g. p(correct) < 0.6). This gives a set of questions which the student is unlikely to answer correctly. You can now use hypothesis testing to see whether this is violated, in which case the student probably cheated (assuming of course your model is correct). One caveat is that if there are few such questions, you might not have enough data for the test to be reliable. Also, I don't think it's possible to determine which question he cheated on, because he always has a 50% chance of guessing. But if you assume in addition that many students got access to (and cheated on) the same set of questions, you can compare these across students and see which questions got answered more often than chance.
You can do a similar trick with questions. I.e. for each question, sort students by $q_j$, add $\beta_i$ (this is now a constant offset) and threshold at probability 0.6. This gives you a list of students who shouldn't be able to answer this question correctly, i.e. each of them has at most a 60% chance of getting it right. Again, do hypothesis testing and see whether this is violated. This only works if most students cheated on the same set of questions (e.g. if a subset of questions 'leaked' before the exam).
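The ad hoc test can be sketched like this (toy numbers throughout; `p_correct`, the ability and easiness values are all hypothetical): pick the questions the student should fail, then ask how surprising a clean sweep of correct answers on them would be under the model:

```python
import math

def p_correct(beta_i, q_j):
    # item-response-style success probability (inverse logit)
    return 1.0 / (1.0 + math.exp(-(beta_i + q_j)))

q_j = -1.0                                  # a weak student
betas = [-2.0, -1.5, -1.0, 2.0, 2.5]        # question easiness parameters
hard = [b for b in betas if p_correct(b, q_j) < 0.6]

# suppose the student answered every "hard" question correctly;
# under the model (independent Bernoullis) that event has probability
p_all_correct = math.prod(p_correct(b, q_j) for b in hard)
assert len(hard) == 3
assert p_all_correct < 0.01   # very unlikely without cheating
```

With more hard questions one would use a proper binomial or exact test rather than this all-correct bound, but the logic is the same.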
Principled approach
For each student, there is a binary variable $c_j$ with a Bernoulli prior with some suitable probability, indicating whether the student is a cheater. For each question there is a binary variable $l_i$, again with some suitable Bernoulli prior, indicating whether the question was leaked. Then there is a set of binary variables $a_{ij}$, indicating whether student $j$ answered question $i$ correctly. If $c_j = 1$ and $l_i = 1$, then the distribution of $a_{ij}$ is Bernoulli with probability 0.99. Otherwise $a_{ij}$ is Bernoulli with probability $\operatorname{logit}^{-1}(\beta_i + q_j)$. These $a_{ij}$ are the observed variables. $c_j$ and $l_i$ are hidden and must be inferred. You can probably do it by Gibbs sampling. But other approaches might also be feasible, maybe something related to biclustering.
|
10,886
|
Detecting patterns of cheating on a multi-question exam
|
If you want to get into some more complex approaches, you might look at item response theory models. You could then model the difficulty of each question. Students who got difficult items correct while missing easier ones would, I think, be more likely to be cheating than those who did the reverse.
It's been more than a decade since I did this sort of thing, but I think it could be promising. For more detail, check out psychometrics books.
|
10,887
|
Restricted Boltzmann Machine : how is it used in machine learning?
|
It is possible to use RBMs to deal with typical problems that arise in data collection (that could be used for example to train a machine learning model).
Such problems include imbalanced data sets (in a classification problem), or datasets with missing values (the values of some features are unknown).
In the first case it is possible to train an RBM with data from the minority class and use it to generate examples for this class, while in the second case it is possible to train an RBM separately for each class and uncover unknown feature values.
Another typical application of RBMs is collaborative filtering (http://dl.acm.org/citation.cfm?id=1273596).
As far as popular libraries are concerned I think deeplearning4j is a good example (http://deeplearning4j.org).
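As a sketch of the oversampling use case, using scikit-learn's BernoulliRBM (the toy data and all hyperparameters below are made up for illustration):

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)

# Toy minority-class data: binary vectors with correlated bits (hypothetical).
X_min = (rng.random((50, 16)) < 0.5).astype(float)
X_min[:, :8] = X_min[:, :1]        # make the first 8 bits move together

rbm = BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=30, random_state=0)
rbm.fit(X_min)

# Start from real minority examples and run a few Gibbs steps to
# produce new, slightly perturbed samples for oversampling.
samples = X_min[rng.integers(0, 50, size=20)]
for _ in range(10):
    samples = rbm.gibbs(samples)   # one visible->hidden->visible step

print(samples.shape)               # 20 synthetic minority-class vectors
```

The same trained model could instead be used for imputation, by clamping the observed bits and Gibbs-sampling only the missing ones.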
|
10,888
|
Restricted Boltzmann Machine : how is it used in machine learning?
|
RBM was one of the first practical ways of training/learning a deep network, i.e. one with more than just one or two layers. The deep belief network was proposed by Geoffrey Hinton, who is considered one of the 'fathers' of deep learning, with Yann LeCun being the other main 'father', or that's how I see it. Of course, everything was already invented years ago by Jurgen Schmidhuber :-)
So, RBMs are famous because (1) they were one of the first ways of doing deep learning, and (2) Geoffrey Hinton.
However, in practice: they are certainly used, and usable, in academic research, since there are lots of people trying to find some unique niche they can be expert in, and being the worldwide expert in some niche of RBMs is as good a niche as any other. In industry, though, while I won't claim they're never used, they come up extremely rarely. There are simply so many very standard techniques that train really fast and easily, like logistic regression and feed-forward convolutional neural networks. For unsupervised learning, things like GANs are really popular at the moment.
|
10,889
|
Paradox in model selection (AIC, BIC, to explain or to predict?)
|
I will try to explain what's going on using some materials that I refer to below and what I have learned through personal correspondence with their author.
Above is an example where we are trying to infer a 3rd-degree polynomial plus noise. If you look at the bottom-left quadrant, you will see that on a cumulative basis AIC beats BIC over a 1000-sample horizon. However, you can also see that up to sample 100 the instantaneous risk of AIC is worse than that of BIC. This is due to the fact that AIC is a bad estimator for small samples (a suggested fix is AICc). The 0-100 region is what the "To Explain or To Predict" paper demonstrates, without a clear explanation of what's going on. Also, even though it is not clear from the picture, when the number of samples becomes large (the slopes become almost identical) the instantaneous risk of BIC outperforms AIC, because the true model is in the search space. However, at that point the ML estimates are so concentrated around their true values that the overfitting of AIC becomes irrelevant, as the extra model parameters are very close to 0. As you can see from the top-right quadrant, AIC identifies on average a polynomial degree of ~3.2 (over many simulation runs it sometimes identifies a degree of 3, sometimes 4). However, that extra parameter is minuscule, which makes AIC a no-brainer against BIC.
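To get a feel for this kind of experiment, here is a small sketch (not the code behind the figure; the data-generating cubic, the sample size, and all other settings are my own choices) that selects a polynomial degree by AIC and BIC:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_degree(x, y, penalty, max_deg=8):
    """Pick the polynomial degree minimizing n*log(RSS/n) + penalty*k."""
    n, best, best_ic = len(x), None, np.inf
    for d in range(1, max_deg + 1):
        coef = np.polyfit(x, y, d)
        rss = np.sum((np.polyval(coef, x) - y) ** 2)
        k = d + 2                    # d+1 coefficients plus the noise variance
        ic = n * np.log(rss / n) + penalty * k
        if ic < best_ic:
            best, best_ic = d, ic
    return best

n, reps = 100, 50
x = np.linspace(-2, 2, n)
aic_deg, bic_deg = [], []
for _ in range(reps):
    y = 1 - 2 * x + 0.5 * x**3 + rng.normal(0, 1, n)   # true degree is 3
    aic_deg.append(select_degree(x, y, penalty=2))           # AIC penalty
    bic_deg.append(select_degree(x, y, penalty=np.log(n)))   # BIC penalty

print(np.mean(aic_deg), np.mean(bic_deg))
```

Because AIC's penalty (2 per parameter) is smaller than BIC's (log n per parameter) for n > 7, the AIC-selected degree is never below the BIC-selected one on the same data.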
The story is not that simple however. There are several confusions in papers treating AIC and BIC. Two scenarios to be considered:
1) The model that is searched for is static/fixed, and we increase the number of samples and see what happens under different methodologies.
a) The true model is in search space. We covered this case above.
b) The true model is not in search space but can be approximated with the functional form we are using. In this case AIC is also superior.
http://homepages.cwi.nl/~pdg/presentations/RSShandout.pdf (page 9)
c) The true model is not in the search space and we are not even close to getting it right with an approximation. According to Prof. Grunwald, we don't know what's going on under this scenario.
2) The number of samples are fixed, and we vary the model to be searched for to understand the effects of model difficulty under different methodologies.
Prof. Grunwald provides the following example. The truth is, say, a distribution with a parameter $\theta = \sqrt{(\log n) / n}$ where n is the sample size; candidate model 1 is $\theta = 0$ and candidate model 2 is a distribution with a free parameter $\theta^*$. BIC always selects model 1, yet model 2 always predicts better, because the ML estimate is closer to $\theta$ than 0 is. As you can see, BIC is not finding the truth and is also predicting worse at the same time.
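This risk comparison can be checked with a quick simulation (my own sketch; it reproduces only the prediction-error part of the example, not the BIC selection step). For Gaussian data with unit variance, the ML estimate is the sample mean, distributed as $N(\theta, 1/n)$, so it can be sampled directly:

```python
import numpy as np

rng = np.random.default_rng(0)
results = {}
# Truth: X_1..X_n ~ N(theta, 1) with theta = sqrt(log(n)/n).
# Model 1 fixes theta = 0; model 2 uses the ML estimate (sample mean).
for n in (100, 1000, 10000):
    theta = np.sqrt(np.log(n) / n)
    mle = rng.normal(theta, 1.0 / np.sqrt(n), size=20000)
    risk1 = theta ** 2                    # squared error of predicting with 0
    risk2 = np.mean((mle - theta) ** 2)   # ~ 1/n, smaller than log(n)/n
    results[n] = (risk1, risk2)
    print(n, round(risk1, 5), round(risk2, 5))
```

The free-parameter model wins at every sample size, since $1/n < (\log n)/n$ for all $n \ge 3$.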
There is also the non-parametric case, but I don't have much information on that front.
My personal opinion is that all the information criteria are approximations and one should not expect a correct result in all cases. I also believe that the model that predicts best is also the model that explains best. This is because when people use the term "model" they don't include the values of the parameters, just their number. But if you think of it as a point hypothesis, then the information content of the disputed extra parameters is virtually zero. That's why I would always choose AIC over BIC, if I were left with only those options.
|
10,890
|
Paradox in model selection (AIC, BIC, to explain or to predict?)
|
They are not to be taken in the same context; points 1 and 2 have different contexts. For both AIC and BIC one first explores which combination of parameters, in which number, yields the best indices. (Some authors have epileptic fits when I use the word index in this context. Ignore them, or look up index in the dictionary.) In point 2, AIC is the richer model, where richer means selecting models with more parameters, but only sometimes, because frequently the optimum AIC model has the same number of parameters as the BIC selection. That is, if AIC and BIC select models having the SAME number of parameters, then the claim is that AIC will be better for prediction than BIC. However, the opposite could occur if BIC maxes out with a model of fewer parameters selected (but no guarantees). Sober (2002) concluded that AIC measures predictive accuracy while BIC measures goodness of fit, where predictive accuracy can mean predicting y outside of the extreme value range of x. When outside that range, frequently a less optimal AIC model, with weakly predictive parameters dropped, will better predict extrapolated values than the optimal AIC index from a model with more parameters. I note in passing that AIC and ML do not obviate the need for extrapolation error testing, which is a separate test for models. This can be done by withholding extreme values from the "training" set and computing the error between the extrapolated "post-training" model and the withheld data.
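The withholding procedure in the last sentence can be sketched as follows (the toy data with mild curvature and all settings are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with mild curvature (hypothetical).
x = np.linspace(0, 10, 60)
y = 1.0 + 0.8 * x + 0.05 * x**2 + rng.normal(0, 0.5, 60)

inner = (x > 1.5) & (x < 8.5)      # "training" set without the extreme x values
extrap_rmse = {}
for degree in (1, 2):
    coef = np.polyfit(x[inner], y[inner], degree)
    resid = np.polyval(coef, x[~inner]) - y[~inner]   # withheld extremes only
    extrap_rmse[degree] = np.sqrt(np.mean(resid ** 2))
    print(degree, round(extrap_rmse[degree], 3))
```

Here the linear fit looks fine in-sample but pays a visible bias penalty on the withheld extremes, which is exactly what this test is designed to expose.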
Now BIC is supposedly a lower-error predictor of y-values within the extreme values of the range of x. Improved goodness of fit often comes at the price of regression bias (for extrapolation), wherein the error is reduced by introducing that bias. This will, for example, often flatten the slope to split the sign of the average left-versus-right $f(x)-y$ residuals (think of more negative residuals on one side and more positive residuals on the other), thereby reducing total error. So in this case we are asking for the best y value given an x value, while for AIC we are more nearly asking for the best functional relationship between x and y. One difference between these is, for example, that BIC, other parameter choices being equal, will have a better correlation coefficient between model and data, and AIC will have better extrapolation error measured as y-value error for a given extrapolated x-value.
Point 3 is a "sometimes" statement that holds under some conditions:
when the data are very noisy (large $σ$);
when the true absolute values of the left-out parameters (in our example $β_2$) are small;
when the predictors are highly correlated; and
when the sample size is small or the range of the left-out variables is small.
In practice, a correct form of an equation does not mean that fitting with it will yield the correct parameter values because of noise, and the more noise the merrier. The same thing happens with R$^2$ versus adjusted R$^2$ and high collinearity. That is, sometimes when a parameter is added adjusted R$^2$ degrades while R$^2$ improves.
I would hasten to point out that these statements are optimistic. Typically, models are wrong, and often a better model will enforce a norm that cannot be used with AIC or BIC, or the wrong residual structure is assumed for their application, and alternative measures are needed. In my work, this is always the case.
|
10,891
|
Paradox in model selection (AIC, BIC, to explain or to predict?)
|
I read Shmueli's "To Explain or to Predict" (2010) for the first time a couple of years ago, and it was one of the most important readings for me; it resolved several longstanding doubts of mine.
It seems to me that the contradictions you notice are less serious than they appear. I will try to reply to your two questions together.
My main argument is that your point 3 does not come from p. 307 (where those details are) but from the beginning of the discussion, the bias-variance tradeoff argument (par. 1.5; in particular the end of p. 293). Your point 3 is the core message of the article. (See EDIT)
Your points 1 and 2 are related to the sub-argument about model selection. At this stage the most important practical difference between explanatory and predictive models does not yet appear: the analysis of predictive models must involve out-of-sample data, while for explanatory models this is not the case.
In the predictive framework, first we have model estimation, then model selection, which amounts to something like tuning the model's (hyper)parameters; at the end we have model evaluation on new data.
In the explanatory framework, model estimation/selection/evaluation are much less distinguishable. In this framework theoretical considerations seem to me much more important than the detailed distinction between BIC and AIC.
In Shmueli (2010) the concept of the true model is intended as a theoretical summary that carries substantial causal meaning. Causal inference is the goal. [For example you can read: “proper explanatory model selection is performed in a constrained manner … A researcher might choose to retain a causal covariate which has a strong theoretical justification even if is statistically insignificant.” p. 300]
Now, the role of the true model in the causal inference debate is of great interest to me and represents the core of several questions that I opened on this web community. For example you can read:
Regression and causality in econometrics
Structural equation and causal model in economics
Causality: Structural Causal Model and DAG
Today my guess is that the usual concept of the true model is too simplistic to carry out exhaustive causal inference. At best we can interpret it as a very particular type of Pearl’s Structural Causal Model.
I know that, under some conditions, the BIC method permits us to select the true model. However, the story behind this result sounds too weak to me for exhaustive causal inference.
Finally, the distinction between AIC and BIC seems to me not so important and, most importantly, it does not affect the main point of the article (your point 3).
EDIT:
To be clearer: the main message of the article is that explanation and prediction are different things. Prediction and explanation (causation) are different goals that involve different tools. Conflating them without understanding the difference is a big problem.
The bias-variance tradeoff is the main theoretical point that justifies the necessity of the distinction between prediction and explanation. In this sense your point 3 is the core of the article.
EDIT2
In my opinion, the fact here is that the problems addressed by this article are very wide and complex. So, more than usual, concepts like contradiction and/or paradox should be contextualized. To some readers who read your question but not the article, it can seem that the article, entirely or at least in large part, should be rejected until somebody resolves the contradiction. My point is that this is not the case.
Suffice it to say that the author could simply have skipped the model selection details and the core message would remain the same. In fact the core of the article is not about the best strategy for achieving a good prediction (or explanation) model, but to show that prediction and explanation are different goals that imply different methods. In this sense your points 1 and 2 are minor, and this fact resolves the contradiction (in the sense above).
On the other side remains the fact that AIC leads us to prefer long rather than short regressions, and this contradicts the argument your point 3 refers to. In this sense the paradox and/or contradiction remains.
Maybe the paradox comes from the fact that the argument behind point 3, the bias-variance trade-off, holds in finite samples; in small samples it can be substantial. With an infinitely large sample, the estimation error of the parameters disappears but any bias term does not, so the true model (in the empirical sense) becomes the best also in the sense of expected prediction error.
Now, the good prediction properties of AIC are achieved only asymptotically; in small samples it can select models that have too many parameters, and overfitting can appear. In cases like this it is hard to say precisely in what way the sample size matters.
However, in order to face the problem of small samples, a modified version of AIC was developed. See here: https://en.wikipedia.org/wiki/Akaike_information_criterion#Modification_for_small_sample_size
I did some calculations as examples; if these are free of mistakes:
for the case of 2 parameters (as in Shmueli's example), if we have fewer than 8 obs AIC penalizes more than BIC (as you say). If we have more than 8 but fewer than 14 obs, AICc penalizes more than BIC. With 14 or more obs, BIC is again the bigger penalizer;
for the case of 5 parameters, if we have fewer than 8 obs AIC penalizes more than BIC (as you say). If we have more than 8 but fewer than 19 obs, AICc penalizes more than BIC. With 19 or more obs, BIC is again the bigger penalizer;
for the case of 10 parameters, if we have fewer than 8 obs AIC penalizes more than BIC (as you say). If we have more than 8 but fewer than 28 obs, AICc penalizes more than BIC. With 28 or more obs, BIC is again the bigger penalizer.
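These crossover points can be checked mechanically from the penalty terms (AIC: $2k$; BIC: $k \log n$; AICc: $2k + 2k(k+1)/(n-k-1)$); a quick sketch:

```python
import math

def aic_pen(k):     return 2.0 * k
def bic_pen(k, n):  return k * math.log(n)
def aicc_pen(k, n): return 2.0 * k + 2.0 * k * (k + 1) / (n - k - 1)

# BIC out-penalizes plain AIC once log(n) > 2, i.e. from n = 8 onward.
crossover = {}
for k in (2, 5, 10):
    n = k + 2                      # smallest n at which AICc is finite
    while aicc_pen(k, n) > bic_pen(k, n):
        n += 1
    crossover[k] = n               # first n where BIC penalizes at least as much
print(crossover)                   # {2: 14, 5: 19, 10: 28}
```

Since the AICc penalty decreases in n while the BIC penalty increases, each crossover is unique.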
Finally, let me remark that if we stay very close to the author's words, we can read that she does not explicitly suggest using AIC for prediction and BIC for explanation (as reported in your point 1). She essentially says that in explanatory modeling theoretical considerations are relevant, and in prediction they are not. This is the core of the difference between these two kinds of model selection. AIC is then just presented as a “popular metric”, and its popularity comes from the idea behind it. We can read: “A popular predictive metric is the in-sample Akaike Information Criterion (AIC). Akaike derived the AIC from a predictive viewpoint, where the model is not intended to accurately infer the “true distribution,” but rather to predict future data as accurately as possible”.
|
Paradox in model selection (AIC, BIC, to explain or to predict?)
|
I read Shmueli's "To Explain or to Predict" (2010) a couple of years ago for the first time and it was one of the most important readings for me. Several great doubts come to solve after such reading.
|
Paradox in model selection (AIC, BIC, to explain or to predict?)
I read Shmueli's "To Explain or to Predict" (2010) a couple of years ago for the first time and it was one of the most important readings for me. Several great doubts come to solve after such reading.
It seems me that the contradictions you notice are less relevant that it seems to be. I try to reply to your two questions together.
My main argument is that your point 3 do not appear at pag 307 (here there are the detail) but at the begin of the discussion – bias-variance tradeoff argument (par 1.5; in particular end of pag 293). Your point 3 is the core message of the article. (See EDIT)
Your points 1 and 2 are related to the sub-argument of model selection. At this stage the main important practical difference between explanatory and predictive models do not appear. The analysis of the predictive models must involve out of sample data, in explanatory models it is not the case.
In predictive framework, firstly we have model estimation, then model selection that is something like evaluate the model (hyper)parameters tuning; at the end we have model evaluation on new data.
In explanatory framework, model estimation/selection/evaluation are much less distinguishable. In this framework theorethical consideration seems me much more important that the detailed distinction between BIC and AIC.
In Shmueli (2010) the concept of true model is intended as theoretical summary that imply substantial causal meaning. Causal inference is the goal. [For example you can read: “proper explanatory model selection is performed in a constrained manner … A researcher might choose to retain a causal covariate which has a strong theoretical justification even if is statistically insignificant.” Pag 300]
Now, the role of true model in causal inference debate is of my great interest and represent the core of several question that I opened on this web-community. For example you can read:
Regression and causality in econometrics
Structural equation and causal model in economics
Causality: Structural Causal Model and DAG
Today my guess is that the usual concept of true model is too simplistic for carried out exhaustive causal inference. At the best we can interpret it as very particular type of Pearl’s Structural Causal Model.
I know that, under some condition, BIC method permit us to select the true model. However the story that is behind this result sound me as too poor for exhaustive causal inference.
Finally the distinction between AIC and BIC seems me not so important and, most important, it does not affect the main point of the article (your 3).
EDIT:
To be clearer. The main message of the article is that explanation and prediction are different things. Prediction and explanation (causation) are different goal that involve different tools. Conflation between them without understood the difference is a big problem.
Bias-variance tradeoff is the main theoretical point that justify the necessity of the distinction between prediction and explanation. In this sense your point 3 is the core of the article.
EDIT2
In my opinion the fact here is that the problems addressed by this article are too wide and complex. Then, more than as usual, concepts like contradiction and/or paradox should be contextualized. For some readers that reads your question but not the article can seems that the article at all, or at least in most part, should be refuse, until somebody do not resolve the contradiction. My point is that this is not the case.
Suffice to say that the author could simply skip model selection details and the core message could remain the same, definitely. In fact the core of the article is not about the best strategy to achieve good prediction (or explanation) model, but to show that prediction and explanation are different goal that imply different method. In this sense your point 1 and 2 are minor and this fact resolve the contradiction (in the sense above).
At the other side remain the fact that AIC bring us to prefer long rather then short regression and this fact contradicts the argument at your point 3 is refer to. In this sense the paradox and or contradiction remain.
Maybe the paradox come from the fact that the argument behind point 3, bias-variance trade-off, is valid in finite sample data; in small sample can be substantial. In case of infinitely large sample, estimation error of parameter disappear, but possible bias term no, then the true model (in empirical sense) become the best also in the sense of expected prediction error.
Now, the good prediction properties of AIC are achieved only asymptotically; in small samples it can select models that have too many parameters, so overfitting can appear. In a case like this it is hard to say precisely in what way the sample size matters.
However, in order to face the small-sample problem, a modified version of AIC (AICc) was developed. See here: https://en.wikipedia.org/wiki/Akaike_information_criterion#Modification_for_small_sample_size
I did some calculations as examples; if these are free of mistakes:
for the case of 2 parameters (as in Shmueli's example): if we have fewer than 8 obs, AIC penalizes more than BIC (as you say). If we have more than 8 but fewer than 14 obs, AICc penalizes more than BIC. If we have 14 or more obs, BIC again penalizes more
for the case of 5 parameters: if we have fewer than 8 obs, AIC penalizes more than BIC (as you say). If we have more than 8 but fewer than 19 obs, AICc penalizes more than BIC. If we have 19 or more obs, BIC again penalizes more
for the case of 10 parameters: if we have fewer than 8 obs, AIC penalizes more than BIC (as you say). If we have more than 8 but fewer than 28 obs, AICc penalizes more than BIC. If we have 28 or more obs, BIC again penalizes more.
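These crossover points follow from the standard penalty terms (2k for AIC, k ln n for BIC, and 2k + 2k(k+1)/(n-k-1) for AICc). A minimal sketch to verify them (the function names are my own):

```python
import math

def aic_pen(k):
    # AIC penalty term: 2k
    return 2 * k

def bic_pen(k, n):
    # BIC penalty term: k * ln(n)
    return k * math.log(n)

def aicc_pen(k, n):
    # small-sample corrected AIC penalty; requires n > k + 1
    return 2 * k + 2 * k * (k + 1) / (n - k - 1)

def bic_beats_aicc_from(k):
    """Smallest n at which BIC penalizes more than AICc."""
    n = k + 2
    while bic_pen(k, n) <= aicc_pen(k, n):
        n += 1
    return n

# BIC overtakes plain AIC as soon as ln(n) > 2, i.e. at n = 8,
# and overtakes AICc at n = 14, 19, 28 for k = 2, 5, 10.
```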
Finally, let me remark that if we stay very close to the author's words, she does not explicitly suggest using AIC for prediction and BIC for explanation (as reported in your point 1). She essentially said that in explanatory modeling theoretical considerations are relevant, while in prediction they are not. This is the core of the difference between these two kinds of model selection. AIC is just presented as a “popular metric,” and its popularity comes from the idea behind it. We can read: “A popular predictive metric is the in-sample Akaike Information Criterion (AIC). Akaike derived the AIC from a predictive viewpoint, where the model is not intended to accurately infer the “true distribution,” but rather to predict future data as accurately as possible”.
|
Paradox in model selection (AIC, BIC, to explain or to predict?)
I read Shmueli's "To Explain or to Predict" (2010) a couple of years ago for the first time and it was one of the most important readings for me. Several great doubts were resolved after that reading.
|
10,892
|
Can the Mantel test be extended to asymmetric matrices?
|
It doesn't need to be extended. The original Mantel test, as presented in Mantel's 1967 paper, allows for asymmetric matrices. Recall that this test compares two $n\times n$ distance matrices $X$ and $Y$.
We may at this point anticipate a modification of our statistic which will simplify the statistical procedures to be developed below. The modification is to remove the restriction $i\lt j$, and to replace it only by the restriction $i\ne j$. Where $X_{ij} = X_{ji}$ and $Y_{ij} = Y_{ji}$, the effect of the modification is simply to double exactly the value of the summation. However, the procedures then developed are appropriate even when the distance relationships are not symmetric, that is, when it is possible that $X_{ij} \ne X_{ji}$ and the $Y_{ij} \ne Y_{ji}$; a particular case then covered is where $X_{ij} = -X_{ji}, Y_{ij} = -Y_{ji}$ ...
(in section 4; emphasis added).
Symmetry appears to be an artificial condition in much software, such as the ade4 package for R, which uses objects of a "dist" class to store and manipulate distance matrices. The manipulation functions assume the distances are symmetric. For this reason you cannot apply its mantel.rtest procedure to asymmetric matrices--but that is purely a software limitation, not a property of the test itself.
The test itself does not appear to require any properties of the matrices. Obviously (by virtue of the explicit reference to antisymmetric matrices at the end of the preceding passage) it doesn't even need that the entries in $X$ or $Y$ are positive. It merely is a permutation test that uses some measure of correlation of the two matrices (considered as vectors with $n^2$ elements) as a test statistic.
In principle we can list the $n!$ possible permutations of our data, compute $Z$ [the test statistic] for each permutation, and obtain the null distribution of $Z$ against which the observed value of $Z$ can be judged.
[ibid.]
In fact, Mantel explicitly pointed out that the matrices do not have to be distance matrices and he emphasized the importance of this possibility:
The general case formulas will be appropriate also for cases where the $X_{ij}$'s and $Y_{ij}$'s do not follow the arithmetic and geometric regularities imposed in the clustering problem; e.g., $X_{ik} \le X_{ij} + X_{jk}$. It is the applicability of the general procedure to arbitrary $X_{ij}$'s and $Y_{ij}$'s which underlies its extension to a wider variety of problems ...
(The example states the triangle inequality.)
As an example, he offered "the study of interpersonal relationships" in which "we have $n$ individuals and 2 different measures, symmetric or asymmetric, relating each individual to the remaining $n-1$" (emphasis added).
In an appendix, Mantel derived the "permutational variance" of $Z=\sum\sum X_{ij}Y_{ij}$, making no stronger assumption than that the diagonal elements of the matrices are constants, potentially nonzero.
In conclusion, from the very beginning every one of the metric axioms has been explicitly considered and rejected as being inessential to the test:
"Distances" may be negative.
"Distances" between an object and itself may be nonzero.
The triangle inequality need not hold.
"Distances" need not be symmetric.
I will end by remarking that Mantel's proposed statistic, $Z=\sum_{i,j} X_{ij}Y_{ij}$, may work poorly for non-symmetric distances. The challenge is to find a test statistic that effectively distinguishes two such matrices: use that in the permutation test instead of the sum of products.
This is an example of the test in R. Given two distance matrices x and y, it returns a sample of the permutation distribution (as a vector of values of the test statistic). It does not require that x or y have any particular properties at all. They only need to be the same size of square matrix.
mantel <- function(x, y, n.iter=999, stat=function(a,b) sum(a*b)) {
permute <- function(z) {
i <- sample.int(nrow(z), nrow(z))
return (z[i, i])
}
sapply(1:n.iter, function(i) stat(x, permute(y)))
}
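As a cross-check in another language, here is a hypothetical Python translation of the same permutation scheme, together with one common way to turn the permutation sample into a one-sided p-value (the p-value convention is mine, not part of Mantel's original presentation):

```python
import numpy as np

rng = np.random.default_rng(0)

def mantel_perm(x, y, n_iter=999, stat=lambda a, b: np.sum(a * b)):
    # sample the permutation distribution of stat(x, y[p, p]), applying
    # the same permutation p to y's rows and columns, as in the R code
    n = x.shape[0]
    out = np.empty(n_iter)
    for i in range(n_iter):
        p = rng.permutation(n)
        out[i] = stat(x, y[np.ix_(p, p)])
    return out

# toy asymmetric "distance" matrices that are strongly related
x = rng.normal(size=(10, 10))
y = x + rng.normal(scale=0.1, size=(10, 10))
obs = float(np.sum(x * y))
perm = mantel_perm(x, y)
# one-sided permutation p-value with add-one correction
p_value = (1 + int(np.sum(perm >= obs))) / (1 + len(perm))
```

Note that no symmetry, positivity, or triangle-inequality property of x or y is used anywhere; they only need to be square matrices of the same size.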
|
10,893
|
Bootstrapping Generalized Least Squares
|
As long as your data set is not infinitely large, a bootstrapping approach would likely entail overlaying alternative realizations of biased parameter values and standard errors on top of one another. Shrinkage approaches which attempt to nudge off-diagonals in the upper (lower) triangle toward the diagonal may help, but the fix might best be hinged on corrections to the eigendensity, either through the Marcenko-Pastur law or the Tracy-Widom law (see my random matrix theory lecture). The issues will be central to the eigendensity of the Hessian, which is inverted for updating coefficients as well as for obtaining standard errors after convergence. I believe the issues inherent in what you are proposing are potentially inadmissible, as they would scale with sample size.
|
10,894
|
Properties of PCA for dependent observations
|
Presumably, you could add the time-component as an additional feature to your sampled points, and now they are i.i.d.? Basically, the original data points are conditional on time:
$$
p(\mathbf{x}_i \mid t_i) \ne p(\mathbf{x}_i)
$$
But, if we define $\mathbf{x}_i' = \{\mathbf{x}_i, t_i\}$, then we have:
$$
p(\mathbf{x}'_i \mid t_i) = p(\mathbf{x}'_i)
$$
... and the data samples are now mutually independent.
In practice, by including the time as a feature in each data point, PCA could result in one component simply pointing along the time feature axis. But if any features are correlated with the time feature, a component might consist of one or more of these features, as well as the time feature.
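As an illustration of the last paragraph, the following sketch (with made-up data) appends time as an extra feature and runs PCA via SVD; the time-correlated feature and the time feature end up loading on the same component, while the independent feature barely loads on it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = np.linspace(0.0, 1.0, n)

# feature 0 drifts with time; feature 1 is independent noise
x = np.column_stack([
    5.0 * t + 0.1 * rng.normal(size=n),
    rng.normal(size=n),
])
x_aug = np.column_stack([x, t])     # x'_i = {x_i, t_i}: append time

xc = x_aug - x_aug.mean(axis=0)     # center before PCA
_, _, vt = np.linalg.svd(xc, full_matrices=False)
pc1 = vt[0]                          # loadings of the first component

# pc1 mixes feature 0 and the time feature (same sign, up to a global
# sign flip); the independent noise feature contributes almost nothing
```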
|
10,895
|
Confused about independent probabilities. If a fair coin is flipped 5 times, P(HHHHH) = 0.03125, but P(H | HHHH) = 0.5
|
P(HHHHH) is the probability of having five heads in a row. But, P(H|HHHH) means having heads if the last four tosses were heads. In the former, you're at the beginning of the experiment and in the latter one you have already completed four tosses and know the results. Think about the following rewordings:
P(HHHHH): If you were to start the experiment all over again, what would be the probability of having five heads?
P(H|HHHH): If you were to start the experiment but keep restarting it until you got four heads in a row, and then, given that you have four heads, what would be the probability of having the final one as heads?
|
10,896
|
Confused about independent probabilities. If a fair coin is flipped 5 times, P(HHHHH) = 0.03125, but P(H | HHHH) = 0.5
|
P(HHHHH)
There are 32 possible outcomes from flipping a coin 5 times. Here they are listed:
HHHHH THHHH HTHHH TTHHH HHTHH THTHH HTTHH TTTHH
HHHTH THHTH HTHTH TTHTH HHTTH THTTH HTTTH TTTTH
HHHHT THHHT HTHHT TTHHT HHTHT THTHT HTTHT TTTHT
HHHTT THHTT HTHTT TTHTT HHTTT THTTT HTTTT TTTTT
All of these outcomes are equally likely. So the probability of any one of these sequences is 1/32 = .03125. That's why P(HHHHH) = .03125.
P(H | HHHH)
We are now considering the possible outcomes of a single coin flip, having just observed 4 heads in a row. There are only two possible outcomes of this single coin flip; they are of course the following:
H
T
Since the coin flips are assumed independent, the fact that we just observed 4 heads in a row is irrelevant, so this is just the same as considering P(H), the probability of heads for a single toss, regardless of what was just observed. That's why P(H | HHHH) = 0.5.
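Both results can be checked by brute-force enumeration of the 32 sequences; a minimal sketch:

```python
from itertools import product

outcomes = list(product("HT", repeat=5))   # all 2^5 = 32 sequences

# P(HHHHH): one sequence out of 32 equally likely ones
p_hhhhh = sum(o == ("H",) * 5 for o in outcomes) / len(outcomes)

# P(H | HHHH): condition on the first four flips being heads,
# then look at the fifth flip within that subset
first4_heads = [o for o in outcomes if o[:4] == ("H",) * 4]
p_h_given_hhhh = sum(o[4] == "H" for o in first4_heads) / len(first4_heads)
```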
|
10,897
|
Confused about independent probabilities. If a fair coin is flipped 5 times, P(HHHHH) = 0.03125, but P(H | HHHH) = 0.5
|
Often it's helpful to think of conditions in terms of information:
$$ \mathbb{P}[H | HHHH] $$
can be read as "The probability of getting Heads, given that I have 4 heads already", i.e., given the information that there are already 4 heads.
Of course, we're told the coin tosses are independent, so this information is not helpful -- the past tosses have nothing to do with the upcoming tosses, i.e., this information tells us nothing about the probability of the upcoming event. Hence (since it's a fair coin), $ \mathbb{P}[H | HHHH] = .5$
We can think of the lack of a condition as being the lack of information, so $ \mathbb{P}[H] $ is "the probability of Heads, with no further information", and
$$ \mathbb{P}[H | HHHH] = \mathbb{P}[H] $$
is a restatement of the above -- the probability of getting heads given the information that we already have 4 heads is the same as the probability of getting heads with no other information.
Lastly we can thus see
$$ \mathbb{P}[HHHHH] $$
as "the probability of 5 heads, with no further information". This means that we don't know the outcome of any tosses yet (since those outcomes would count as information), and so we get our $\mathbb{P}[HHHHH] = \frac1{32}$ -- of all the $2^5$ possible outcomes of 5 tosses (starting from when we don't yet know any outcomes), there is only 1 where all the tosses are H.
|
10,898
|
Confused about independent probabilities. If a fair coin is flipped 5 times, P(HHHHH) = 0.03125, but P(H | HHHH) = 0.5
|
The notation that does not index the coin throws and/or their outcomes (and does not even separate the outcomes by commas or signs of intersection) may be confusing. How do we know which coin throw each $H$ refers to in $P(H|HHHH)$ or $P(HHHHH)$? We can often guess, but this is needlessly ambiguous.
Let us index the coin throws and their outcomes by natural numbers. Given that the coin has no memory, it is hopefully clearer why
$$
P(H_1|H_1,H_2,H_3,H_4)=1,
$$
but
$$
P(H_5|H_1,H_2,H_3,H_4)=0.5
$$
and
$$
P(H_1,H_2,H_3,H_4,H_5)=0.03125.
$$
|
10,899
|
Confused about independent probabilities. If a fair coin is flipped 5 times, P(HHHHH) = 0.03125, but P(H | HHHH) = 0.5
|
I would suggest running a simulation, and viewing the conditional distribution as applying a filter to the data.
Specifically, you may
simulate a large number of (say 5 million) experiments of 5 fair coin flips
find the experiments in which the first 4 flips came up HHHH
select that subset of the data
check the distribution of the 5th flip.
You may find it is close to 0.5.
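The steps above can be sketched as follows (fewer repetitions than 5 million suffice, and the exact counts vary from run to run):

```python
import random

random.seed(1)
n = 200_000
count_hhhh = 0      # experiments whose first four flips are heads
count_hhhhh = 0     # ...and whose fifth flip is also heads

for _ in range(n):
    flips = [random.random() < 0.5 for _ in range(5)]
    if all(flips[:4]):              # filter: first 4 flips are HHHH
        count_hhhh += 1
        if flips[4]:
            count_hhhhh += 1

p_fifth_given_hhhh = count_hhhhh / count_hhhh   # close to 0.5
p_all_five = count_hhhhh / n                    # close to 1/32 = 0.03125
```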
|
10,900
|
Confused about independent probabilities. If a fair coin is flipped 5 times, P(HHHHH) = 0.03125, but P(H | HHHH) = 0.5
|
Fair independent coin
The probability of event a (the 5th flip is heads, $H_5$) given event b (four heads already, $H_4H_3H_2H_1$):
$$\underbrace{P(H_5|H_4H_3H_2H_1)}_{\text {P(a given b)}} = \frac{\overbrace{P(H_5 \& H_4H_3H_2H_1)}^{\text{P(a and b)}}}{\underbrace{P(H_5 \& H_4H_3H_2H_1) }_{\text {P(a and b)}}+\underbrace{P(T_5 \& H_4H_3H_2H_1)}_{\text {P((not a) and b)}}}$$
For a fair coin you have $P(T_5H_4H_3H_2H_1) = P(H_5H_4H_3H_2H_1) = 0.5^5$. And the above equation will be
$${P(H_5|H_4H_3H_2H_1)} = \frac{0.5^5}{0.5^5+0.5^5} = 0.5$$
With a fair coin that is also independent (note we may have $p_{heads}=p_{tails}$ but that does not necessarily mean that the flips are independent), you should get the above result. But that is not the general result.
Unfair coin (or coin with nonindependent flips)
But if the coin is possibly unfair or not independent from flip to flip then this may not be true. Based on some assumed probability distribution for the fairness of the coin you may compute different values for $P(T_5H_4H_3H_2H_1)$ and $P(H_5H_4H_3H_2H_1)$.
In the more general case (the coin is not necessarily fair) you might get that given already four heads, $P(H_5|H_4H_3H_2H_1)>P(H_5)$
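For a concrete (hypothetical) example of the last point: put a uniform prior on the coin's bias. Then Laplace's rule of succession gives $P(\text{next head} \mid k \text{ heads in } n \text{ flips}) = (k+1)/(n+2)$, so four heads in a row raise the probability of a fifth head well above 1/2:

```python
from fractions import Fraction

def p_next_head(k, n):
    # Laplace's rule of succession under a uniform prior on the bias:
    # P(next flip is heads | k heads observed in n flips) = (k+1)/(n+2)
    return Fraction(k + 1, n + 2)

p_next_head(4, 4)   # 5/6: four heads in a row make a fifth head likelier
```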
|