Why should we care about rapid mixing in MCMC chains?
The ideal Monte Carlo algorithm uses independent successive random values. In MCMC, successive values are not independent, which makes the method converge more slowly than ideal Monte Carlo; however, the faster it mixes, the faster the dependence decays over successive iterations¹, and the faster it converges.
¹ I mean here that the successive values quickly become "almost independent" of the initial state, or rather that, given the value $X_n$ at one point, the values $X_{n+k}$ quickly become "almost independent" of $X_n$ as $k$ grows; so, as qkhhly says in the comments, "the chain doesn't keep stuck in a certain region of the state space".
Edit: I think the following example can help.
Imagine you want to estimate the mean of the uniform distribution on $\{1, \dots, n\}$ by MCMC. You start with the ordered sequence $(1, \dots, n)$; at each step, you choose $k \ge 2$ elements of the sequence and randomly shuffle them. At each step, the element at position 1 is recorded; its distribution converges to the uniform distribution. The value of $k$ controls the speed of mixing: when $k=2$, it is slow; when $k=n$, successive elements are independent and the mixing is fast.
Here is an R function for this MCMC algorithm:
mcmc <- function(n, k = 2, N = 5000)
{
  x <- 1:n                        # start from the ordered sequence
  res <- numeric(N)
  for (i in 1:N)
  {
    swap <- sample(1:n, k)        # pick k positions...
    x[swap] <- sample(x[swap], k) # ...and shuffle their contents
    res[i] <- x[1]                # record the element at position 1
  }
  res
}
Let’s apply it for $n = 99$, and plot the successive estimates of the mean $\mu = 50$ along the MCMC iterations:
n <- 99; mu <- sum(1:n)/n
r1 <- mcmc(n)
plot(cumsum(r1)/1:length(r1), type = "l", ylim = c(0, n), ylab = "mean")
abline(h = mu, lty = 2)
r2 <- mcmc(n, round(n/2))
lines(1:length(r2), cumsum(r2)/1:length(r2), col = "blue")
r3 <- mcmc(n, n)
lines(1:length(r3), cumsum(r3)/1:length(r3), col = "red")
legend("topleft", c("k = 2", paste("k =", round(n/2)), paste("k =", n)),
       col = c("black", "blue", "red"), lwd = 1)
You can see here that for $k=2$ (in black), the convergence is slow; for $k=50$ (in blue), it is faster, but still slower than with $k=99$ (in red).
You can also plot a histogram of the distribution of the estimated mean after a fixed number of iterations, e.g. 100 iterations:
K <- 5000
M1 <- numeric(K)
M2 <- numeric(K)
M3 <- numeric(K)
for (i in 1:K)
{
  M1[i] <- mean(mcmc(n, 2, 100))
  M2[i] <- mean(mcmc(n, round(n/2), 100))
  M3[i] <- mean(mcmc(n, n, 100))
}
dev.new()
par(mfrow=c(3,1))
hist(M1, xlim=c(0,n), freq=FALSE)
hist(M2, xlim=c(0,n), freq=FALSE)
hist(M3, xlim=c(0,n), freq=FALSE)
You can see that with $k=2$ (M1), the influence of the initial state is still so strong after 100 iterations that the estimate is terrible. With $k=50$ it seems OK, though with a larger standard deviation than with $k=99$. Here are the means and standard deviations:
> mean(M1)
[1] 19.046
> mean(M2)
[1] 49.51611
> mean(M3)
[1] 50.09301
> sd(M2)
[1] 5.013053
> sd(M3)
[1] 2.829185
In completion of both earlier answers, mixing is only one aspect of MCMC convergence. It is indeed directly connected with the speed of forgetting the initial value or distribution of the Markov chain $(X_n)$. For instance, the mathematical notion of $\alpha$-mixing is defined by the measure
$$
\alpha(n) = \sup_{A,B}\, \left| P(X_0\in A, X_n\in B) - P(X_0\in A)\,P(X_n\in B) \right|\,,\qquad n\in \mathbb{N}\,,
$$
whose speed of convergence to zero characterises the mixing. However, this measure is not directly related to the speed with which $(X_n)$ converges to the target distribution $\pi$. One may get very fast convergence to the target and still keep high correlation between the elements of the chain.
Furthermore, independence between the $X_n$'s is only relevant in some settings. When aiming at integration, negative correlation (a.k.a. antithetic simulation) is superior to independence.
About your specific comment that
...the accepted candidate draws should and will be concentrated in the high density part of the posterior distribution. If what I understand is true, then do we still want the chain to move through the support (which includes the low density part)?
the MCMC chain explores the target in exact proportion to its height (in its stationary regime), so it indeed spends more time in the higher density region(s). That the chain must cross lower density regions matters when the target has several high density components separated by low density regions (this is also called a multimodal setting): slow mixing may prevent the chain from crossing such low density regions. The only regions the chain $(X_n)$ should never visit are the regions with zero probability under the target distribution.
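A small illustration of that last point (my own sketch, not part of the answer above): a random-walk Metropolis sampler on a two-component normal mixture. With a small proposal step the chain mixes slowly and essentially never crosses the low density valley between the modes; with a large step it jumps between them freely.

```r
# Random-walk Metropolis on the bimodal target 0.5 N(-5,1) + 0.5 N(5,1).
# A small proposal step gives slow mixing: the chain stays stuck in the
# mode it started in, because crossing the valley is too improbable.
rw_metropolis <- function(n, step, x0 = -5) {
  target <- function(x) 0.5 * dnorm(x, -5) + 0.5 * dnorm(x, 5)
  x <- numeric(n)
  x[1] <- x0
  for (i in 2:n) {
    prop <- x[i - 1] + rnorm(1, sd = step)           # symmetric proposal
    if (runif(1) < target(prop) / target(x[i - 1]))  # Metropolis accept
      x[i] <- prop
    else
      x[i] <- x[i - 1]
  }
  x
}

set.seed(42)
slow <- rw_metropolis(1e4, step = 0.5)  # stuck near the mode at -5
fast <- rw_metropolis(1e4, step = 10)   # visits both modes
mean(slow > 0)   # essentially 0: the right-hand mode is never seen
mean(fast > 0)   # roughly 1/2, as it should be
```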
The presumptions that motivate the desire for a rapidly mixing chain are that you care about computing time and that you want a representative sample from the posterior. The former will depend on the complexity of the problem: if you have a small/simple problem, it may not matter much whether your algorithm is efficient. The latter is very important if you are interested in posterior uncertainty or knowing the posterior mean with high precision. However, if you don't care about having a representative sample of the posterior because you are just using MCMC to do approximate optimization, this may not be very important to you.
Is it possible to accumulate a set of statistics that describes a large number of samples such that I can then produce a boxplot?
For an 'on the fly' boxplot, you will need 'on the fly' min/max (trivial) as well as 'on the fly' quartiles (0.25, 0.5 = the median, and 0.75).
A lot of work has gone on recently on the problem of online (or 'on the fly') algorithms for median computation.
A recent development is binmedian. As a side benefit, it also enjoys a better worst-case complexity than quickselect (which is neither online nor single-pass).
You can find the associated paper as well as C and FORTRAN code online here. You may have to check the licensing details with the authors.
You will also need a single-pass algorithm for the quartiles, for which you can use the approach above together with the following recursive characterization of the quartiles in terms of medians:
$Q_{0.75}(x) \approx Q_{0.5}(x_i : x_i > Q_{0.5}(x))$
and
$Q_{0.25}(x) \approx Q_{0.5}(x_i : x_i < Q_{0.5}(x))$
i.e. the 0.25 (0.75) quartile is very close to the median of those observations that are smaller (larger) than the median.
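To make the single-pass shape of such a computation concrete, here is a minimal sketch in R. It is not binmedian; it uses a simpler stochastic-approximation update (nudge each running quantile estimate up with probability $p$, down with probability $1-p$), which is cruder but runs in one pass with O(1) memory:

```r
# Single-pass quartile estimates by stochastic approximation: each new
# observation nudges each running quantile estimate up or down, so only
# the three current estimates are kept in memory.
online_quartiles <- function(x, p = c(0.25, 0.5, 0.75)) {
  q <- rep(x[1], length(p))          # initialize at the first value
  n <- 1
  for (xi in x[-1]) {
    n <- n + 1
    step <- 1 / sqrt(n)              # decreasing step size
    q <- q + step * (p - (xi < q))   # up by step*p, or down by step*(1-p)
  }
  q
}

set.seed(1)
online_quartiles(runif(1e5))   # approaches (0.25, 0.50, 0.75)
```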
Addendum:
There exists a host of older multi-pass methods for computing quantiles. A popular approach is to maintain/update a deterministically sized reservoir of observations randomly selected from the stream, and recursively compute quantiles (see this review) on this reservoir. This approach (and related ones) is superseded by the one proposed above.
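For completeness, a sketch of that reservoir idea in R (Vitter's classic Algorithm R, taken here as a representative of the older approaches): it keeps a fixed-size uniform sample of the stream in one pass, and the empirical quantiles of the reservoir then approximate those of the stream.

```r
# Reservoir sampling: after seeing i observations, every observation is
# in the reservoir with equal probability m/i, using O(m) memory no
# matter how long the stream is.
reservoir <- function(x, m) {
  res <- x[seq_len(m)]               # fill the reservoir first
  for (i in (m + 1):length(x)) {
    j <- sample.int(i, 1)            # x[i] replaces a slot w.p. m/i
    if (j <= m) res[j] <- x[i]
  }
  res
}

set.seed(2)
r <- reservoir(1:100000, 500)
quantile(r, c(0.25, 0.5, 0.75))      # close to 25000, 50000, 75000
```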
Instead of just finding the median, there is an algorithm that directly maintains an estimated histogram: "the P-Square Algorithm for Dynamic Calculation of Quantiles and Histograms Without Storing Observations". This will probably be much more efficient than repeated binning for every quantile you want.
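The full P² algorithm takes some marker bookkeeping; as a much simpler illustration of the same idea (updating a small summary instead of storing observations), here is a fixed-bin streaming histogram in R. Unlike P², the bin edges must be guessed up front, which is exactly the limitation that P²'s adaptive markers remove:

```r
# One-pass histogram with fixed bins: O(number of bins) memory,
# regardless of how many observations flow through.
stream_hist <- function(x, breaks) {
  counts <- numeric(length(breaks) - 1)
  for (xi in x) {
    b <- findInterval(xi, breaks, all.inside = TRUE)
    counts[b] <- counts[b] + 1
  }
  counts
}

# Approximate quantile read off the accumulated counts.
hist_quantile <- function(counts, breaks, p) {
  cum <- cumsum(counts) / sum(counts)
  breaks[which(cum >= p)[1] + 1]     # upper edge of the covering bin
}

set.seed(3)
br <- seq(0, 1, by = 0.05)
counts <- stream_hist(runif(1e5), br)
hist_quantile(counts, br, 0.5)       # close to 0.5
```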
How bad is hyperparameter tuning outside cross-validation?
The effects of this bias can be very great. A good demonstration of this is given by the open machine learning competitions that feature in some machine learning conferences. These generally have a training set, a validation set and a test set. The competitors don't get to see the labels for either the validation set or the test set (obviously). The validation set is used to determine the ranking of competitors on a leaderboard that everyone can see while the competition is in progress. It is very common for those at the head of the leaderboard at the end of the competition to be very low in the final ranking based on the test data. This is because they have tuned the hyper-parameters for their learning systems to maximise their performance on the leaderboard and in doing so have over-fitted the validation data by tuning their model. More experienced users pay little or no attention to the leaderboard and adopt more rigorous unbiased performance estimates to guide their methodology.
The example in my paper (mentioned by Jacques) shows that the effects of this kind of bias can be of the same sort of size as the difference between learning algorithms, so the short answer is: don't use biased performance evaluation protocols if you are genuinely interested in finding out what works and what doesn't. The basic rule is "treat model selection (e.g. hyper-parameter tuning) as an integral part of the model fitting procedure, and include it in each fold of the cross-validation used for performance evaluation".
The fact that regularisation is less prone to over-fitting than feature selection is precisely the reason that the LASSO etc. are good ways of performing feature selection. However, the size of the bias depends on the number of features, the size of the dataset and the nature of the learning task (i.e. there is an element that depends on the particular dataset and will vary from application to application). The data-dependent nature of this means that you are better off estimating the size of the bias by using an unbiased protocol and comparing the difference (reporting that the method is robust to over-fitting in model selection in this particular case may be of interest in itself).
G. C. Cawley and N. L. C. Talbot (2010), "Over-fitting in model selection and subsequent selection bias in performance evaluation", Journal of Machine Learning Research, 11, p. 2079, section 5.2.
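A toy simulation (mine, not from the paper) makes the size of this selection bias tangible: score many models that are in fact pure coin-flips on a fixed validation set, pick the winner, and re-score it on fresh test data. The winner's validation accuracy looks impressive; its test accuracy does not.

```r
# Selecting the best of many chance-level "models" on one validation set:
# the selected model's validation score is optimistically biased.
set.seed(7)
n_val <- 200       # validation cases
n_models <- 1000   # models tried against the "leaderboard"
val_acc <- replicate(n_models, mean(rbinom(n_val, 1, 0.5)))  # all guessing
best <- which.max(val_acc)
test_acc <- mean(rbinom(200, 1, 0.5))  # the winner, on fresh data

val_acc[best]   # well above 0.5, purely from selection
test_acc        # back near 0.5
```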
The bias you are talking about is still mainly connected to overfitting.
You can keep the risk low by evaluating only very few models when fixing the regularization hyperparameter, and by going for a low complexity within the plausible range.
As @MarcClaesen points out, you have the learning curve working for you, which will somewhat mitigate the bias. But the learning curve is typically steep only for very few cases, and then also overfitting is much more of a problem.
In the end, I'd expect the bias to depend much on
the data (it's hard to overfit a univariate problem...) and
your experience and modeling behaviour: I think it is possible that you'd decide on a roughly appropriate complexity for your model if you have enough experience with both the type of model and the application and if you are extremely well behaved and do not yield to the temptation for more complex models. But of course, we don't know you and therefore cannot judge how conservative your modeling is.
Also, admitting that your fancy statistical model is highly subjective and you don't have cases left to do a validation is typically not what you want. (Not even in situations where the overall outcome is expected to be better.)
I don't use LASSO (as variable selection does not make much sense for my data for physical reasons), but PCA or PLS usually work well. A ridge would be an alternative that is close to LASSO and more appropriate for the kind of data.
With these data I have seen an order of magnitude more misclassifications with proper independent (outer) cross validation than with the "shortcut validation". In these extreme situations, however, the shortcut validation looked suspiciously good, e.g. 2 % misclassification becoming 20 % with proper cross validation.
I cannot give you real numbers that directly apply to your question, though:
So far, I have cared more about other types of "shortcut" that happen in my field and lead to data leaks, e.g. cross validating spectra instead of patients (a huge bias! I can show you 10 % misclassification -> 70 % = guessing among 3 classes), or not including the PCA in the cross validation (2 - 5 % -> 20 - 30 %).
In situations where I have to decide whether the one cross validation I can afford should be spent on model optimization or on validation, I always decide for validation and fix the complexity parameter by experience. PCA and PLS work well as regularization techniques in that respect because the complexity parameter (number of components) is directly related to physical/chemical properties of the problem (e.g. I may have a good guess how many chemically different substance groups I expect to matter). Also, for physico-chemical reasons I know that the components should look somewhat like spectra, and if they are noisy, I'm overfitting. But "experience" may also mean optimizing model complexity on an old data set from a previous experiment that is similar enough to justify transferring the hyperparameters, and then just using that regularization parameter for the new data.
That way, I cannot claim to have the optimal model, but I can claim to have a reasonable estimate of the performance I can get.
And with the patient numbers I have, it is anyway impossible to do statistically meaningful model comparisons (remember, my total patient number is below the recommended sample size for estimating a single proportion, according to the rule of thumb @FrankHarrell gives here).
Why don't you run some simulations that are as close as possible to your data and let us know what happens?
About my data: I work with spectroscopic data. Data sets are typically wide: a few tens of independent cases (patients), though typically with lots of measurements per case. There are ca. 10³ variates in the raw data, which I may be able to reduce to, say, 250 by applying domain knowledge to cut uninformative areas out of my spectra and to reduce the spectral resolution.
If you are only selecting the hyperparameter for the LASSO, there is no need for a nested CV. Hyper-parameter selection is done in a single/flat CV.
Given that you have already decided to use the LASSO, and given that you have already decided which features to keep and give to the algorithm (the LASSO will likely remove some of the features, but that is the LASSO optimization, not your decision), the only thing left is to choose the $\lambda$ hyperparameter, and that you will do with a flat/single CV:
1) divide the data into training/learning sets $L_i$ and test sets $T_i$ and choose the $\lambda^*$ that minimizes the mean error over all $T_i$ when trained with the corresponding $L_i$.
2) $\lambda^*$ is your choice of hyperparameter. DONE.
(This is not the only method to select hyperparameters but it is the most common one - there is also the "median" procedure discussed and criticized by G. C. Cawley and N. L. C. Talbot (2010), "Over-fitting in model selection and subsequent selection bias in performance evaluation", Journal of Machine Learning Research, 11, p.2079, section 5.2.)
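The two-step flat CV recipe above can be sketched in a few lines of Python. The data, the $\lambda$ grid, and the use of ridge regression (which has a closed form, standing in for the LASSO, which needs an iterative solver) are all illustrative assumptions; the CV logic itself is identical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data for illustration: 60 cases, 10 features, sparse true coefficients.
n, p = 60, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + rng.normal(scale=0.5, size=n)

def ridge_fit(Xtr, ytr, lam):
    # Closed-form ridge estimate, standing in for the LASSO here.
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1]), Xtr.T @ ytr)

def flat_cv_error(lam, k=5):
    # Step 1: split into learning sets L_i / test sets T_i, average the test error.
    folds = np.array_split(np.arange(n), k)
    errs = []
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        b = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((y[test] - X[test] @ b) ** 2))
    return float(np.mean(errs))

# Step 2: lambda* is the grid value with the smallest mean CV error. DONE.
grid = [0.01, 0.1, 1.0, 10.0, 100.0]
lam_star = min(grid, key=flat_cv_error)
```

The same single/flat loop works for any learner with one regularization hyperparameter; nothing about it is nested.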
What I understand you are asking is: how bad is it to use the error I computed in step 1 above (the minimal error that allowed me to select $\lambda^*$) as an estimate of the generalization error of the classifier with that $\lambda^*$ for future data? Here you are talking about estimation, not hyperparameter selection!!
I know of two experimental results measuring the bias of this estimate (in comparison to a true generalization error for synthetic datasets):
the Cawley and Talbot paper above
Varma and Simon (2006), "Bias in error estimation when using cross-validation for model selection", BMC Bioinformatics, 7, 91.
both open access.
You need a nested CV if:
a) you want to choose between a LASSO and some other algorithms, especially if they also have hyperparameters
b) if you want to report an unbiased estimate of the expected generalization error/accuracy of your final classifier (LASSO with $\lambda^*$).
In fact nested CV is used to compute an unbiased estimate of the generalization error of a classifier (with the best choice of hyperparameters - but you don't get to know which are the values of the hyperparameters). This is what allows you to decide between the LASSO and say an SVM-RBF - the one with the best generalization error should be chosen. And this generalization error is the one you use to report b) (which is surprising: in b) you already know the value of the best hyperparameter - $\lambda ^*$ - but the nested CV procedure does not make use of that information).
Finally, nested CV is not the only way to compute a reasonably unbiased estimate of the expected generalization error. There have been at least three other proposals:
Ding et al., "Bias correction for selecting the minimal-error classifier from many machine learning models", Bioinformatics 30(22), present their own proposal and compare it with two others, the weighted mean correction and the Tibshirani-Tibshirani procedure (see references in the paper).
|
11,109
|
How bad is hyperparameter tuning outside cross-validation?
|
Any complex learning algorithm, like SVM, neural networks, random forest, ... can attain 100% training accuracy if you let them (for instance through weak/no regularization), with absolutely horrible generalization performance as a result.
For instance, let's use an SVM with RBF kernel $\kappa(\mathbf{x}_i,\mathbf{x}_j) = \exp(-\gamma\|\mathbf{x}_i-\mathbf{x}_j\|^2)$. For $\gamma=\infty$ (or some ridiculously high number), the kernel matrix becomes the identity matrix. This results in a model with $100\%$ training set accuracy and constant test set predictions (e.g. all positive or all negative, depending on the bias term).
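A quick numpy check of the kernel-matrix claim (the point count, dimension, and $\gamma$ values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))   # 8 distinct training points

def rbf_kernel(A, B, gamma):
    # kappa(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

K_small = rbf_kernel(X, X, gamma=0.1)  # reasonable gamma: real off-diagonal structure
K_huge = rbf_kernel(X, X, gamma=1e6)   # "ridiculously high" gamma
# K_huge is numerically the identity matrix: each training point is similar
# only to itself, so the SVM can memorise the training set perfectly.
```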
In short, you can easily end up with a perfect classifier on your training set that learned absolutely nothing useful on an independent test set. That is how bad it is.
|
11,110
|
Why are random variables defined as functions?
|
If you are wondering why all this machinery is used when something much simpler could suffice--you are right, for most common situations. However, the measure-theoretic version of probability was developed by Kolmogorov for the purpose of establishing a theory of such generality that it could handle, in some cases, very abstract and complicated probability spaces. In fact, Kolmogorov's measure theoretic foundations for probability ultimately allowed probabilistic tools to be applied far beyond their original intended domain of application into areas such as harmonic analysis.
At first it does seem more straightforward to skip any "underlying" space $\Omega$, and to simply assign probability masses to the events comprising the sample space directly, as you have proposed. Indeed, probabilists effectively do the same thing whenever they choose to work with the "induced measure" on the sample space defined by $P \circ X^{-1}$. However, things start getting tricky when you get into infinite-dimensional spaces. Suppose you want to prove the Strong Law of Large Numbers for the specific case of flipping fair coins (that is, that the proportion of heads tends arbitrarily closely to 1/2 as the number of coin flips goes to infinity). You could attempt to construct a $\sigma$-algebra on the set of infinite sequences of the form $(H,T,H,...)$. But here you can find that it is much more convenient to take the underlying space to be $\Omega = [0,1)$, and then use the binary representations of real numbers (e.g. $0.10100...$) to represent sequences of coin flips (1 being heads, 0 being tails). An illustration of this very example can be found in the first few chapters of Billingsley's Probability and Measure.
|
11,111
|
Why are random variables defined as functions?
|
The issues regarding $\sigma$-algebras are mathematical subtleties that do not really explain why or if we need a background space. Indeed, I would say that there is no compelling evidence that the background space is a necessity. For any probabilistic setup
$(E, \mathbb{E}, \mu)$ where $E$ is the sample space, $\mathbb{E}$ the $\sigma$-algebra and $\mu$ a probability measure, the interest is in $\mu$, and there is no abstract reason that we want $\mu$ to be the image measure of a measurable map $X : (\Omega, \mathbb{B}) \to (E, \mathbb{E})$.
However, the use of an abstract background space gives mathematical convenience that makes many results appear more natural and intuitive. The objective is always to say something about $\mu$, the distribution of $X$, but it may be easier and more clearly expressed in terms of $X$.
An example is given by the central limit theorem. If $X_1, \ldots, X_n$ are i.i.d. real valued with mean $\mu$ and variance $\sigma^2$ the CLT says that
$$P\left(\frac{\sqrt{n}}{\sigma} \left(\frac{1}{n}\sum_{i=1}^n X_i - \xi\right) \leq x \right) \to \Phi(x)$$
where $\Phi$ is the distribution function for the standard normal distribution.
If the distribution of $X_i$ is $\mu$ the corresponding result in terms of the measure reads
$$\rho_{\sqrt{n}/\sigma} \circ \tau_{\xi} \circ \rho_{1/n}(\mu^{*n})((-\infty, x]) \to \Phi(x)$$
Some explanation of the terminology is needed. By $\mu^{*n}$ we mean the $n$-times convolution of $\mu$ (the distribution of the sum). The functions $\rho_c$ are the linear functions $\rho_c(x) = cx$ and $\tau_{\xi}$ is the translation $\tau_{\xi}(x) = x - \xi$.
One could probably get used to the second formulation, but it does a good job at hiding what it is all about.
What seems to be the issue is that the arithmetic transformations involved in the CLT are quite clearly expressed in terms of random variables but they do not translate so well in terms of the measures.
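As an aside, the random-variable formulation of the CLT above is also straightforward to check by simulation. The sketch below uses uniform $X_i$; the sample size $n$ and number of replications are arbitrary choices:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# X_i ~ Uniform(0, 1), so xi = 1/2 and sigma^2 = 1/12.
xi, sigma = 0.5, math.sqrt(1 / 12)
n, reps = 200, 10_000

means = rng.random((reps, n)).mean(axis=1)
z = math.sqrt(n) / sigma * (means - xi)   # the quantity inside P(... <= x)

def Phi(x):
    # Standard normal distribution function, via the error function.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# The empirical P(z <= x) should be close to Phi(x) for every x.
```

Writing the same check in terms of $\rho_{\sqrt{n}/\sigma} \circ \tau_{\xi} \circ \rho_{1/n}(\mu^{*n})$ would be far less direct, which is exactly the point.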
|
11,112
|
Why are random variables defined as functions?
|
I only recently stumbled over this new way to think about the Random Variable $X$ as well as about the background space $\Omega$. I am not sure whether this is the question you were looking for, as it is not a mathematical reason, but I think it provides a very neat way to think of RVs.
Imagine a situation in which we toss a coin. This experimental setup consists of a set of possible initial conditions that include the physical description of how the coin is tossed. The background space consists of all those possible initial conditions. For simplicity's sake we might assume that the coin tosses only vary in velocity; then we would set $\Omega = [0,v_{max}]$.
The random variable $X$ can then be thought of as a function that maps every initial state $\omega \in \Omega$ to the corresponding outcome of the experiment, i.e. whether it is heads or tails.
For the RV: $X:([0,v_{max}], B\cap [0,v_{max}], Q)\to (\{0,1\}, 2^{\{0,1\}})$ the measure $Q$ would then correspond to the probability measure over the initial conditions, which together with the dynamics of the experiment represented by $X$ determines the probability distribution over the outcomes.
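A toy numerical version of this picture (the "dynamics" $X$ below is invented purely for illustration, mimicking a fast-spinning coin; $v_{max}$ and the flip interval are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
v_max = 10.0

def X(omega, delta=0.1):
    # Invented toy dynamics: the outcome flips each time the launch velocity
    # grows by delta (1 = heads, 0 = tails).
    return (np.floor(omega / delta) % 2).astype(int)

# Q: the uniform measure over the initial conditions Omega = [0, v_max].
omega = rng.uniform(0.0, v_max, size=100_000)
p_heads = X(omega).mean()   # pushforward distribution of X under Q, approx. 1/2
```

The "fairness" of the coin is thus a joint property of the measure $Q$ over initial conditions and the dynamics $X$, not of the coin alone.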
For a reference for this idea you can look at Tim Maudlin's or Michael Strevens's chapters in "Probabilities in Physics" (2011).
|
11,113
|
When are Shao's results on leave-one-out cross-validation applicable?
|
You need to specify the purpose of the model before you can say whether Shao's results are applicable. For example, if the purpose is prediction, then LOOCV makes good sense and the inconsistency of variable selection is not a problem. On the other hand, if the purpose is to identify the important variables and explain how they affect the response variable, then Shao's results are obviously important and LOOCV is not appropriate.
The AIC is asymptotically equivalent to LOOCV, and BIC is asymptotically equivalent to a leave-$v$-out CV where $v=n[1-1/(\log(n)-1)]$ --- the BIC result holding for linear models only. So the BIC gives consistent model selection. Therefore a short-hand summary of Shao's result is that AIC is useful for prediction but BIC is useful for explanation.
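To see what that leave-$v$-out equivalence implies numerically, here is a small sketch (the values of $n$ are chosen arbitrarily):

```python
import math

def v_bic_equiv(n):
    # Test-set size v for which leave-v-out CV is asymptotically equivalent
    # to BIC (for linear models): v = n * [1 - 1/(log(n) - 1)]
    return n * (1 - 1 / (math.log(n) - 1))

# The test fraction v/n grows with n: BIC corresponds to training on an
# ever-smaller fraction of the data, in contrast to AIC/LOOCV (v = 1).
fractions = {n: v_bic_equiv(n) / n for n in (50, 500, 5000)}
```

So for consistent selection the test set must eventually dominate the split, which is the opposite of the LOOCV habit of testing on a single case.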
|
11,114
|
When are Shao's results on leave-one-out cross-validation applicable?
|
This paper is somewhat controversial, and somewhat ignored
Not really, it's well regarded where the theory of model selection is concerned, though it's certainly misinterpreted. The real issue is how relevant it is to the practice of modeling in the wild. Suppose you perform the simulations for the cases you propose to investigate and determine that LOOCV is indeed inconsistent. The only reason you'd get that is because you already knew the "true" model and could hence determine that the probability of recovering the "true" model does not converge to 1. For modeling in the wild, how often is this true (that the phenomena are described by linear models and the "true" model is a subset of those in consideration)?
Shao's paper is certainly interesting for advancing the theoretical framework. It even provides some clarity: if the "true" model is indeed under consideration, then we have the consistency results to hang our hats on. But I'm not sure how interesting actual simulations for the cases you describe would be. This is largely why most books like EOSL don't focus as much on Shao's result, but instead on prediction/generalization error as a criterion for model selection.
EDIT: The very short answer to your question is: Shao's results are applicable when you're performing least squares estimation, quadratic loss function. Not any wider. (I think there was an interesting paper by Yang (2005?) which investigated whether you could have consistency and efficiency, with a negative answer.)
|
11,115
|
When are Shao's results on leave-one-out cross-validation applicable?
|
I would say: everywhere, but I haven't seen a strict proof of it. The intuition behind it is that when doing CV one must hold a balance between a training set large enough to build a sensible model and a test set large enough to be a sensible benchmark.
When dealing with thousands of pretty homogeneous objects, picking one is connected with the risk that it is pretty similar to some other object that was left in the set -- and then the results would be too optimistic.
On the other hand, in the case of few objects there will be no vital difference between LOO and k-fold; with 10 objects, 10-fold CV just is LOO ($10/10$ leaves $1$ per fold), and we can't do anything about it.
|
11,116
|
When are Shao's results on leave-one-out cross-validation applicable?
|
1) The answer by @ars mentions Yang (2005), "Can The Strengths of AIC and BIC Be Shared?". Loosely speaking, it seems that you can't have a model-selection criterion achieve both consistency (tend to pick the correct model, if there is indeed a correct model and it is among the models being considered) and efficiency (achieve the lowest mean squared error on average among the models you picked). If you tend to pick the right model on average, sometimes you'll get slightly-too-small models... but by often missing a real predictor, you do worse in terms of MSE than someone who always includes a few spurious predictors.
So, as said before, if you care about making-good-predictions more than getting-exactly-the-right-variables, it's fine to keep using LOOCV or AIC.
2) But I also wanted to point out two other papers of his: Yang (2006) "Comparing Learning Methods for Classification" and Yang (2007) "Consistency of Cross Validation for Comparing Regression Procedures". These papers show that you don't need the ratio of training-to-testing data to shrink towards 0 if you're comparing models which converge at slower rates than linear models do.
So, to answer your original questions 1-6 more directly: Shao's results apply when comparing linear models to each other. Whether for regression or classification, if you are comparing nonparametric models that converge at a slower rate (or even comparing one linear model to one nonparametric model), you can use most of the data for training and still have model-selection-consistent CV... but still, Yang suggests that LOOCV is too extreme.
|
11,117
|
When are Shao's results on leave-one-out cross-validation applicable?
I believe Shao's article applies most effectively to situations where a person is trying to eliminate predictors from a model (whether linear or non-linear, such as machine learning, etc.), and he recommends using Monte Carlo CV (MCCV) in this case.
On the other hand, if you are not worried about the size (number of factors/predictors) in your model, LOOCV might be a shorter and somewhat more intuitive method, which is probably why it is used and accepted so widely.
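The MCCV idea above can be sketched in a few lines (an illustrative numpy sketch, not Shao's actual procedure: the data, the 50/50 split fraction, and the two candidate column subsets are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only the first two of five predictors matter.
n, p = 200, 5
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)

# Candidate models, given as column subsets.
candidates = [[0, 1], [2, 3]]   # true predictors vs. pure-noise predictors

def mccv_mse(cols, n_splits=100, train_frac=0.5):
    """Average held-out MSE over random train/validation splits (MCCV)."""
    mses = []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        cut = int(train_frac * n)
        tr, va = idx[:cut], idx[cut:]
        beta, *_ = np.linalg.lstsq(X[tr][:, cols], y[tr], rcond=None)
        resid = y[va] - X[va][:, cols] @ beta
        mses.append(np.mean(resid ** 2))
    return float(np.mean(mses))

mses = [mccv_mse(cols) for cols in candidates]
best = int(np.argmin(mses))
print(best, [round(m, 2) for m in mses])  # the true model (index 0) should win
```

Shao's consistency result would have the validation set dominate (training size of order $n^{3/4}$); the 50/50 split here is only for illustration.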
How does ACF & PACF identify the order of MA and AR terms?
The quotes are from the link in the OP:
Identification of an AR model is often best done with the PACF.
For an AR model, the theoretical PACF “shuts off” past the order of
the model. The phrase “shuts off” means that in theory the partial
autocorrelations are equal to $0$ beyond that point. Put another way,
the number of non-zero partial autocorrelations gives the order of the
AR model. By the “order of the model” we mean the most extreme lag of
x that is used as a predictor.
... a $k^{\text{th}}$ order autoregression, written as AR($k$), is a multiple linear regression in which the value of the series at any time t is a (linear) function of the values at times $t-1,t-2,\ldots,t-k:$
$$\begin{equation*} y_{t}=\beta_{0}+\beta_{1}y_{t-1}+\beta_{2}y_{t-2}+\cdots+\beta_{k}y_{t-k}+\epsilon_{t}. \end{equation*}$$
This equation looks like a regression model, as indicated on the linked page... So what is a possible intuition...
In Chinese whispers or the telephone game as illustrated here
the message gets distorted as it is whispered from person to person, and the sentence is completely new after passing through two people. For instance, at time $t_2$ the message, i.e. "$\color{lime}{\small\text{CC}}$'s pool", is completely different in meaning from that at $t_0,$ i.e. "CV is cool!" The "correlation" that existed with $t_1$ ("$\color{lime}{\small\text{CC}}$ is cool!") in the word "$\color{lime}{\small\text{CC}}$" is gone; there are no remaining identical words, and even the intonation ("!") has changed.
This pattern repeats itself: there is a word shared at any given two consecutive time stamps, which goes away if $t_k$ is compared to $t_{k-2}.$
However, in this process of introducing errors at each step there is a similarity that spans further than just one single step: Although Chrissy's pool is different in meaning from CC is cool!, there is no denying their phonetic similarities or the rhyming of "pool" and "cool". Therefore it wouldn't be true that the correlation stops at $t_{k-1}.$ It does decay (exponentially) but it can be traced downstream for a long time: compare $t_5$ (Missi's cruel) to $t_0$ (CV is cool!) - there are still similarities.
This explains the correlogram (ACF) in an AR($1$) process (e.g. with coefficient $0.8$):
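The exponential decay can be checked numerically (an illustrative numpy sketch; for an AR($1$) with coefficient $\phi$, the theoretical ACF at lag $k$ is $\phi^k$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) process x_t = 0.8 x_{t-1} + e_t.
phi, n = 0.8, 20_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

def acf(x, k):
    """Sample autocorrelation at lag k."""
    x = x - x.mean()
    return float(x[:-k] @ x[k:] / (x @ x))

# The sample ACF decays roughly like phi**k: ~0.8, ~0.64, ~0.51, ...
print([round(acf(x, k), 2) for k in (1, 2, 3)])
```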
Multiple, progressively offset sequences are correlated, discarding any contribution of the intermediate steps. This would be the graph of the operations involved:
In this setting the PACF is useful in showing that once the effect of $t_{k-1}$ is controlled for, older timestamps than $t_{k-1}$ do not explain any of the remaining variance: all that remains is white noise:
It is not difficult to come very close to the actual output of the R function by obtaining consecutive OLS regressions through the origin of farther lagged sequences and collecting the coefficients into a vector.
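A Python sketch of that scheme (an illustrative translation, not the exact R routine): fit AR($1$), AR($2$), ... by least squares on the demeaned series and keep the highest-lag coefficient of each fit, which is the partial autocorrelation at that lag.

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1) sample with coefficient 0.8.
phi, n = 0.8, 20_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
x = x - x.mean()

def pacf(x, max_lag):
    """PACF via successive OLS fits: the lag-k partial autocorrelation
    is the last coefficient of an AR(k) regression through the origin."""
    m = len(x)
    out = []
    for k in range(1, max_lag + 1):
        # Design matrix of lags 1..k for predicting x_t.
        X = np.column_stack([x[k - j - 1 : m - j - 1] for j in range(k)])
        beta, *_ = np.linalg.lstsq(X, x[k:], rcond=None)
        out.append(float(beta[-1]))
    return out

# For an AR(1): a large spike at lag 1 (~0.8), near zero afterwards.
print([round(v, 2) for v in pacf(x, 3)])
```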
Identification of an MA model is often best done with the ACF rather
than the PACF.
For an MA model, the theoretical PACF does not shut off, but instead
tapers toward $0$ in some manner. A clearer pattern for an MA model is
in the ACF. The ACF will have non-zero autocorrelations only at lags
involved in the model.
A moving average term in a time series model is a past error (multiplied by a coefficient).
The $q^{\text{th}}$-order moving average model, denoted by MA($q$) is
$$x_t = \mu + w_t +\theta_1w_{t-1}+\theta_2w_{t-2}+\dots + \theta_qw_{t-q}$$
with $w_t \overset{\text{iid}}{\sim} N(0, \sigma^2_w).$
It turns out that the behavior of the ACF and the PACF are flipped compared to AR processes:
In the game above, $t_{k-1}$ was enough to account for all prior errors in transmitting the message (hence the single significant bar in the PACF plot), errors which had shaped the final message one step at a time. An alternative view of that AR($1$) process is as the addition of a long series of correlated mistakes (Koyck transformation), an MA($\infty$). Likewise, with some conditions, an MA($1$) process can be inverted into an AR($\infty$) process.
$$x_t = \theta x_{t-1} - \theta^2 x_{t-2} + \theta^3 x_{t-3}-\cdots +\epsilon_t$$
The confusing part then is why the significant spikes in the ACF stop after the number of lags in MA($q$). But in an MA($1$) process the covariance is different from zero only at consecutive times $\small \text{Cov}(X_t,X_{t-1})=\theta \sigma^2,$ because only then the expansion $\small {\text{Cov}}(\epsilon_t + \theta \epsilon_{t-1}, \epsilon_{t-1} + \theta \epsilon_{t-2})=\theta \text{Cov}(\epsilon_{t-1}, \epsilon_{t-1})$ will result in a match in time stamps - all other combinations will be zero due to the iid condition.
This is the reason why the ACF plot is helpful in indicating the number of lags, as in this MA($1$) process $\epsilon_t + 0.8 \epsilon_{t-1}$, in which only one lag shows significant correlation, and the PACF shows typical oscillating values that progressively decay:
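This cutoff is easy to confirm by simulation (an illustrative numpy sketch): for an MA($1$) with $\theta=0.8$, the lag-1 autocorrelation is $\theta/(1+\theta^2)\approx 0.49$ and higher lags are essentially zero.

```python
import numpy as np

rng = np.random.default_rng(3)

# MA(1) process x_t = e_t + 0.8 e_{t-1}.
theta, n = 0.8, 20_000
e = rng.normal(size=n + 1)
x = e[1:] + theta * e[:-1]

def acf(x, k):
    """Sample autocorrelation at lag k."""
    x = x - x.mean()
    return float(x[:-k] @ x[k:] / (x @ x))

# Significant correlation only at lag 1 (theory: theta/(1+theta**2) ~ 0.49).
print([round(acf(x, k), 2) for k in (1, 2, 3)])
```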
In the game of whispers, the error at $t_2$ (pool) is "correlated" with the value at $t_3$ (Chrissy's pool); however, there is no "correlation" between $t_3$ and the error at $t_1$ (CC).
Applying a PACF to an MA process will not result in "shut offs", but rather a progressive decay: controlling for the explanatory contribution of later random variables in the process does not render more distant ones insignificant, as was the case in AR processes.
How does ACF & PACF identify the order of MA and AR terms?
Robert Nau from Duke's Fuqua School of Business gives a detailed and somewhat intuitive explanation of how ACF and PACF plots can be used to choose AR and MA orders here and here. I give a brief summary of his arguments below.
A simple explanation of why PACF identifies the AR order
The partial autocorrelations can be computed by fitting a sequence of AR models starting with the first lag only and progressively adding more lags. The coefficient of lag $k$ in an AR($k$) model gives the partial autocorrelation at lag $k$. Given this, if the partial autocorrelation "cuts off"/ceases to be significant at a certain lag (as seen in a PACF plot), this indicates that that lag does not add explanatory power to a model and therefore that the AR order should be the previous lag.
A more complete explanation which also addresses the use of ACF to identify the MA order
Time series can have AR or MA signatures:
An AR signature corresponds to a PACF plot displaying a sharp cut-off and a more slowly decaying ACF;
An MA signature corresponds to an ACF plot displaying a sharp cut-off and a PACF plot that decays more slowly.
AR signatures are often associated with positive autocorrelation at lag 1, suggesting that the series is slightly "underdifferenced" (this means that further differencing is necessary to completely eliminate autocorrelation). Since AR terms achieve partial differencing (see below), this can be fixed by adding an AR term to the model (hence the name of this signature). Therefore a PACF plot with a sharp cut-off (accompanied by a slowly decaying ACF plot with a positive first lag) can indicate the order of the AR term. Nau puts it as follows:
If the PACF of the differenced series displays a sharp cutoff and/or the lag-1 autocorrelation is positive--i.e., if the series appears slightly "underdifferenced"--then consider adding an AR term to the model. The lag at which the PACF cuts off is the indicated number of AR terms.
MA signatures, on the other hand, are commonly associated with negative first lags, suggesting that the series is "overdifferenced" (i.e. it is necessary to partially cancel out the differencing to obtain a stationary series). Since MA terms can cancel an order of differencing (see below), the ACF plot of a series with an MA signature indicates the necessary MA order:
If the ACF of the differenced series displays a sharp cutoff and/or the lag-1 autocorrelation is negative--i.e., if the series appears slightly "overdifferenced"--then consider adding an MA term to the model. The lag at which the ACF cuts off is the indicated number of MA terms.
Why AR terms achieve partial differencing and MA terms partially cancel previous differencing
Take a basic ARIMA(1,1,1) model, presented without the constant for simplicity:
$y_t = Y_t - Y_{t-1}$
$y_t = \phi y_{t-1} + e_t - \theta e_{t-1}$
Defining $B$ as the lag/backshift operator, this can be written as follows:
$y_t = (1-B)Y_t$
$y_t = \phi B y_t + e_t - \theta B e_t$
which can be further simplified to give:
$(1-\phi B) y_t = (1-\theta B) e_t$
or equivalently:
$(1-\phi B)(1-B) Y_t = (1-\theta B)e_t$.
We can see that the AR(1) term gave us the $(1-\phi B)$ term, thus partially (if $\phi \in (0,1)$) increasing the order of differencing. Moreover, if we manipulate $B$ as a numeric variable (which we can do because it is a linear operator), we can see that the MA(1) term gave us the $(1-\theta B)$ term, thus partially cancelling out the original differencing term $(1-B)$ on the left-hand side.
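Since $B$ can be manipulated like a numeric variable, the factorization can be double-checked by convolving and dividing coefficient vectors (an illustrative numpy sketch with an invented example value $\phi=\theta=0.5$; each vector lists the coefficients of $B^0, B^1, B^2, \ldots$):

```python
import numpy as np

phi = 0.5  # example AR(1) coefficient

# (1 - phi*B) * (1 - B): convolve the two coefficient vectors.
ar_side = np.convolve([1, -phi], [1, -1])
print(ar_side)  # coefficients of 1 - (1 + phi)*B + phi*B**2

# If theta == phi, dividing out the MA polynomial (1 - theta*B)
# leaves a pure first difference (1 - B): the MA term has cancelled
# one (partial) order of differencing.
theta = phi
quotient, remainder = np.polydiv(ar_side, [1, -theta])
print(quotient, remainder)
```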
How does ACF & PACF identify the order of MA and AR terms?
On a higher level, here is how to understand it. (If you need a more mathematical approach, I can gladly go after some of my notes on time series analysis)
ACF and PACF are theoretical statistical constructs just like an expected value or variance, but on different domains. The same way that Expected values come up when studying random variables, ACF and PACF come up when studying time series.
When studying random variables, there is the question of how to estimate their parameters, which is where the method of moments, MLE and other procedures and constructs come in, as well as inspecting the estimates, their standard errors and etc.
Inspecting the estimated ACF and PACF comes from the same idea: estimating the parameters of a random time series process. Get the idea?
If you think you need a more mathematically inclined answer, please let me know, and I'll try and see if I can craft something by the end of the day.
Distribution of an observation-level Mahalanobis distance
Check out Gaussian Mixture Modeling by Exploiting the Mahalanobis Distance
(alternative link). See page 13, second column. The authors also give a proof deriving the distribution, which is a scaled beta. Please let me know if this does not work for you; otherwise I could check for a hint in the S.S. Wilks book tomorrow.
Distribution of an observation-level Mahalanobis distance
There are 3 relevant distributions. As noted, if the true population parameters are used then the distribution is chi-squared with $df=p$. This is also the asymptotic distribution with estimated parameters and large sample size.
Another answer gives the correct distribution for the most common situation, with estimated parameters when the observation itself is part of the estimation set:
$$
\frac{n(d^2)}{(n-1)^2} \sim Beta\left(\frac{p}{2}, \frac{(n-p-1)}{2}\right).
$$
However, if the observation $x_i$ is independent of the parameter estimates, then the distribution is proportional to a Fisher's F-ratio distribution:
$$
\frac{n(n-p)}{p(n-1)(n+1)}\, d^2 \sim F(p,\, n-p)
$$
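The in-sample Beta result can be sanity-checked numerically (an illustrative numpy sketch): the Beta mean implies $E[d^2]=p(n-1)/n$ (which in fact holds exactly as a sample identity, since $\sum_i d_i^2 = (n-1)p$), and the scaling $n\,d^2/(n-1)^2$ must lie in $[0,1]$ for every observation.

```python
import numpy as np

rng = np.random.default_rng(4)

n, p = 50, 3
X = rng.normal(size=(n, p))

# In-sample Mahalanobis distances, using estimated mean and covariance.
mu = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))  # unbiased covariance (n-1)
diffs = X - mu
d2 = np.einsum("ij,jk,ik->i", diffs, S_inv, diffs)

# Identity: the sum of d^2 over the sample is exactly (n-1)*p, so the
# sample mean of d^2 equals the Beta-distribution mean p*(n-1)/n.
print(round(d2.mean(), 6), p * (n - 1) / n)

# The Beta scaling keeps n*d^2/(n-1)**2 inside [0, 1] for every point.
scaled = n * d2 / (n - 1) ** 2
print(bool(scaled.min() >= 0), bool(scaled.max() <= 1))
```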
Should dimensionality reduction for visualization be considered a "closed" problem, solved by t-SNE?
Definitely not.
I agree that t-SNE is an amazing algorithm that works extremely well and that was a real breakthrough at the time. However:
it does have serious shortcomings;
some of the shortcomings must be solvable;
there already are algorithms that perform noticeably better in some cases;
many of t-SNE's properties are still poorly understood.
Somebody linked to this very popular account of some shortcomings of t-SNE: https://distill.pub/2016/misread-tsne/ (+1), but it only discusses very simple toy datasets and I find that it does not correspond very well to the problems that one faces in practice when working with t-SNE and related algorithms on real-world data. For example:
t-SNE often fails to preserve global structure of the dataset;
t-SNE tends to suffer from "overcrowding" when $N$ grows above ~100k;
Barnes-Hut runtime is too slow for large $N$.
I will briefly discuss all three below.
t-SNE often fails to preserve global structure of the dataset.
Consider this single cell RNA-seq dataset from the Allen institute (mouse cortical cells): http://celltypes.brain-map.org/rnaseq/mouse. It has ~23k cells. We know a priori that this dataset has a lot of meaningful hierarchical structure, and this is confirmed by hierarchical clustering. There are neurons and non-neural cells (glia, astrocytes, etc.). Among neurons, there are excitatory and inhibitory neurons -- two very different groups. Among e.g. inhibitory neurons, there are several major groups: Pvalb-expressing, SSt-expressing, VIP-expressing. In any of these groups, there seem to be multiple further clusters. This is reflected in the hierarchical clustering tree. But here is t-SNE, taken from the link above:
Non-neural cells are in grey/brown/black. Excitatory neurons are in blue/teal/green. Inhibitory neurons are in orange/red/purple. One would want these major groups to stick together, but this does not happen: once t-SNE separates a group into several clusters, they can end up being positioned arbitrarily. The hierarchical structure of the dataset is lost.
I think this should be a solvable problem, but I am not aware of any good principled developments, despite some recent work in this direction (including my own).
t-SNE tends to suffer from "overcrowding" when $N$ grows above ~100k
t-SNE works very well on the MNIST data. But consider this (taken from this paper):
With 1 mln data points, all clusters get clumped together (the exact reason for this is not very clear) and the only known way to counterbalance this is with some dirty hacks, as shown above. I know from experience that this happens with other similarly large datasets as well.
One can arguably see this with MNIST itself (N=70k). Take a look:
On the right is t-SNE. On the left is UMAP, an exciting new method under active development that is very similar to an older largeVis. UMAP/largeVis pull clusters much further apart. The exact reason for this is IMHO unclear; I would say there is still a lot to understand here, and possibly a lot to improve.
Barnes-Hut runtime is too slow for large $N$
Vanilla t-SNE is unusable for $N$ over ~10k. The standard solution until recently was Barnes-Hut t-SNE, however for $N$ closer to ~1 mln it becomes painfully slow. This is one of the big selling points of UMAP, but actually a recent paper suggested FFT-accelerated t-SNE (FIt-SNE) that works much faster than Barnes-Hut t-SNE and is at least as fast as UMAP. I recommend that everybody use this implementation from now on.
So this might not exactly be an open problem anymore, but it used to be until very recently, and I guess there is room for further improvements in runtime. So work can certainly continue in this direction.
|
Should dimensionality reduction for visualization be considered a "closed" problem, solved by t-SNE?
|
Definitely not.
I agree that t-SNE is an amazing algorithm that works extremely well and that was a real breakthrough at the time. However:
it does have serious shortcomings;
some of the shortcomings
|
Should dimensionality reduction for visualization be considered a "closed" problem, solved by t-SNE?
Definitely not.
I agree that t-SNE is an amazing algorithm that works extremely well and that was a real breakthrough at the time. However:
it does have serious shortcomings;
some of the shortcomings must be solvable;
there already are algorithms that perform noticeably better in some cases;
many t-SNE's properties are still poorly understood.
Somebody linked to this very popular account of some shortcomings of t-SNE: https://distill.pub/2016/misread-tsne/ (+1), but it only discusses very simple toy datasets and I find that it does not correspond very well to the problems that one faces in practice when working with t-SNE and related algorithms on real-world data. For example:
t-SNE often fails to preserve global structure of the dataset;
t-SNE tends to suffer from "overcrowding" when $N$ grows above ~100k;
Barnes-Hut runtime is too slow for large $N$.
I will briefly discuss all three below.
t-SNE often fails to preserve global structure of the dataset.
Consider this single cell RNA-seq dataset from the Allen institute (mouse cortical cells): http://celltypes.brain-map.org/rnaseq/mouse. It has ~23k cells. We know a priori that this dataset has a lot of meaningful hierarchical structure, and this is confirmed by hierarchical clustering. There are neurons and non-neural cells (glia, astrocytes, etc.). Among neurons, there are excitatory and inhibitory neurons -- two very different groups. Among e.g. inhibitory neurons, there are several major groups: Pvalb-expressing, SSt-expressing, VIP-expressing. In any of these groups, there seem to be multiple further clusters. This is reflected in the hierarchical clustering tree. But here is t-SNE, taken from the link above:
Non-neural cells are in grey/brown/black. Excitatory neurons are in blue/teal/green. Inhibitory neurons are in orange/red/purple. One would want these major groups to stick together, but this does not happen: once t-SNE separates a group into several clusters, they can end up being positioned arbitrarily. The hierarchical structure of the dataset is lost.
I think this should be a solvable problem, but I am not aware of any good principled developments, despite some recent work in this direction (including my own).
t-SNE tends to suffer from "overcrowding" when $N$ grows above ~100k
t-SNE works very well on the MNIST data. But consider this (taken from this paper):
With 1 million data points, all clusters get clumped together (the exact reason for this is not very clear) and the only known way to counteract this is with some dirty hacks as shown above. I know from experience that this happens with other similarly large datasets as well.
One can arguably see this with MNIST itself (N=70k). Take a look:
On the right is t-SNE. On the left is UMAP, an exciting new method under active development that is very similar to the older LargeVis. UMAP/LargeVis pull clusters much further apart. The exact reason for this is IMHO unclear; I would say there is still a lot to understand here, and possibly a lot to improve.
Barnes-Hut runtime is too slow for large $N$
Vanilla t-SNE is unusable for $N$ over ~10k. The standard solution until recently was Barnes-Hut t-SNE, but for $N$ closer to ~1 million it becomes painfully slow. This is one of the big selling points of UMAP, but actually a recent paper suggested FFT-accelerated t-SNE (FIt-SNE) that works much faster than Barnes-Hut t-SNE and is at least as fast as UMAP. I recommend that everybody use this implementation from now on.
So this might not exactly be an open problem anymore, but it used to be until very recently, and I guess there is room for further improvements in runtime. So work can certainly continue in this direction.
Should dimensionality reduction for visualization be considered a "closed" problem, solved by t-SNE?
I would still love to hear other comments but I'll post my own answer for now, as I see it. While I was looking for a more "practical" answer, there are two theoretical "disadvantages" to t-sne which are worth mentioning; the first one is less problematic, and the second should definitely be considered:
t-sne cost function is not convex, so we are not guaranteed to reach a global optimum: Other dimensionality reduction techniques (Isomap, LLE) have a convex cost function. In t-sne this is not the case, hence there are some optimization parameters that need to be tuned effectively in order to reach a "good" solution. However, although a potential theoretical pitfall, it's worth mentioning that in practice this is hardly a downfall, since it seems that even the "local minimum" of the t-sne algorithm outperforms (creates better visualizations than) the global minimum of the other methods.
curse of intrinsic dimensionality: One important thing to keep in mind when using t-sne is that it is essentially a manifold learning algorithm. Essentially, this means t-sne (and other such methods) are designed to work in situations in which the original high dimensionality is only artificial: there is an intrinsic lower dimension to the data, i.e., the data "sits" on a lower-dimensional manifold. A nice example to have in mind is consecutive photos of the same person: while I might represent each image by its number of pixels (high dimension), the intrinsic dimensionality of the data is actually bounded by the physical transformation of the points (in this case, the 3D rotation of the head). In such cases t-sne works well. But in cases where the intrinsic dimensionality is high, or the data points sit on a highly varying manifold, t-sne is expected to perform badly, since its most basic assumption - local linearity on the manifold - is violated.
For the practical user, I think this implies two useful suggestions to bear in mind:
Before performing dimensionality reduction for visualization methods, always try to first figure out if there actually exists a lower intrinsic dimension to the data you're dealing with.
If you're not sure about 1 (and also generally), it might be useful, as the original article suggests, to "perform t-sne on a data representation obtained from a model that represents the highly varying data manifold efficiently in a number of nonlinear layers, such as an auto-encoder". So the combination of auto-encoder + t-sne can be a good solution in such cases.
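A cheap way to act on suggestion 1 is to look at the PCA spectrum of the data before running t-sne: if a handful of components capture almost all the variance, a low intrinsic dimension is plausible. A minimal sketch in Python (synthetic data with a known intrinsic dimension of 3; the sizes and noise level are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: intrinsic dimension 3, embedded in 50 ambient dimensions.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 50))

# PCA spectrum via SVD of the centered data.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = s**2 / np.sum(s**2)

# If the first few components capture almost all the variance,
# a low intrinsic dimension is plausible and t-sne is a reasonable tool.
print(np.sum(explained[:3]))   # very close to 1 here
```

For real data the cutoff is rarely this clean, but a rapidly decaying spectrum is still a useful sanity check before trusting a 2D embedding.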
Should dimensionality reduction for visualization be considered a "closed" problem, solved by t-SNE?
Here's an excellent analysis of how varying the parameters when running t-SNE affects some very simple datasets: http://distill.pub/2016/misread-tsne/. In general, t-SNE seems to do well at recognizing high-dimensional structures (including relationships more complex than clusters), though this is subject to parameter tuning, especially perplexity values.
Modelling with more variables than data points
It's certainly possible to fit good models when there are more variables than data points, but this must be done with care.
When there are more variables than data points, the problem may not have a unique solution unless it's further constrained. That is, there may be multiple (perhaps infinitely many) solutions that fit the data equally well. Such a problem is called 'ill-posed' or 'underdetermined'. For example, when there are more variables than data points, standard least squares regression has infinitely many solutions that achieve zero error on the training data.
Such a model would certainly overfit because it's 'too flexible' for the amount of training data. As model flexibility increases (e.g. more variables in a regression model) and the amount of training data shrinks, it becomes increasingly likely that the model will be able to achieve a low error by fitting random fluctuations in the training data that don't represent the true, underlying distribution. Performance will therefore be poor when the model is run on future data drawn from the same distribution.
The problems of ill-posedness and overfitting can both be addressed by imposing constraints. This can take the form of explicit constraints on the parameters, a penalty/regularization term, or a Bayesian prior. Training then becomes a tradeoff between fitting the data well and satisfying the constraints. You mentioned two examples of this strategy for regression problems: 1) LASSO constrains or penalizes the $\ell_1$ norm of the weights, which is equivalent to imposing a Laplacian prior. 2) Ridge regression constrains or penalizes the $\ell_2$ norm of the weights, which is equivalent to imposing a Gaussian prior.
Constraints can yield a unique solution, which is desirable when we want to interpret the model to learn something about the process that generated the data. They can also yield better predictive performance by limiting the model's flexibility, thereby reducing the tendency to overfit.
However, simply imposing constraints or guaranteeing that a unique solution exists doesn't imply that the resulting solution will be good. Constraints will only produce good solutions when they're actually suited to the problem.
A couple miscellaneous points:
The existence of multiple solutions isn't necessarily problematic. For example, neural nets can have many possible solutions that are distinct from each other but near equally good.
The existence of more variables than data points, the existence of multiple solutions, and overfitting often coincide. But, these are distinct concepts; each can occur without the others.
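As a concrete sketch of the ill-posedness, and of how a constraint restores uniqueness, here is a small example (synthetic Gaussian data; the shapes and penalty strength are arbitrary choices): with $p > n$, least squares admits infinitely many zero-error solutions, while a ridge penalty makes the objective strictly convex, so the solution is unique.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 50                       # more variables than data points
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# With p > n, X'X has rank at most n, so the normal equations
# are singular: infinitely many weight vectors fit y exactly.
rank = np.linalg.matrix_rank(X.T @ X)

# Two different exact fits: the minimum-norm least-squares solution,
# and the same solution shifted along a null-space direction of X.
w_min = np.linalg.pinv(X) @ y
null_dir = np.linalg.svd(X)[2][-1]  # right-singular vector with zero singular value
w_alt = w_min + 5.0 * null_dir

# Ridge regression: the l2 penalty makes (X'X + lam*I) invertible,
# so the solution is unique.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print(rank)                                                  # 10, i.e. n
print(np.allclose(X @ w_min, y), np.allclose(X @ w_alt, y))  # True True
```

Both `w_min` and `w_alt` achieve zero training error despite being different vectors, which is exactly the underdetermined situation described above; the ridge solution trades a little training error for uniqueness.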
Modelling with more variables than data points
There are many solutions to this problem: find three terms whose sum is equal to $3$: $3=7-3-1$ or $3=1234-23451+22220$, for instance. Here, the number of observations is one ($n=1$) and $p=3$.
In mathematics, a useful concept is that of overdetermined systems (and their converse, underdetermined systems). Key features from the previous wikipedia pages:
"overdetermined if there are more equations than unknowns" and "is almost always inconsistent (it has no solution) when constructed with random coefficients. However, an overdetermined system will have solutions in some cases"
"underdetermined if there are fewer equations than unknowns" and "In general, an underdetermined system of linear equations has an infinite number of solutions, if any. However, in optimization problems that are subject to linear equality constraints, only one of the solutions is relevant, namely the one giving the highest or lowest value of an objective function."
Without additional assumptions, one has difficulty finding a very meaningful solution. In practice, you may assume that you have no more than two non-zero terms (sparsity hypothesis), or you can constrain them to be positive (positivity hypothesis). In such a case, you end up with ordered triplets like $(3, 0, 0)$ or $(2, 1, 0)$, a reduced set which you can explore as potential "practical" solutions to be tested or probed. You can reduce the search space further by imposing that all variables be equal (a kind of zero-degree parametric model); then $(1, 1, 1)$ would be the solution.
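For this toy problem, the effect of the constraints can be enumerated directly; a small sketch restricted to non-negative integer triples:

```python
from itertools import product

# All non-negative integer triples summing to 3 (positivity constraint).
solutions = [t for t in product(range(4), repeat=3) if sum(t) == 3]

# Sparsity constraint: at most two non-zero terms.
sparse = [t for t in solutions if sum(x != 0 for x in t) <= 2]

# Equality constraint: all variables equal (zero-degree parametric model).
equal = [t for t in solutions if len(set(t)) == 1]

print(len(solutions), len(sparse), equal)   # 10 9 [(1, 1, 1)]
```

Each added constraint shrinks the solution set, and the equality constraint pins down a single solution, mirroring how penalties select among the infinitely many fits in the continuous case.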
This is what penalized regression (like lasso or ridge) is meant for: find a manageable subset of "simpler" solutions, potentially more natural to some extent. Such methods use the law of parsimony, or Ockham's razor: if two models explain the observations with the same precision, it can be wisest to choose the more compact one in terms of, for instance, the number of free parameters. One does not really "explain" useful relationships between variables with overly involved models.
A quote attributed to John von Neumann illustrates this context:
With four parameters I can fit an elephant, and with five I can make
him wiggle his trunk.
Why does the variance of a sample change if the observations are duplicated?
If you define variance as $s^2_{n}=$$\,\text{MSE}\,$$=\frac1n \sum_{i=1}^n (x_i-\bar{x})^2$ -- similar to population variance but with sample mean for $\mu$, then both your samples would have the same variance.
So the difference is purely because of Bessel's correction in the usual formula for the sample variance, $s^2_{n-1}=\frac{n}{n-1}\cdot \text{MSE}=\frac{n}{n-1}\cdot \frac1n \sum_{i=1}^n (x_i-\bar{x})^2=\frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2$, which adjusts for the fact that the sample mean is closer to the data than the population mean is, in order to make the estimator unbiased (taking the right value "on average").
The effect gradually goes away with increasing sample size, as $\frac{n-1}{n}$ goes to 1 as $n\to\infty$.
There's no particular reason you have to use the unbiased estimator for variance, by the way -- $s^2_n$ is a perfectly valid estimator, and in some cases may arguably have advantages over the more common form (unbiasedness isn't necessarily that big a deal).
Variance itself isn't directly a measure of spread. If I double all the values in my data set, I contend they're twice as "spread". But variance increases by a factor of 4. So more usually, it is said that standard deviation, rather than variance is a measure of spread.
Of course, the same issue occurs with standard deviation (the usual $s_{n-1}$ version) as with variance -- when you double up the points the standard deviation changes, for the same reason as happens with the variance.
In small samples the Bessel correction makes standard deviation somewhat less intuitive as a measure of spread because of that effect (that duplicating the sample changes the value). But many measures of spread do retain the same value when duplicating the sample; I'll mention a few --
$s_n$ (of course)
the mean (absolute) deviation from the mean
the median (absolute) deviation from the median
the interquartile range (at least for some definitions of sample quartiles)
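A quick numerical check of these claims (the sample values here are arbitrary):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 6.0])
x2 = np.tile(x, 2)   # every observation duplicated

# Bessel-corrected sample variance changes when the sample is doubled:
# 8/3 vs 16/7.
print(np.var(x, ddof=1), np.var(x2, ddof=1))

# The MSE version (divide by n) is identical for both samples.
print(np.var(x), np.var(x2))               # 2.0 2.0

# So is the mean absolute deviation from the mean.
print(np.mean(np.abs(x - x.mean())), np.mean(np.abs(x2 - x2.mean())))  # 1.0 1.0
```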
Why does the variance of a sample change if the observations are duplicated?
As some sort of mnemonic, $V\,X = E\,V\,X + V\,E\,X$. So the expected value of a sample's variance is too low, with the difference being the variance of the sample's mean.
The usual sample variance formula compensates for that, and the variance of the sample's mean scales inversely with sample size.
As an extreme example, a sample consisting of a single observation will always show a sample variance of 0, obviously not indicating a variance of 0 for the underlying distribution.
Now for samples of 2 and 4 evenly weighted observations, the corrective factors $\frac{n}{n-1}$ are $2/1$ and $4/3$, respectively. So your calculated expected variances differ by a factor of $2/3$. The variance of the sample itself is $1$ in either case. But the first case presents a weaker case for $4$ being the mean of the base distribution, and every other mean would imply a larger variance.
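The $2/3$ factor can be verified directly with a hypothetical two-point sample chosen so that its MSE is $1$:

```python
import numpy as np

a = np.array([3.0, 5.0])            # n = 2, MSE = 1
b = np.tile(a, 2)                   # same points duplicated, n = 4

va = np.var(a, ddof=1)              # (2/1) * 1 = 2
vb = np.var(b, ddof=1)              # (4/3) * 1 = 4/3
print(va, vb, vb / va)              # 2.0 1.333... 0.666...
```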
The reference book for statistics with R – does it exist and what should it contain?
I personally thought that Modern Applied Statistics with S-Plus ticks all of the boxes you have outlined. Every example has R code, they give good references to other sources, and Venables and Ripley have a wonderfully terse and explanatory writing style which I really appreciated. I tend to re-read the book every so often, and each time I get more from it. Of course, your mileage may vary.
The reference book for statistics with R – does it exist and what should it contain?
I don't think a book like this exists. The book that I think comes closest is Gelman and Hill's Data Analysis Using Regression and Multilevel/Hierarchical Models.
Cons:
It's ~5 years old and aimed at social scientists.
It does not have everything on your TOC list (nothing spatial, basically nothing on time series, etc.)
Pros:
Well-written
It's got a list of errata and a TOC at the link
It covers key things like missing data, which is not on your numbered list.
It does hit most items on your bullet list.
Lots of graphs and R code (some Bugs code for the multi-level).
All the data/code is available for downloading.
The reference book for statistics with R – does it exist and what should it contain?
Thanks for such a good question, and especially compiling all of that information. Unfortunately, the book you're describing doesn't exist, and to be honest, it couldn't possibly exist. If what you primarily want is a reference book for statistics, I would start with a really good book on linear models. My recommendation is Kutner et al, it meets the criteria of being greater than a brick in both volume and mass, is very comprehensive, clear, and with lots of examples. In fact, if you eliminate the R requirement, it pretty much ticks off your whole list. I refer back to it often. However, in ~1500 pages, it pretty much only covers linear models--i.e., regression, and ANOVA--there are some brief chapters on a couple of other topics, but you'll really want other books for that. Next, I would get a top-notch statistical reference book, at the level appropriate for you, for whatever other techniques you may need to work with (e.g., survival analysis, spatial analysis, etc.). If those books don't use R for their examples, you may want to get an R specific book, like one of the use-R! books, but between the documentation, the vignettes, the R-help mailing lists, StackOverflow, and CV, you may not need to. If you want to learn to program in R the right way, you should get one of those books, too. At this point, you have at least 4 books. I'm sorry, but that's the way it is. No one who works extensively with statistics has just one book that covers everything.
|
|
11,133
|
The reference book for statistics with R – does it exist and what should it contain?
|
I am working my way through Elements of Statistical Learning. This book covers an incredible range of techniques (hence its 700+ pages), but each approach is explained clearly in a very practical, rather than highly theoretical, way. It doesn't explicitly contain anything about R; however, the plots and graphs are all clearly made with R, and there are packages on CRAN for all the topics discussed. The authors have all been involved with the development of R (as well as a fair chunk of modern machine-learning techniques).
|
|
11,134
|
The reference book for statistics with R – does it exist and what should it contain?
|
I agreed with the currently top-voted answer that MASS4 was a pretty good fit for the request, and I had the same experience as another respondent: it demands a fairly high level of statistical sophistication. MASS3 was in fact my first "R book", and it served me fairly well in that capacity. I did buy Crawley's "The R Book" and found it unsatisfactory, both for an inaccurate description of the R language and for being little more than a set of worked examples that seemed to lack depth of statistical theory.
However, with the passage of time, I have found Harrell's "Regression Modeling Strategies" (RMS) a better fit for the "biostatistical" focus of this question, as well as having good depth. It's not an introductory text on R; for that, one needs to look elsewhere, and I recommend either Introduction to Scientific Programming and Simulation Using R [http://www.crcpress.com/product/isbn/9781420068726] or (despite its name) "R for Dummies", written by a couple of long-time contributors to StackOverflow's R posting tags. I only have RMS in its first edition, when it was more focused on S, but since that time Harrell has switched over to R and fully supports the rms/Hmisc R package duo. I believe it satisfies @gung's suggestion for specialty coverage in several of the listed domains, although not for spatial analysis or mixed models.
|
|
11,135
|
The reference book for statistics with R – does it exist and what should it contain?
|
If you want to translate... (this is a companion book to a 4,900-page theoretical book):
Big R Book
This book (of which I am a co-author) is a compilation of 15 years of consulting experience and of teaching at the undergraduate and graduate level. It shows only R examples; the mathematical details (proofs) are given in my 4,900-page companion books, where the calculations are also made by hand with numerical values (over 500 additional pages will be available in the next edition). The book also makes it possible to check that the software gives the right values, which is much more fun than making the calculations by hand or in MS Excel for subjects that are normally taught in graduate courses in European schools. Another purpose of this book is to show that you can use one piece of free software instead of many for the same results (instead of using JMP + Minitab + SPSS + SAS + MATLAB together). It also shows the weaknesses of R (package maintenance is not guaranteed), and it is a compendium of highly valuable questions from various R forums and blogs. It is free and in color!
|
|
11,136
|
Are machine learning techniques "approximation algorithms"?
|
I think you're mixing multiple important concepts. Let me try to clarify a couple of things:
There are metaheuristic methods, which iteratively try to improve a candidate solution. Examples are tabu search, simulated annealing, genetic algorithms, etc. Observe that while there are many cases where these methods work nicely, there isn't any deep understanding of when they work and when they don't. And, more importantly, when they don't get to the solution, we can be arbitrarily far from it. Problems solved by metaheuristic methods tend to be discrete in nature, because there are far better tools to handle continuous problems. But every now and then you see metaheuristics for continuous problems, too.
There are numerical optimization methods; people in this community carefully examine the nature of the function to be optimized and the constraints on the solution (classifying problems into groups like convex optimization, quadratic programming, linear programming, etc.) and apply algorithms that have been shown to work for that type of function and those types of constraints. When people in this area say "shown to work", they mean a proof. These methods work on continuous problems, and when your problem falls into this category, this is definitely the tool to use.
There are discrete optimization methods, which tend to be connected to algorithms for well-studied discrete problems such as shortest paths, max flow, etc. People in this area also care that their algorithms really work (proofs). A subset of this group studies really hard problems for which no fast algorithm is expected to exist. They then study approximation algorithms: fast algorithms for which they can show that the returned solution is within a constant factor of the true optimum. These are called "approximation algorithms", and these people also present their results as proofs.
So... to answer your question: I do not think that metaheuristics are approximation algorithms. This doesn't seem to me like a matter of opinion; it is just fact.
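To make the contrast concrete, here is a small sketch of my own (not part of the original answer): the classic maximal-matching 2-approximation for minimum vertex cover, one of the simplest approximation algorithms with a proven guarantee. The function name and the example graph are mine.

```r
# Classic 2-approximation for minimum vertex cover: greedily build a
# maximal matching and take both endpoints of every matched edge.
# The cover is provably at most twice the optimum, because any cover
# must contain at least one endpoint of each matched edge.
vertex_cover_2approx <- function(edges) {
  # 'edges' is a 2-column matrix; each row is an edge (u, v)
  cover <- numeric(0)
  for (i in seq_len(nrow(edges))) {
    u <- edges[i, 1]; v <- edges[i, 2]
    if (!(u %in% cover) && !(v %in% cover)) {
      cover <- c(cover, u, v)   # edge not yet covered: take both ends
    }
  }
  cover
}

# A 5-cycle: the optimal cover has 3 vertices; the approximation
# returns 4 here, within the proven factor-2 bound.
edges <- rbind(c(1, 2), c(2, 3), c(3, 4), c(4, 5), c(5, 1))
cover <- vertex_cover_2approx(edges)
```

Note the difference from a metaheuristic: the bound holds for every input, not just the ones we happened to try.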
|
|
11,137
|
Are machine learning techniques "approximation algorithms"?
|
Machine learning often deals with optimization of a function which has many local minima. Feedforward neural networks with hidden units are a good example. Whether these functions are discrete or continuous, there is no method which achieves a global minimum and stops. It is easy to prove that there is no general algorithm to find the global minimum of a continuous function, even if it is one-dimensional and smooth (has infinitely many derivatives). In practice, all algorithms for learning neural networks get stuck in a local minimum. It is easy to check this: create a random neural network, make a big set of its responses to random inputs, then try to train another neural network with the same architecture to copy the responses. Although the perfect solution exists, neither backpropagation nor any other learning algorithm will be able to discover it starting from a random set of weights.
Some learning methods, like simulated annealing or genetic algorithms, explore many local minima. For continuous functions there are methods like gradient descent, which find the closest local minimum. They are much faster, which is why they are widely used in practice. Given enough time, the former group of methods outperforms the latter in terms of training-set error, but with reasonable time constraints, for real-world problems, the latter group is usually better.
For some models, like logistic regression, there is one local minimum, the function is convex, the minimization converges to the minimum, but the models themselves are simplistic.
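As a quick sketch of my own illustrating the convex case (the toy data here are made up): because the logistic-regression negative log-likelihood is convex, the fit does not depend on where the optimizer starts.

```r
# Logistic regression has a convex negative log-likelihood, so the
# optimizer reaches the same solution from different starting values.
x <- 1:20
y <- c(0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1)
m1 <- glm(y ~ x, family = binomial, start = c(0, 0))
m2 <- glm(y ~ x, family = binomial, start = c(1, -0.1))
max(abs(coef(m1) - coef(m2)))  # essentially zero: the same optimum
```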
That's the bitter truth.
Note also that proof of convergence and proof of convergence to the best solution are two different things. K-means algorithm is an example of this.
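To illustrate the k-means point, here is a small deterministic sketch of my own (the points and initializations are contrived): Lloyd's algorithm converges from both starting configurations, but only one of them finds the optimum.

```r
# k-means (Lloyd's algorithm) always converges, but not necessarily to
# the best solution. Four points at the corners of a wide rectangle:
# the optimal 2-clustering pairs left with left and right with right
# (total within-SS = 1), but a bad initialization converges to the
# top/bottom split (total within-SS = 100), which is also a fixed point.
pts <- rbind(c(0, 0), c(0, 1), c(10, 0), c(10, 1))
good <- kmeans(pts, centers = rbind(c(0, 0.5), c(10, 0.5)),
               algorithm = "Lloyd")
bad  <- kmeans(pts, centers = rbind(c(5, 1), c(5, 0)),
               algorithm = "Lloyd")
c(good$tot.withinss, bad$tot.withinss)  # 1 and 100
```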
Finally, for some models we don't know how to learn at all. For example, if the output is an arbitrary computable function of the inputs, we don't know good algorithms which, in reasonable time, find a Turing machine (or equivalent) implementing this function. For instance, if f(1)=2, f(2)=3, f(3)=5, f(4)=7, ..., f(10)=29 (the first ten primes), we don't know any learning algorithm that would be able to predict, in reasonable time, that f(11)=31, unless it already knows the concept of prime numbers.
|
|
11,138
|
What is the difference between learning and inference?
|
I agree with Neil G's answer, but perhaps this alternative phrasing also helps:
Consider the setting of a simple Gaussian mixture model. Here we can think of the model parameters as the set of Gaussian components of the mixture model (each of their means and variances, and each one's weight in the mixture).
Given a set of model parameters, inference is the problem of identifying which component was likely to have generated a single given example, usually in the form of a "responsibility" for each component. Here, the latent variable is just the identifier of the component that generated the given vector, and we are inferring which component it was likely to have been. (In this case, inference is simple, though in more complex models it becomes quite complicated.)
Learning is the process of, given a set of samples from the model, identifying the model parameters (or a distribution over model parameters) that best fit the data given: choosing the Gaussians' means, variances, and weightings.
The Expectation-Maximization learning algorithm can be thought of as performing inference for the training set, then learning the best parameters given that inference, then repeating. Inference is often used in the learning process in this way, but it is also of independent interest, e.g. to choose which component generated a given data point in a Gaussian mixture model, to decide on the most likely hidden state in a hidden Markov model, to impute missing values in a more general graphical model, ....
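The E-step/M-step split described above can be sketched in a few lines of R (a toy of my own, not from the original answer): the E-step is the inference part, and the M-step is the learning part.

```r
# One EM iteration for a two-component 1-D Gaussian mixture.
x <- c(-2.1, -1.9, -2.0, 1.9, 2.0, 2.2)         # toy data
w <- c(0.5, 0.5); mu <- c(-1, 1); s <- c(1, 1)  # current parameters

# E-step (inference): responsibility r[i, k] = P(component k | x[i])
dens <- cbind(w[1] * dnorm(x, mu[1], s[1]),
              w[2] * dnorm(x, mu[2], s[2]))
r <- dens / rowSums(dens)

# M-step (learning): re-estimate parameters from the responsibilities
w_new  <- colMeans(r)                  # mixing weights
mu_new <- colSums(r * x) / colSums(r)  # component means
```

Iterating these two steps until convergence is exactly the EM algorithm.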
|
|
11,139
|
What is the difference between learning and inference?
|
Inference is choosing a configuration based on a single input. Learning is choosing parameters based on some training examples.
In the energy-based model framework (a way of looking at nearly all machine learning architectures), inference chooses a configuration to minimize an energy function while holding the parameters fixed; learning chooses the parameters to minimize the loss function.
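A toy numerical sketch of this split (entirely my own construction; the quadratic energy is just for illustration):

```r
# Toy energy-based view: E(y; x, w) = (y - w * x)^2.
energy <- function(y, x, w) (y - w * x)^2

# Inference: hold the parameter w fixed and choose the configuration y
# minimizing the energy for one input x (found numerically here, though
# the minimizer is obviously y = w * x).
infer <- function(x, w) optimize(energy, c(-100, 100), x = x, w = w)$minimum

# Learning: choose w to minimize a loss over training pairs (x_i, y_i).
xs <- c(1, 2, 3); ys <- c(2.1, 3.9, 6.2)
loss <- function(w) sum((ys - w * xs)^2)
w_hat <- optimize(loss, c(-10, 10))$minimum  # least-squares slope
```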
As conjugateprior points out, other people use different terminology for the same thing. For example, Bishop uses "inference" and "decision" to mean learning and inference, respectively. Causal inference means learning. But whichever terms you decide on, these two concepts are distinct.
The neurological analogy is a pattern of firing neurons is a configuration; a set of link strengths are the parameters.
|
|
11,140
|
What is the difference between learning and inference?
|
This looks like classic cross-discipline lingo confusion. The OP seems to be using neuroscience-like terminology, where the two terms in question may have different connotations. But since Cross Validated generally deals with statistics and machine learning, I'll try to answer the question based on the common usage of these terms in those fields.
In classical statistics, inference is simply the act of taking what you know about a sample and making a mathematical statement about the population from which it is (hopefully) representative. From the canonical textbook of Casella & Berger (2002): "The subject of probability theory is the foundation upon which all of statistics is built ... through these models, statisticians are able to draw inferences about populations, inferences based on examination of only a part of the whole". So in statistics, inference is specifically related to p-values, test statistics, and sampling distributions, etc.
As for learning, I think this table from Wasserman's All of Statistics (2003), which maps statistics terminology to its machine-learning counterparts, might be helpful:
|
|
11,141
|
What is the difference between learning and inference?
|
It is strange that no one else has mentioned this, but you can have inference only in cases where you have a probability distribution. Here I quote Wikipedia, which quotes the Oxford dictionary:
Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution (Oxford Dictionary of Statistics)
https://en.wikipedia.org/wiki/Statistical_inference
In the case of traditional neural networks, k-NN, or vanilla SVMs, you have no probability density to estimate, nor any assumptions about a density; thus, there is no statistical inference there, only training/learning. However, for most (all?) statistical procedures, you can use both inference AND learning, since these procedures make some assumptions about the distribution of the population in question.
|
|
11,142
|
Why do the estimated values from a Best Linear Unbiased Predictor (BLUP) differ from a Best Linear Unbiased Estimator (BLUE)?
|
The values that you get from BLUPs aren't estimated in the same way as the BLUE estimates of fixed effects; by convention BLUPs are referred to as predictions. When you fit a mixed effects model, what are estimated initially are the mean and variance (and possibly the covariance) of the random effects. The random effect for a given study unit (say a student) is subsequently calculated from the estimated mean and variance, and the data. In a simple linear model, the mean is estimated (as is the residual variance), but the observed scores are considered to be composed of both that and the error, which is a random variable. In a mixed effects model, the effect for a given unit is likewise a random variable (although in some sense it has already been realized).
You can also treat such units as fixed effects, if you like. In that case, the parameters for that unit are estimated as usual. In such a case however, the mean (for example) of the population from which the units were drawn is not estimated.
Moreover, the assumption behind random effects is that they were sampled at random from some population, and it is the population that you care about. The assumption underlying fixed effects is that you selected those units purposefully because those are the only units you care about.
If you turn around and fit a mixed effects model and predict those same effects, they tend to be 'shrunk' towards the population mean relative to their fixed effects estimates. You can think of this as analogous to a Bayesian analysis where the estimated mean and variance specify a normal prior and the BLUP is like the mean of the posterior that comes from optimally combining the data with the prior.
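For a random-intercept model with known variance components, this shrinkage has a closed form: the BLUP of group $j$ is the grand mean plus $\lambda_j$ times that group's deviation from it, with $\lambda_j = \sigma^2_g/(\sigma^2_g + \sigma^2_e/n_j)$. Here is a minimal Python sketch of that formula (an illustration only; lmer estimates the variances and the mean jointly rather than taking them as known):

```python
import numpy as np

def blup_means(y, groups, sigma_g2, sigma_e2):
    """BLUPs of group means for a random-intercept model,
    assuming the variance components are known.

    Each group mean is shrunk toward the grand mean by
    lambda_j = sigma_g2 / (sigma_g2 + sigma_e2 / n_j).
    """
    mu = y.mean()  # crude stand-in for the estimated population mean
    blups = {}
    for g in np.unique(groups):
        yj = y[groups == g]
        lam = sigma_g2 / (sigma_g2 + sigma_e2 / len(yj))
        blups[g] = mu + lam * (yj.mean() - mu)
    return blups
```

When `sigma_g2` is large relative to `sigma_e2`, or when there are many observations per group, `lam` approaches 1 and the BLUPs approach the raw per-group means; this is the pattern the R demos below display.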
The amount of shrinkage varies based on several factors. An important determinant of how far the random effects predictions will be from the fixed effects estimates is the ratio of the variance of the random effects to the error variance. Here is a quick R demo for the simplest case with 5 'level 2' units with only means (intercepts) fit. (You can think of this as test scores for students within classes.)
library(lme4) # we'll need to use this package
set.seed(1673) # this makes the example exactly reproducible
nj = 5; ni = 5; g = as.factor(rep(c(1:nj), each=ni))
##### model 1
pop.mean = 16; sigma.g = 1; sigma.e = 5
r.eff1 = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff1, each=ni) + error
re.mod1 = lmer(y~(1|g))
fe.mod1 = lm(y~0+g)
df1 = data.frame(fe1=coef(fe.mod1), re1=coef(re.mod1)$g)
##### model 2
pop.mean = 16; sigma.g = 5; sigma.e = 5
r.eff2 = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff2, each=ni) + error
re.mod2 = lmer(y~(1|g))
fe.mod2 = lm(y~0+g)
df2 = data.frame(fe2=coef(fe.mod2), re2=coef(re.mod2)$g)
##### model 3
pop.mean = 16; sigma.g = 5; sigma.e = 1
r.eff3 = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff3, each=ni) + error
re.mod3 = lmer(y~(1|g))
fe.mod3 = lm(y~0+g)
df3 = data.frame(fe3=coef(fe.mod3), re3=coef(re.mod3)$g)
So the ratios of the variance of the random effects to the error variance are 1/5 for model 1, 5/5 for model 2, and 5/1 for model 3. Note that I used level-means coding for the fixed effects models. We can now examine how the estimated fixed effects and the predicted random effects compare for these three scenarios.
df1
# fe1 re1
# g1 17.88528 15.9897
# g2 18.38737 15.9897
# g3 14.85108 15.9897
# g4 14.92801 15.9897
# g5 13.89675 15.9897
df2
# fe2 re2
# g1 10.979130 11.32997
# g2 13.002723 13.14321
# g3 26.118189 24.89537
# g4 12.109896 12.34319
# g5 9.561495 10.05969
df3
# fe3 re3
# g1 13.08629 13.19965
# g2 16.36932 16.31164
# g3 17.60149 17.47962
# g4 15.51098 15.49802
# g5 13.74309 13.82224
Random effects predictions also end up closer to the fixed effects estimates when you have more data. We can compare model 1 from above, with its low ratio of random effects variance to error variance, to a version (model 1b) with the same ratio, but much more data (notice that ni = 500 instead of ni = 5).
##### model 1b
nj = 5; ni = 500; g = as.factor(rep(c(1:nj), each=ni))
pop.mean = 16; sigma.g = 1; sigma.e = 5
r.eff1b = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff1b, each=ni) + error
re.mod1b = lmer(y~(1|g))
fe.mod1b = lm(y~0+g)
df1b = data.frame(fe1b=coef(fe.mod1b), re1b=coef(re.mod1b)$g)
Here are the effects:
df1
# fe1 re1
# g1 17.88528 15.9897
# g2 18.38737 15.9897
# g3 14.85108 15.9897
# g4 14.92801 15.9897
# g5 13.89675 15.9897
df1b
# fe1b re1b
# g1 15.29064 15.29543
# g2 14.05557 14.08403
# g3 13.97053 14.00061
# g4 16.94697 16.92004
# g5 17.44085 17.40445
On a somewhat related note, Doug Bates (the author of the R package lme4) doesn't like the term "BLUP" and uses "conditional mode" instead (see pp. 22-23 of his draft lme4 book pdf). In particular, he points out in section 1.6 that "BLUP" can only meaningfully be used for linear mixed-effects models.
|
11,143
|
Why log-transforming the data before performing principal component analysis?
|
The iris data set is a fine example to learn PCA. That said, the first four columns describing length and width of sepals and petals are not an example of strongly skewed data. Therefore log-transforming the data does not change the results much, since the resulting rotation of the principal components is largely unchanged by the transformation.
In other situations log-transformation is a good choice.
We perform PCA to get insight into the general structure of a data set. We center, scale, and sometimes log-transform to filter out some trivial effects which could dominate our PCA. The algorithm of a PCA will in turn find the rotation of each PC to minimize the squared residuals, namely the sum of squared perpendicular distances from any sample to the PCs. Large values tend to have high leverage.
Imagine injecting two new samples into the iris data. A flower with 430 cm petal length and one with petal length of 0.0043 cm. Both flowers are very abnormal being 100 times larger and 1000 times smaller respectively than average examples. The leverage of the first flower is huge, such that the first PCs mostly will describe the differences between the large flower and any other flower. Clustering of species is not possible due to that one outlier. If the data are log-transformed, the absolute value now describes the relative variation. Now the small flower is the most abnormal one. Nonetheless it is possible to both contain all samples in one image and provide a fair clustering of the species. Check out this example:
data(iris) #get data
#add two new observations from two new species to iris data
levels(iris[,5]) = c(levels(iris[,5]),"setosa_gigantica","virginica_brevis")
iris[151,] = list(6,3, 430 ,1.5,"setosa_gigantica") # a big flower
iris[152,] = list(6,3,.0043,1.5 ,"virginica_brevis") # a small flower
#Plotting scores of PC1 and PC2 without log transformation
plot(prcomp(iris[,-5],cen=T,sca=T)$x[,1:2],col=iris$Spec)
#Plotting scores of PC1 and PC2 with log transformation
plot(prcomp(log(iris[,-5]),cen=T,sca=T)$x[,1:2],col=iris$Spec)
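The leverage argument can also be checked numerically without any PCA machinery: after centering and scaling, the giant flower dominates the raw scale while the tiny one is nearly invisible, whereas on the log scale deviations are relative and the tiny flower becomes the most abnormal. A small Python sketch (the petal lengths are synthetic; the 100x and 1/1000x factors follow the scenario above):

```python
import numpy as np

rng = np.random.default_rng(42)
# 50 ordinary flowers with petal lengths around 10 cm, plus the two
# abnormal flowers from the scenario: one 100x larger, one ~1000x smaller
petal = 10 * rng.lognormal(0.0, 0.2, 50)
petal = np.append(petal, [10 * 100, 10 / 1000])

def z(x):
    """Standardized scores, i.e. what PCA sees after centering and scaling."""
    return (x - x.mean()) / x.std()

z_raw, z_log = z(petal), z(np.log(petal))
# Raw scale: the giant flower has enormous leverage, the tiny one almost none.
# Log scale: deviations are relative, so the tiny flower is the most abnormal.
print(abs(z_raw[-2]), abs(z_raw[-1]))
print(abs(z_log[-2]), abs(z_log[-1]))
```

On the raw scale the giant flower's standardized score dwarfs everything (and inflates the standard deviation so much that the tiny flower looks ordinary); after the log transform both abnormal flowers are visible, with the tiny one now the most extreme, mirroring the iris demo.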
|
11,144
|
Why log-transforming the data before performing principal component analysis?
|
Well, the other answer gives an example, when the log-transform is used to reduce the influence of extreme values or outliers.
Another general argument occurs when you try to analyze data which are multiplicatively composed instead of additively - PCA and FA model, by their math, such additive compositions. Multiplicative compositions occur in the simplest case in physical data like the surface and the volume of bodies, (functionally) dependent on (for instance) the three parameters length, width, and depth. One can reproduce the compositions of a historic example of the early PCA; I think it is called "Thurstone's Ball- (or 'Cubes'-) problem" or the like. Once I played with the data of that example and found that the log-transformed data gave a much nicer and clearer model for the composition of the measured volume and surface data with the three one-dimensional measures.
Besides such simple examples, if we consider interactions in social research data, then we usually think of them as multiplicatively composed measurements of more elementary items. So if we look specifically at interactions, a log-transform might be a specially helpful tool to get a mathematical model for the de-composition.
|
11,145
|
Difference between Factorization machines and Matrix Factorization?
|
Matrix factorization is a method to, well, factorize matrices. It does one job of decomposing a matrix into two matrices such that their product closely matches the original matrix.
But Factorization Machines are quite general in nature compared to Matrix Factorization. The problem formulation itself is very different. It is formulated as a linear model, with interactions between features as additional parameters. This feature interaction is done in their latent space representation instead of their plain format. So along with the feature interactions as in Matrix Factorization, it also takes the linear weights of the different features.
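As a concrete illustration of that formulation, here is a minimal sketch of the FM scoring function, $\hat{y}(x) = w_0 + \sum_i w_i x_i + \sum_{i<j} \langle v_i, v_j\rangle x_i x_j$ (a naive $O(n^2)$ loop for clarity, not Rendle's optimized $O(kn)$ form; all parameter values a caller passes in would normally be learned):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Factorization Machine score for one feature vector x.

    w0: global bias; w: per-feature linear weights (length n);
    V:  n x k matrix of latent vectors, one row per feature.
    Pairwise interactions are weighted by dot products of latent
    vectors instead of free per-pair parameters.
    """
    score = w0 + w @ x
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            score += (V[i] @ V[j]) * x[i] * x[j]
    return score
```

Because a pair's weight is $\langle v_i, v_j\rangle$ rather than its own free parameter, an interaction can be estimated even for feature pairs never observed together in training.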
So compared to Matrix Factorization, here are key differences:
In recommender systems, where Matrix Factorization is generally used, we cannot use side-features. For a movie recommendation system, we cannot use the movie genres, its language etc in Matrix Factorization. The factorization itself has to learn these from the existing interactions. But we can pass this info in Factorization Machines.
Factorization Machines can also be used for other prediction tasks such as Regression and Binary Classification. This is usually not the case with Matrix Factorization
The paper shared in previous answer is the original paper that talks about FMs. It has a great illustrative example too as to what FM exactly is.
Edit: A note on side features that can be used in Factorization Machines but not Matrix factorization:
Matrix Factorization is solely a collaborative filtering approach which needs user engagement on the items. So it doesn't work for what is called "cold start" problems. Think of a new movie released on Netflix. As no one would have watched it, matrix factorization doesn't work for it. But as Netflix would know the genre, actors, director etc, Factorization Machine can kick-start the recommendations for this movie from day 1 itself, which is a crucial component for many websites that use recommendation systems.
|
11,146
|
Difference between Factorization machines and Matrix Factorization?
|
Just some extension to Dileep's answer.
If the only features involved are two categorical variables (e.g., users and items), then the (nature of the interaction terms of) FM is equivalent to the matrix factorization model. But FM can be easily applied to more than two real-valued features.
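This equivalence is easy to verify numerically: encode a (user, item) pair as the concatenation of two one-hot vectors, stack the user and item latent vectors into one FM latent matrix, and the only pairwise term that survives the one-hot masking is exactly the MF dot product. A sketch (considering only the pairwise-interaction part of the FM equation, with arbitrary latent vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 3, 4, 2
P = rng.normal(size=(n_users, k))   # MF user latent vectors
Q = rng.normal(size=(n_items, k))   # MF item latent vectors
V = np.vstack([P, Q])               # FM latent matrix: one row per feature

def fm_interaction(u, i):
    """Pairwise FM term for the one-hot (user u, item i) feature vector."""
    x = np.zeros(n_users + n_items)
    x[u] = 1.0
    x[n_users + i] = 1.0
    total = 0.0
    for a in range(len(x)):
        for b in range(a + 1, len(x)):
            total += (V[a] @ V[b]) * x[a] * x[b]
    return total
# Only the (user, item) pair has both features active, so this
# equals the matrix-factorization prediction P[u] . Q[i].
```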
|
11,147
|
Difference between Factorization machines and Matrix Factorization?
|
Matrix factorization is a different factorization model.
From the article about FM:
There are many different factorization models like matrix factorization, parallel factor analysis or specialized models like SVD++, PITF or FPMC.
The drawback of these models is that they are not applicable for general prediction tasks, but work only with special input data. Furthermore their model equations and optimization algorithms are derived individually for each task. We show that FMs can mimic these models just by specifying the input data (i.e. the feature vectors). This makes FMs easily applicable even for users without expert knowledge in factorization models.
From libfm.org:
"Factorization machines (FM) are a generic approach that allows to mimic most factorization models by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables of large domain."
|
11,148
|
Difference between Factorization machines and Matrix Factorization?
|
Let me take a forced march through a simple item-user example below, where there are only two categorical variables for the user and item scenario, and hence both matrix factorization and factorization machines work (inspired by @dontloo's answer).
Let's say we have two users, $u_1$ and $u_2$, and two items, $i_1$ and $i_2$. We initialize two vectors of very low dimension 2 for the two users ($u_1$ and $u_2$ respectively): [$v_{11}$, $v_{12}$], [$v_{21}$, $v_{22}$], and two low-dimension vectors for the two items ($i_1$ and $i_2$ respectively): [$w_{11}$, $w_{12}$], [$w_{21}$, $w_{22}$].
And our observations can be represented as the following matrix:
The three values, namely 2, 4, and 1, are ratings of $u_1$ for $i_1$, $u_1$ for $i_2$, and $u_2$ for $i_2$.
To train a model and predict using matrix factorization, we do the following steps:
preparing training data using the three cases:
a. multiply the vector of $u_1$ and $i_1$ for the label rating 2: [$v_{11}$, $v_{12}$] $\cdot$ [$w_{11}$, $w_{12}$] = 2
b. multiply the vector of $u_1$ and $i_2$ for the label rating 4: [$v_{11}$, $v_{12}$] $\cdot$ [$w_{21}$, $w_{22}$] = 4
c. multiply the vector of $u_2$ and $i_2$ for the label rating 1: [$v_{21}$, $v_{22}$] $\cdot$ [$w_{21}$, $w_{22}$] = 1
use Stochastic Gradient Descent(SGD) or Weighted Alternating Least Squares(WALS) (with some regularization methods) to get the vectors for $u_1$, $u_2$, $i_1$, and $i_2$, by minimizing the true results of the three dot productions and their corresponding labels: 2, 4, 1.
use the trained vectors of $u_2$ and $i_1$ to predict the missing value in the matrix in the above image: [$v_{21}$, $v_{22}$] $\cdot$ [$w_{11}$, $w_{12}$]
However the process of training and predicting differs for factorization machines:
We kind of flatten the users and items to make the training data matrix.
a. we add four parameters(which should be learned like the w's and v's in the matrix factorization example) for the first order feature combinations($u_1$, $u_2$, $i_1$, and $i_2$): $k_{u1}$, $k_{u2}$, $k_{i1}$, and $k_{i2}$, and then we multiply the four parameters with their values for the three observations:
$\text{ }$i. For label 2, it involves only $u_1$ and $i_1$: $k_{u1}$ * 1 + $k_{u2}$ * 0 + $k_{i1}$ * 1 + $k_{i2}$ * 0; 1 for relating to the item or user and 0 otherwise
$\text{ }$ii. For label 4, it involves only $u_1$ and $i_2$: $k_{u1}$ * 1 + $k_{u2}$ * 0 + $k_{i1}$ * 0 + $k_{i2}$ * 1; 1 for relating to the item or user and 0 otherwise
$\text{ }$iii. For label 1, it involves only $u_2$ and $i_2$: $k_{u1}$ * 0 + $k_{u2}$ * 1 + $k_{i1}$ * 0 + $k_{i2}$ * 1; 1 for relating to the item or user and 0 otherwise
b. To deal with the second-order feature combinations, we multiply the vectors for each user and item for the three observations: $u_1$ and $i_1$ for 2, $u_1$ and $i_2$ for 4, and $u_2$ and $i_2$ for 1; in this case, we don't need three additional parameters, we use the dot product of each user and item vector pairs:
$\text{ }$i. $u_1$ and $i_1$: [$v_{11}$, $v_{12}$] $\cdot$ [$w_{11}$, $w_{12}$] * 1 + [$v_{11}$, $v_{12}$] $\cdot$ [$w_{21}$, $w_{22}$] * 0 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{11}$, $w_{12}$] * 0 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{21}$, $w_{22}$] * 0
$\text{ }$ii. $u_1$ and $i_2$: [$v_{11}$, $v_{12}$] $\cdot$ [$w_{11}$, $w_{12}$] * 0 + [$v_{11}$, $v_{12}$] $\cdot$ [$w_{21}$, $w_{22}$] * 1 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{11}$, $w_{12}$] * 0 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{21}$, $w_{22}$] * 0
$\text{ }$iii. $u_2$ and $i_2$: [$v_{11}$, $v_{12}$] $\cdot$ [$w_{11}$, $w_{12}$] * 0 + [$v_{11}$, $v_{12}$] $\cdot$ [$w_{21}$, $w_{22}$] * 0 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{11}$, $w_{12}$] * 0 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{21}$, $w_{22}$] * 1
c. add the two above terms as the final predicted value for each observation.
$\text{ }$i. $u_1$ and $i_1$ for 2: $k_{u1}$ * 1 + $k_{u2}$ * 0 + $k_{i1}$ * 1 + $k_{i2}$ * 0 + [$v_{11}$, $v_{12}$] $\cdot$ [$w_{11}$, $w_{12}$] * 1 + [$v_{11}$, $v_{12}$] $\cdot$ [$w_{21}$, $w_{22}$] * 0 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{11}$, $w_{12}$] * 0 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{21}$, $w_{22}$] * 0 = 2
$\text{ }$ii. $u_1$ and $i_2$ for 4: $k_{u1}$ * 1 + $k_{u2}$ * 0 + $k_{i1}$ * 0 + $k_{i2}$ * 1 + [$v_{11}$, $v_{12}$] $\cdot$ [$w_{11}$, $w_{12}$] * 0 + [$v_{11}$, $v_{12}$] $\cdot$ [$w_{21}$, $w_{22}$] * 1 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{11}$, $w_{12}$] * 0 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{21}$, $w_{22}$] * 0 = 4
$\text{ }$iii. $u_2$ and $i_2$ for 1: $k_{u1}$ * 0 + $k_{u2}$ * 1 + $k_{i1}$ * 0 + $k_{i2}$ * 1 + [$v_{11}$, $v_{12}$] $\cdot$ [$w_{11}$, $w_{12}$] * 0 + [$v_{11}$, $v_{12}$] $\cdot$ [$w_{21}$, $w_{22}$] * 0 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{11}$, $w_{12}$] * 0 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{21}$, $w_{22}$] * 1 = 1
use Stochastic Gradient Descent(SGD) (with some regularization methods) to get the values or vectors for $k_{u1}$, $k_{u2}$, $k_{i1}$, $k_{i2}$, $u_1$, $u_2$, $i_1$, and $i_2$, by minimizing the true results of the three predictions in 1.c and their corresponding labels: 2, 4, 1.
use the trained values and vectors of $u_2$ and $i_1$ to predict the missing value in the matrix in the above image: $k_{u1}$ * 0 + $k_{u2}$ * 1 + $k_{i1}$ * 1 + $k_{i2}$ * 0 + [$v_{11}$, $v_{12}$] $\cdot$ [$w_{11}$, $w_{12}$] * 0 + [$v_{11}$, $v_{12}$] $\cdot$ [$w_{21}$, $w_{22}$] * 0 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{11}$, $w_{12}$] * 1 + [$v_{21}$, $v_{22}$] $\cdot$ [$w_{21}$, $w_{22}$] * 0
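The scoring structure in steps 1-3 condenses into a few lines of Python. The parameter values below are arbitrary placeholders standing in for what SGD would learn; the point is only the shape of the score: first-order terms for the active user and item plus their latent dot product:

```python
import numpy as np

# Placeholder parameters standing in for what SGD would learn (k = 2 latent dims)
k_lin = np.array([0.3, -0.1, 0.2, 0.5])   # k_u1, k_u2, k_i1, k_i2
V = np.array([[1.0, 0.0],                  # latent vector of u1
              [0.5, 0.5],                  # latent vector of u2
              [2.0, 0.0],                  # latent vector of i1
              [1.0, 3.0]])                 # latent vector of i2

def predict(u, i):
    """First-order term plus latent interaction, as in steps 1.a-1.c."""
    x = np.zeros(4)
    x[u] = 1.0        # user slot: 0 -> u1, 1 -> u2
    x[2 + i] = 1.0    # item slot: 0 -> i1, 1 -> i2
    first_order = k_lin @ x
    pairwise = sum((V[a] @ V[b]) * x[a] * x[b]
                   for a in range(4) for b in range(a + 1, 4))
    return first_order + pairwise

print(predict(0, 0))  # u1, i1: k_u1 + k_i1 + <v_u1, w_i1>
print(predict(1, 0))  # u2, i1: the missing cell predicted in step 3
```

Because the one-hot encoding zeroes out every other pair, only the active user-item dot product survives in the pairwise sum, matching the hand-written formulas in steps 1.c.i-iii.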
The above are just two toy examples; in real problems there would be many more users and items (and more additional features for the factorization machine), and hence many more observations, making the ratio of parameters to observations much lower. Usually we would also try models with fewer parameters by reducing the vector size for the users and items, or for the other additional features (for FM).
And the connection between FM and MF lies in that both use the dot product of two low-dimensional vectors to reduce the number of parameters in the model: $u \cdot i$
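To make that connection concrete, here is a small NumPy check (the vector values are made up): with exactly one user and one item active in the one-hot encoding, and the first-order weights set aside, the FM pairwise term reduces to the plain MF dot product:

```python
import numpy as np

# Hypothetical learned 2-d vectors for user u1 and item i2
v_u1 = np.array([0.3, -1.2])
w_i2 = np.array([0.8, 0.5])

# Matrix factorization: predicted rating is the plain dot product
mf_pred = v_u1 @ w_i2

# Factorization machine on the one-hot encoding [u1, u2, i1, i2] = [1, 0, 0, 1]:
# the pairwise term sums V_f . V_g over all feature pairs, so with only one
# user and one item active the sum reduces to the same single dot product.
x = np.array([1.0, 0.0, 0.0, 1.0])
V = np.stack([v_u1, np.zeros(2), np.zeros(2), w_i2])  # rows: u1, u2, i1, i2
fm_pred = sum(x[f] * x[g] * (V[f] @ V[g])
              for f in range(4) for g in range(f + 1, 4))

print(mf_pred, fm_pred)
```

The two predictions agree exactly; FM only differs from MF once extra features or first-order weights are added.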
References:
16.3.1. The Matrix Factorization Model.
Factorization Machines for Item Recommendation with Implicit Feedback Data
Matrix Factorization
|
11,149
|
Caret and randomForest number of trees [duplicate]
|
In theory, the performance of a RF model should be a monotonic function of ntree that plateaus beyond a certain point once you have 'enough' trees. This makes ntree more of a performance parameter than a Goldilocks parameter that you would want to tune. Caret tends to focus on tuning parameters that perform poorly at both high and low values, for which you want to find the happy medium.
In practice I believe there may have been studies finding that performance degrades for very large ntree values, but even if this is true the effect is subtle and requires very large forests.
There are at least 2-3 other parameters to RF that Caret doesn't tune for the same reasons as ntree.
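As a rough illustration of why ntree plateaus rather than peaks, one can simulate each tree as an unbiased but noisy predictor that shares an irreducible error component: averaging more of them drives the variance term toward zero but never below the floor. This is a hypothetical Python simulation, not randomForest itself:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 0.0
tree_noise = 1.0   # per-tree prediction standard deviation
floor = 0.25       # irreducible error variance shared by all trees

def ensemble_mse(n_trees, reps=4000):
    # Each replicate: average n_trees noisy predictions, plus a shared error
    preds = rng.normal(truth, tree_noise, size=(reps, n_trees)).mean(axis=1)
    shared = rng.normal(0.0, np.sqrt(floor), size=reps)
    return np.mean((preds + shared - truth) ** 2)

mses = {n: ensemble_mse(n) for n in (1, 10, 100, 1000)}
print(mses)
```

The MSE falls roughly as `tree_noise**2 / n + floor`: a steep drop early on, then a plateau near the floor, so beyond "enough" trees there is nothing left to tune.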
|
11,150
|
Caret and randomForest number of trees [duplicate]
|
Caret does let you control the number of trees used by its backend randomForest package. For instance, with the latest version (4.6-12) as of now, you just pass the usual ntree parameter and caret will "repass" it to randomForest, e.g.:
train(formula,
data = mydata,
method = "rf",
ntree = 5,
trControl = myTrControl)
|
11,151
|
Caret and randomForest number of trees [duplicate]
|
If you already have an idea about how many trees you want to use (Breiman recommends at least 1000) and have used randomForest::tuneRF to get an optimal mtry value (let's use 6 as an example), then:
ctrl <- trainControl(method = "none")
set.seed(2)
rforest <- train(response ~ ., data = data_set,
method = "rf",
ntree = 1000,
trControl = ctrl,
tuneGrid = data.frame(mtry = 6))
Eduardo has answered your question above, but I wanted to additionally demonstrate how you can tune mtry, the number of variables randomly sampled as candidates at each split. When tuning a random forest, this parameter matters more than ntree, as long as ntree is sufficiently large.
|
11,152
|
Caret and randomForest number of trees [duplicate]
|
Though I agree with the theoretical explanations posted here, in practice having too large a number of trees is a waste of computational power and makes the model objects uncomfortably heavy to work with (especially if you tend to constantly save and load .RDS objects). Because of that, I think if we want models to be adequate we have to somehow find the minimum number of trees that allows for stable performance (and then "let the asymptotic behavior of the LLN do the rest"). Perhaps if you are a very experienced statistician, or if you are always working on similar problems, you can use a rule of thumb (say 1000 or 10000 trees). But if your work requires you to adapt to a variety of modelling tasks, you'll end up needing some calibration method that allows you to find an adequate and inexpensive number of trees.
For this purpose, you could just download the source code of the method from here and then rewrite it to create a custom method that adapts to your needs. Feel free to use the following example:
customRF <- list(label = "Random Forest",
library = "randomForest",
loop = NULL,
type = c("Classification", "Regression"),
parameters = data.frame(parameter = c("mtry", "ntree"), class = rep("numeric", 2), label = c("mtry", "ntree")),
grid = function(x, y, len = NULL, search = "grid") {
if(search == "grid") {
out <- data.frame(mtry = caret::var_seq(p = ncol(x),
classification = is.factor(y),
len = len))
} else {
out <- data.frame(mtry = unique(sample(1:ncol(x), size = len, replace = TRUE)))
}
out
},
fit = function(x, y, wts, param, lev, last, classProbs, ...)
randomForest::randomForest(x, y, mtry = param$mtry, ntree = param$ntree, ...),
predict = function(modelFit, newdata, submodels = NULL)
if(!is.null(newdata)) predict(modelFit, newdata) else predict(modelFit),
prob = function(modelFit, newdata, submodels = NULL)
if(!is.null(newdata)) predict(modelFit, newdata, type = "prob") else predict(modelFit, type = "prob"),
predictors = function(x, ...) {
## After doing some testing, it looks like randomForest
## will only try to split on plain main effects (instead
## of interactions or terms like I(x^2)).
varIndex <- as.numeric(names(table(x$forest$bestvar)))
varIndex <- varIndex[varIndex > 0]
varsUsed <- names(x$forest$ncat)[varIndex]
varsUsed
},
varImp = function(object, ...){
varImp <- randomForest::importance(object, ...)
if(object$type == "regression")
varImp <- data.frame(Overall = varImp[,"%IncMSE"])
else {
retainNames <- levels(object$y)
if(all(retainNames %in% colnames(varImp))) {
varImp <- varImp[, retainNames]
} else {
varImp <- data.frame(Overall = varImp[,1])
}
}
out <- as.data.frame(varImp)
if(dim(out)[2] == 2) {
tmp <- apply(out, 1, mean)
out[,1] <- out[,2] <- tmp
}
out
},
levels = function(x) x$classes,
tags = c("Random Forest", "Ensemble Model", "Bagging", "Implicit Feature Selection"),
sort = function(x) x[order(x[,1]),],
oob = function(x) {
out <- switch(x$type,
regression = c(sqrt(max(x$mse[length(x$mse)], 0)), x$rsq[length(x$rsq)]),
classification = c(1 - x$err.rate[x$ntree, "OOB"],
e1071::classAgreement(x$confusion[,-dim(x$confusion)[2]])[["kappa"]]))
names(out) <- if(x$type == "regression") c("RMSE", "Rsquared") else c("Accuracy", "Kappa")
out
})
After defining this custom method you only have to call it via train(..., method = customRF), and both mtry and ntree will be calibrated according to your trainControl() specifications.
|
11,153
|
lme() and lmer() giving conflicting results
|
tl;dr if you change the optimizer to "nloptwrap" I think it will avoid these issues (probably).
Congratulations, you've found one of the simplest examples of multiple optima in a statistical estimation problem! The parameter that lme4 uses internally (thus convenient for illustration) is the scaled standard deviation of the random effects, i.e. the among-group std dev divided by the residual std dev.
Extract these values for the original lme and lmer fits:
(sd1 <- sqrt(getVarCov(Mlme)[[1]])/sigma(Mlme))
## 2.332469
(sd2 <- getME(Mlmer,"theta")) ## 14.48926
Refit with another optimizer (this will probably be the default in the next release of lme4):
Mlmer2 <- update(Mlmer,
control=lmerControl(optimizer="nloptwrap"))
sd3 <- getME(Mlmer2,"theta") ## 2.33247
Matches lme ... let's see what's going on. The deviance function (-2*log likelihood), or in this case the analogous REML-criterion function, for LMMs with a single random effect takes only one argument, because the fixed-effect parameters are profiled out; they can be computed automatically for a given value of the RE standard deviation.
ff <- as.function(Mlmer)
tvec <- seq(0,20,length=101)
Lvec <- sapply(tvec,ff)
png("CV38425.png")
par(bty="l",las=1)
plot(tvec,Lvec,type="l",
ylab="REML criterion",
xlab="scaled random effects standard deviation")
abline(v=1,lty=2)
points(sd1,ff(sd1),pch=16,col=1)
points(sd2,ff(sd2),pch=16,col=2)
points(sd3,ff(sd3),pch=1,col=4)
dev.off()
I continued to obsess further over this and ran the fits for random seeds from 1 to 1000, fitting lme, lmer, and lmer+nloptwrap for each case. Here are the numbers out of 1000 where a given method gets answers that are at least 0.001 deviance units worse than another ...
          lme.dev lmer.dev lmer2.dev
lme.dev         0       64        61
lmer.dev      369        0       326
lmer2.dev      43        3         0
In other words, (1) there is no method that always works best; (2) lmer with the default optimizer is worst (fails about 1/3 of the time); (3) lmer with "nloptwrap" is best (worse than lme 4% of the time, rarely worse than lmer).
To be a little bit reassuring, I think that this situation is likely to be worst for small, misspecified cases (i.e. residual error here is uniform rather than Normal). It would be interesting to explore this more systematically though ...
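The phenomenon is easy to reproduce on a one-dimensional toy objective with two local minima: a plain gradient-descent optimizer started on the wrong side of the hump settles into the inferior optimum, much like the default lmer optimizer above. This Python sketch is generic and is not the actual REML criterion:

```python
import numpy as np

def f(t):
    # Toy objective: a good minimum near t = 1 and a worse one near t = 4
    return (t - 1.0) ** 2 * (t - 4.0) ** 2 + 0.5 * t

def grad(t, h=1e-6):
    # Central-difference numerical gradient
    return (f(t + h) - f(t - h)) / (2 * h)

def descend(t, lr=0.01, steps=5000):
    for _ in range(steps):
        t -= lr * grad(t)
    return t

t_left = descend(0.0)   # converges to the better minimum near 1
t_right = descend(5.0)  # gets stuck in the worse minimum near 4
print(t_left, f(t_left), t_right, f(t_right))
```

Both runs report convergence, but only comparing the objective values (as the deviance comparison above does) reveals that one of them is stuck.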
|
11,154
|
What is meant by fine-tuning of neural network?
|
Fine-tuning means taking the weights of a trained neural network and using them as the initialization for a new model being trained on data from the same domain (often, e.g., images). It is used to:
speed up the training
overcome small dataset size
There are various strategies, such as training the whole initialized network or "freezing" some of the pre-trained weights (usually whole layers). The article A Comprehensive guide to Fine-tuning Deep Learning Models in Keras provides a good insight into this. Also have a look at the following threads:
Fine Tuning vs Joint Training vs Feature Extraction
CNN: ReTraining and Fine Tuning
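A minimal sketch of the "freezing" strategy mentioned above, using a hypothetical two-layer linear model in plain NumPy (no particular deep-learning framework): the pretrained first layer is copied into the new model and held fixed while only the freshly initialized head is trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" first-layer weights from a source task
W_pretrained = rng.normal(size=(4, 3))

# New model: initialize from the pretrained weights, freeze them,
# and train only a freshly initialized head on a small target dataset.
W1 = W_pretrained.copy()   # frozen layer
w2 = rng.normal(size=3)    # trainable head

X = rng.normal(size=(20, 4))                        # small target dataset
y = X @ W_pretrained @ np.array([1.0, -2.0, 0.5])   # target labels

lr = 0.02
for _ in range(2000):
    h = X @ W1                      # features from the frozen layer
    err = h @ w2 - y
    w2 -= lr * h.T @ err / len(X)   # gradient step on the head only

mse = np.mean(((X @ W1) @ w2 - y) ** 2)
print(mse)
```

With far fewer trainable parameters than a full network, the head fits quickly even on a tiny dataset, which is exactly the point of freezing.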
|
11,155
|
Difference between statsmodel OLS and scikit linear regression
|
First in terms of usage. You can get the prediction in statsmodels in a very similar way as in scikit-learn, except that we use the results instance returned by fit
predictions = results.predict(X_test)
Given the predictions, we can calculate statistics that are based on the prediction error
prediction_error = y_test - predictions
There is a separate list of functions to calculate goodness of prediction statistics with it, but it's not integrated into the models, nor does it include R squared. (I've never heard of R squared used for out of sample data.) Calculating those requires a bit more work by the user and statsmodels does not have the same set of statistics, especially not for classification or models with a binary response variable.
To your other two points:
Linear regression is in its basic form the same in statsmodels and in scikit-learn. However, the implementations differ, which might produce different results in edge cases, and scikit-learn has in general more support for larger models. For example, statsmodels currently uses sparse matrices in very few parts.
The most important difference is in the surrounding infrastructure and the use cases that are directly supported.
Statsmodels follows largely the traditional model where we want to know how well a given model fits the data, and what variables "explain" or affect the outcome, or what the size of the effect is.
Scikit-learn follows the machine learning tradition where the main supported task is choosing the "best" model for prediction.
As a consequence, the emphasis in the supporting features of statsmodels is in analysing the training data which includes hypothesis tests and goodness-of-fit measures, while the emphasis in the supporting infrastructure in scikit-learn is on model selection for out-of-sample prediction and therefore cross-validation on "test data".
This points out the distinction, but there is still quite a lot of overlap in usage. statsmodels also does prediction, and additionally forecasting in a time series context.
But, when we want to do cross-validation for prediction in statsmodels it is currently still often easier to reuse the cross-validation setup of scikit-learn together with the estimation models of statsmodels.
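For readers without statsmodels at hand, the prediction workflow described above amounts to the following NumPy stand-in (the data and coefficients are made up; `np.linalg.lstsq` plays the role of `sm.OLS(...).fit()` and the matrix product plays the role of `results.predict`):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy train/test split with an intercept column
X_train = np.column_stack([np.ones(50), rng.normal(size=50)])
y_train = X_train @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=50)
X_test = np.column_stack([np.ones(20), rng.normal(size=20)])
y_test = X_test @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=20)

# Fit: this is what results = sm.OLS(y_train, X_train).fit() estimates
beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Predict on new data, as in results.predict(X_test)
predictions = X_test @ beta

# Out-of-sample error statistics are then computed from the residuals
prediction_error = y_test - predictions
rmse = np.sqrt(np.mean(prediction_error ** 2))
print(beta, rmse)
```

The goodness-of-prediction statistics statsmodels leaves to the user are all simple functions of `prediction_error`, as above.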
|
Difference between statsmodel OLS and scikit linear regression
|
First in terms of usage. You can get the prediction in statsmodels in a very similar way as in scikit-learn, except that we use the results instance returned by fit
predictions = results.predict(X_tes
|
Difference between statsmodel OLS and scikit linear regression
First in terms of usage. You can get the prediction in statsmodels in a very similar way as in scikit-learn, except that we use the results instance returned by fit
predictions = results.predict(X_test)
Given the predictions, we can calculate statistics that are based on the prediction error
prediction_error = y_test - predictions
There is a separate list of functions to calculate goodness of prediction statistics with it, but it's not integrated into the models, nor does it include R squared. (I've never heard of R squared used for out of sample data.) Calculating those requires a bit more work by the user and statsmodels does not have the same set of statistics, especially not for classification or models with a binary response variable.
To your other two points:
Linear regression is in its basic form the same in statsmodels and in scikit-learn. However, the implementation differs which might produce different results in edge cases, and scikit learn has in general more support for larger models. For example, statsmodels currently uses sparse matrices in very few parts.
The most important difference is in the surrounding infrastructure and the use cases that are directly supported.
Statsmodels follows largely the traditional model where we want to know how well a given model fits the data, and what variables "explain" or affect the outcome, or what the size of the effect is.
Scikit-learn follows the machine learning tradition where the main supported task is chosing the "best" model for prediction.
As a consequence, the emphasis in the supporting features of statsmodels is in analysing the training data which includes hypothesis tests and goodness-of-fit measures, while the emphasis in the supporting infrastructure in scikit-learn is on model selection for out-of-sample prediction and therefore cross-validation on "test data".
This points out the distinction, there is still quite a lot of overlap also in the usage. statsmodels also does prediction, and additionally forecasting in a time series context.
But, when we want to do cross-validation for prediction in statsmodels it is currently still often easier to reuse the cross-validation setup of scikit-learn together with the estimation models of statsmodels.
|
11,156
|
Difference between statsmodel OLS and scikit linear regression
|
In the OLS model you are using the training data to fit and predict.
With the LinearRegression model you are using training data to fit and test data to predict, therefore different results in R2 scores.
If you used the test data in the OLS model as well, you would get the same results, i.e. the lower value.
|
11,157
|
Difference between statsmodel OLS and scikit linear regression
|
I have encountered a similar issue where the OLS is giving different Rsquared and Adjusted Rsquared values compared to Sklearn LinearRegression model.
Reason for it: by default, OLS does not include the intercept coefficient and therefore builds the model without it, whereas sklearn includes it when building the model.
Solution: Add a column of 1's to the dataset and fit the model with OLS, and you will get almost the same R-squared and Adj. R-squared values for both models.
|
11,158
|
Difference between statsmodel OLS and scikit linear regression
|
Let me make it crystal clear:
we know that multiple linear regression is represented as :
y = b0 + b1X1 + b2X2 + b3X3 +…..+ bnXn
but we can also, represent it as:
y = b0X0 + b1X1 + b2X2 + b3X3 +…..+ bnXn
where X0 = 1
We have to add one column with all the same values as 1 to represent b0X0.
Why do we need to do that?
The statsmodels Python library provides an OLS (ordinary least squares) class for implementing backward elimination. One thing to note is that the OLS class does not provide the intercept by default; it has to be created by the user. That is why we created a column with all values equal to 1 to represent b0X0.
That's the reason why we get different R2 values in the sklearn regression model and the statsmodels OLS.
|
11,159
|
Allowed comparisons of mixed effects models (random effects primarily)
|
Using maximum likelihood, any of these can be compared with AIC; if the fixed effects are the same (m1 to m4), using either REML or ML is fine, with REML usually preferred, but if they are different, only ML can be used. However, interpretation is usually difficult when both fixed effects and random effects are changing, so in practice, most recommend changing only one or the other at a time.
Using the likelihood ratio test is possible but messy because the usual chi-squared approximation doesn't hold when testing if a variance component is zero. See Aniko's answer for details. (Kudos to Aniko for both reading the question more carefully than I did and reading my original answer carefully enough to notice that it missed this point. Thanks!)
Pinheiro/Bates is the classic reference; it describes the nlme package, but the theory is the same. Well, mostly the same; Doug Bates has changed his recommendations on inference since writing that book, and the new recommendations are reflected in the lme4 package. But that's more than I want to get into here. A more readable reference is Weiss (2005), Modeling Longitudinal Data.
|
11,160
|
Allowed comparisons of mixed effects models (random effects primarily)
|
You have to be careful using likelihood-ratio tests when evaluating whether a variance component is 0 (m vs m-m4), because the typical chi-square approximation does not apply. The reason is that the null-hypothesis is $\sigma^2=0$, and it is on the boundary of the parameter space, so the classical results do not apply.
There is an entire theory of the distribution of LRT in these situations (see, for example, Self and Liang, 1987 [1]), however it quickly becomes messy. For the special case of only one parameter hitting the boundary (eg, m vs m2), the likelihood ratio has a $\frac 12 \chi^2_1 + \frac 12 \chi^2_0$ distribution, which in practice means that you have to halve your p-value based on $\chi^2_1$.
However, as @Aaron stated, many experts do not recommend doing a likelihood ratio test like this. Potential alternatives are the information criteria (AIC, BIC, etc), or bootstrapping the LRT.
[1] Self, S. G. & Liang, K. Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions J. Amer. Statist. Assoc., 1987, 82, 605-610.
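The halving of the p-value described above can be written in a couple of lines (scipy assumed; $\chi^2_0$ is a point mass at zero, so it contributes nothing to the tail for a positive statistic):

```python
from scipy.stats import chi2

def boundary_lrt_pvalue(lrt_stat):
    """p-value under the 0.5*chi2_0 + 0.5*chi2_1 mixture.

    chi2_0 is a point mass at zero, so for lrt_stat > 0 the mixture
    p-value is half the usual chi2_1 tail probability.
    """
    return 0.5 * chi2.sf(lrt_stat, df=1)

naive_p = chi2.sf(3.2, df=1)            # ignores the boundary problem
corrected_p = boundary_lrt_pvalue(3.2)  # exactly half the naive p-value
print(naive_p, corrected_p)
```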
|
11,161
|
What is the correct way to test for significant differences between coefficients?
|
The two approaches do differ.
Let the estimated standard errors of the two regressions be $s_1$ and $s_2$. Then, because the combined regression (with all coefficient-dummy interactions) fits the same coefficients, it has the same residuals, whence its standard error can be computed as
$$s = \sqrt{\frac{(n_1-p) s_1^2 + (n_2-p) s_2^2}{n_1 + n_2 - 2 p}}.$$
The number of parameters $p$ equals $6$ in the example: five slopes and an intercept in each regression.
Let $b_1$ estimate a parameter in one regression, $b_2$ estimate the same parameter in the other regression, and $b$ estimate their difference in the combined regression. Then their standard errors are related by
$$SE(b) = s \sqrt{(SE(b_1)/s_1)^2 + (SE(b_2)/s_2)^2}.$$
If you haven't done the combined regression, but only have statistics for the separate regressions, plug in the preceding equation for $s$. This will be the denominator for the t-test. Evidently it is not the same as the denominator presented in the question.
The assumption made by the combined regression is that the variances of the residuals are essentially the same in both separate regressions. If this is not the case, however, the z-test isn't going to be good, either (unless the sample sizes are large): you would want to use a CABF test or Welch-Satterthwaite t-test.
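A numeric sketch of the two formulas above (the separate-regression summary statistics below are hypothetical, just to show the computation):

```python
import numpy as np

def se_of_difference(se1, s1, n1, se2, s2, n2, p):
    """SE(b) for b = b1 - b2, computed from separate-regression statistics
    via the pooled standard error s defined above."""
    s = np.sqrt(((n1 - p) * s1**2 + (n2 - p) * s2**2) / (n1 + n2 - 2 * p))
    return s * np.sqrt((se1 / s1)**2 + (se2 / s2)**2)

# hypothetical statistics from two regressions with p = 6 parameters each
b1, b2 = 1.2, 0.4
se_b = se_of_difference(se1=0.3, s1=2.0, n1=50, se2=0.4, s2=2.5, n2=60, p=6)
t_stat = (b1 - b2) / se_b   # compare to a t distribution with n1 + n2 - 2p df
print(se_b, t_stat)
```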
|
11,162
|
What is the correct way to test for significant differences between coefficients?
|
The most direct way to test for a difference in the coefficient between two groups is to include an interaction term into your regression, which is almost what you describe in your question. The model you would run is the following:
$y_i = \alpha + \beta x_i + \gamma g_i + \delta (x_i \times g_i) + \varepsilon_i$
Note that I have included the group variable as a separate regressor in the model. With this model, a $t$-test with the null hypothesis $H_0: \delta = 0$ is a test of the coefficients being the same between the two groups. To see this, first let $g_i = 0$ in the above model. Then, we get the following equation for group 0:
$y_i = \alpha + \beta x_i + \varepsilon_i$
Now, if $g_i = 1$, then we have:
$y_i = (\alpha + \gamma) + (\beta + \delta) x_i + \varepsilon_i$
Thus, when $\delta$ is 0, the two groups have the same coefficient.
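A quick sketch with simulated data, using statsmodels' formula interface (the `x:g` coefficient is $\delta$, and `y ~ x * g` expands to `x + g + x:g`):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
g = rng.integers(0, 2, size=n)   # group indicator
x = rng.normal(size=n)
# true slopes: 1.0 in group 0, 2.0 in group 1, so delta = 1.0
y = 0.5 + (1.0 + 1.0 * g) * x + 0.3 * g + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"y": y, "x": x, "g": g})

# the t-test on the x:g coefficient tests H0: delta = 0
fit = smf.ols("y ~ x * g", data=df).fit()
print(fit.params["x:g"], fit.pvalues["x:g"])
```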
|
11,163
|
In genome-wide association studies, what are principal components?
|
In this particular context, PCA is mainly used to account for population-specific variations in alleles distribution on the SNPs (or other DNA markers, although I'm only familiar with the SNP case) under investigation. Such "population substructure" mainly arises as a consequence of varying frequencies of minor alleles in genetically distant ancestries (e.g. japanese and black-african or european-american). The general idea is well explained in Population Structure and Eigenanalysis, by Patterson et al. (PLoS Genetics 2006, 2(12)), or the Lancet's special issue on genetic epidemiology (2005, 366; most articles can be found on the web, start with Cordell & Clayton, Genetic Association Studies).
The construction of principal axes follows from the classical approach to PCA, which is applied to the scaled matrix (individuals by SNPs) of observed genotypes (AA, AB, BB; say B is the minor allele in all cases), with the exception that an additional normalization to account for population drift might be applied. It all assumes that the frequency of the minor allele (taking values in {0,1,2}) can be considered as numeric, that is, we work under an additive model (also called allelic dosage) or any equivalent one that would make sense. As the successive orthogonal PCs will account for the maximum variance, this provides a way to highlight groups of individuals differing at the level of minor allele frequency. The software used for this is known as Eigenstrat. It is also available in the egscore() function from the GenABEL R package (see also GenABEL.org). It is worth noting that other methods to detect population substructure have been proposed, in particular model-based cluster reconstruction (see references at the end). More information can be found by browsing the HapMap project and the tutorials available from the Bioconductor project. (Search for Vince J Carey or David Clayton's nice tutorials on Google.)
Apart from clustering subpopulations, this approach can also be used for detecting outliers, which might arise in two cases (AFAIK): (a) genotyping errors, and (b) when working with a homogeneous population (or one assumed so, given self-reported ethnicity), individuals exhibiting unexpected genotypes. What is usually done in this case is to apply PCA in an iterative manner and remove individuals whose scores fall beyond $\pm 6$ SD on at least one of the first 20 principal axes; this amounts to "whitening" the sample, in some sense. Note that any such measure of genotype distance (this also holds when using Multidimensional Scaling in place of PCA) will allow you to spot relatives or siblings. The plink software provides additional methods; see the section on Population stratification in the on-line help.
Considering that eigenanalysis allows us to uncover some structure at the level of the individuals, we can use this information when trying to explain observed variations in a given phenotype (or any distribution that might be defined according to a binary criterion, e.g. disease or case-control status). Specifically, we can adjust our analysis with those PCs (i.e., the factor scores of individuals), as illustrated in Principal components analysis corrects for stratification in genome-wide association studies, by Price et al. (Nature Genetics 2006, 38(8)), and later work (there was a nice picture showing axes of genetic variation in Europe in Genes mirror geography within Europe; Nature 2008; Fig 1A reproduced below). Note also that another solution is to carry out a stratified analysis (by including ethnicity in a GLM)--this is readily available in the snpMatrix package, for example.
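As an illustrative sketch (not Eigenstrat itself, just plain PCA applied to a simulated genotype matrix with two subpopulations, using scikit-learn), the leading PC recovers the hidden population split:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# toy genotype matrix: 100 individuals x 500 SNPs, minor-allele counts in {0,1,2};
# two subpopulations with shifted minor-allele frequencies
maf = np.where(rng.random(500) < 0.5, 0.1, 0.3)
pop = rng.integers(0, 2, size=100)
shift = np.where(pop[:, None] == 1, 0.15, 0.0)
G = rng.binomial(2, np.clip(maf + shift, 0, 1), size=(100, 500))

# column-standardize and extract the leading principal components
Gs = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-9)
scores = PCA(n_components=10).fit_transform(Gs)

# PC1 separates the two subpopulations; these scores would then be
# included as covariates when testing each SNP against the phenotype
print(scores.shape)
```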
References
Daniel Falush, Matthew Stephens, and Jonathan K Pritchard (2003). Inference of population structure using multilocus genotype data: linked loci and correlated allele frequencies. Genetics, 164(4): 1567–1587.
B Devlin and K Roeder (1999). Genomic control for association studies. Biometrics, 55(4): 997–1004.
JK Pritchard, M Stephens, and P Donnelly (2000). Inference of population structure using multilocus genotype data. Genetics, 155(2): 945–959.
Gang Zheng, Boris Freidlin, Zhaohai Li, and Joseph L Gastwirth (2005). Genomic control for association studies under various genetic models. Biometrics, 61(1): 186–92.
Chao Tian, Peter K. Gregersen, and Michael F. Seldin (2008). Accounting for ancestry: population substructure and genome-wide association studies. Human Molecular Genetics, 17(R2): R143-R150.
Kai Yu, Population Substructure and Control Selection in Genome-wide Association Studies.
Alkes L. Price, Noah A. Zaitlen, David Reich and Nick Patterson (2010). New approaches to population stratification in genome-wide association studies, Nature Reviews Genetics
Chao Tian, et al. (2009). European Population Genetic Substructure: Further Definition of Ancestry Informative Markers for Distinguishing among Diverse European Ethnic Groups, Molecular Medicine, 15(11-12): 371–383.
|
11,164
|
Using regularization when doing statistical inference
|
There is a major difference between performing estimation using ridge-type penalties and lasso-type penalties. Ridge-type estimators tend to shrink all regression coefficients towards zero and are biased, but have an easy-to-derive asymptotic distribution because they do not shrink any variable to exactly zero. The bias in the ridge estimates may be problematic in subsequent hypothesis testing, but I am not an expert on that. On the other hand, lasso/elastic-net type penalties shrink many regression coefficients to zero and can therefore be viewed as model selection techniques. The problem of performing inference on models that were selected based on data is usually referred to as the selective inference problem or post-selection inference. This field has seen many developments in recent years.
The main problem with performing inference after model selection is that selection truncates the sample space. As a simple example, suppose that we observe $y\sim N(\mu,1)$ and only want to estimate $\mu$ if we have evidence that it is larger than zero. Then, we estimate $\mu$ if $|y| > c >0$ for some pre-specified threshold $c$. In such a case, we only observe $y$ if it is larger than $c$ in absolute value and therefore $y$ is no longer normal but truncated normal.
Similarly, the Lasso (or elastic net) constrains the sample space in such a way as to ensure that the selected model has been selected. This truncation is more complicated, but can be described analytically.
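For the simple thresholding example above, the truncated-normal correction can be computed directly (scipy assumed): under $H_0: \mu = 0$, the two-sided tail probability must be renormalized by the selection probability $P(|Y| > c)$.

```python
from scipy.stats import norm

def selective_p(y_obs, c):
    """p-value for H0: mu = 0 after selecting only when |y| > c.

    Under H0, y is truncated normal on {|y| > c}, so the two-sided
    tail probability is renormalized by P(|Y| > c) = 2 * norm.sf(c).
    """
    assert abs(y_obs) > c
    return (2 * norm.sf(abs(y_obs))) / (2 * norm.sf(c))

naive = 2 * norm.sf(2.2)               # ignores the selection step
selective = selective_p(2.2, c=2.0)    # accounts for the truncation
print(naive, selective)
```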
Based on this insight, one can perform inference based on the truncated distribution of the data to obtain valid test statistics. For confidence intervals and test statistics, see the work of Lee et al. (2016):
Exact post-selection inference, with application to the lasso
Their methods are implemented in the R package selectiveInference.
Optimal estimation (and testing) after model selection is discussed in (for the lasso):
Tractable Post-Selection Maximum Likelihood Inference for the Lasso | Cornell University Statistics Archives
and their (far less comprehensive) software package is available in:
selectiveMLE by ammeir2 | GitHub
|
Using regularization when doing statistical inference
|
There is a major difference between performing estimating using ridge type penalties and lasso-type penalties. Ridge type estimators tend to shrink all regression coefficients towards zero and are bia
|
Using regularization when doing statistical inference
There is a major difference between performing estimating using ridge type penalties and lasso-type penalties. Ridge type estimators tend to shrink all regression coefficients towards zero and are biased, but have an easy to derive asymptotic distribution because they do not shrink any variable to exactly zero. The bias in the ridge estimates may be problematic in subsequent performing hypothesis testing, but I am not an expert on that. On the other hand, Lasso/elastic-net type penalties shrink many regression coefficients to zero and can therefore be viewed as model selection techniques. The problem of performing inference on models that were selected based on data is usually referred to as the selective inference problem or post-selection inference. This field has seen many developments in recent years.
The main problem with performing inference after model selection is that selection truncates the sample space. As a simple example, suppose that we observe $y\sim N(\mu,1)$ and only want to estimate $\mu$ if we have evidence that it is larger than zero. Then, we estimate $\mu$ if $|y| > c >0$ for some pre-specified threshold $c$. In such a case, we only observe $y$ if it is larger than $c$ in absolute value and therefore $y$ is no longer normal but truncated normal.
Similarly, the Lasso (or elastic net) constrains the sample space in such a way as to ensure that the selected model has been selected. This truncation is more complicated, but can be described analytically.
Based on this insight, one can perform inference based on the truncated distribution of the data to obtain valid test statistics. For confidence intervals and test statistics, see the work of Lee et al. (2016):
Exact post-selection inference, with application to the lasso
Their methods are implemented in the R package selectiveInference.
Optimal estimation (and testing) after model selection is discussed in (for the lasso):
Tractable Post-Selection Maximum Likelihood Inference for the Lasso | Cornell University Statistics Archives
and their (far less comprehensive) software package is available in:
selectiveMLE by ammeir2 | GitHub
11,165
Using regularization when doing statistical inference
The term "regularization" covers a very wide variety of methods. For the purpose of this answer, I am going to narrow in to mean "penalized optimization", i.e. adding an $L_1$ or $L_2$ penalty to your optimization problem.
If that's the case, then the answer is a definitive "Yes! Well kinda".
The reason for this is that adding an $L_1$ or $L_2$ penalty to the likelihood function leads to exactly the same mathematical function as combining either a Laplace or Gaussian prior with a likelihood to get the posterior distribution (elevator pitch: the prior distribution describes uncertainty about the parameters before seeing data, the posterior distribution describes uncertainty about the parameters after seeing data), which leads to Bayesian statistics 101. Bayesian statistics is very popular and performed all the time with the goal of inference on estimated effects.
That was the "Yes!" part. The "Well kinda" is that optimizing your posterior distribution is done and is called "Maximum A Posteriori" (MAP) estimation. But most Bayesians don't use MAP estimation; they sample from the posterior distribution using MCMC algorithms! This has several advantages, one being that it tends to have less downward bias in the variance components.
For the sake of brevity, I have tried not to go into details about Bayesian statistics, but if this interests you, that's the place to start looking.
11,166
Using regularization when doing statistical inference
I would particularly recommend LASSO if you are attempting to use regression for inference based on "which predictors are statistically significant"--but not for the reason you might expect.
In practice, predictors in a model tend to be correlated. Even if there isn't substantial multicollinearity, regression's choice of "significant" predictors among the set of correlated predictors can vary substantially from sample to sample.
So yes, go ahead and do LASSO for your regression. Then repeat the complete model building process (including cross-validation to pick the LASSO penalty) on multiple bootstrap samples (a few hundred or so) from the original data. See how variable the set of "significant" predictors selected this way can be.
Unless your predictors are nearly orthogonal to each other, this process should make you think twice about interpreting p-values in a regression in terms of which individual predictors are "significantly" important.
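The procedure above can be sketched in base R; to stay self-contained this uses a toy coordinate-descent lasso with a fixed penalty (rather than a cross-validated glmnet fit, which is what you would use in practice), and the data-generating setup is invented for illustration. Tallying which predictors come out nonzero across bootstrap samples exposes the instability of the selected set under correlation.

```r
set.seed(7)
n <- 100; p <- 6
Z <- rnorm(n)                                  # shared factor -> correlated predictors
X <- scale(matrix(rnorm(n * p), n, p) + Z)
y <- as.numeric(X[, 1] + 0.5 * X[, 2] + rnorm(n))
y <- y - mean(y)

soft <- function(z, g) sign(z) * pmax(abs(z) - g, 0)

# Toy lasso via cyclic coordinate descent (assumes centered y, standardized X)
lasso_cd <- function(X, y, lambda, iters = 100) {
  b <- rep(0, ncol(X)); n <- nrow(X)
  for (it in seq_len(iters)) {
    for (j in seq_len(ncol(X))) {
      r <- y - X[, -j, drop = FALSE] %*% b[-j]
      b[j] <- soft(crossprod(X[, j], r) / n, lambda) / (crossprod(X[, j]) / n)
    }
  }
  b
}

B <- 50; lambda <- 0.3
counts <- rep(0, p)
for (rep_i in seq_len(B)) {
  i <- sample(n, replace = TRUE)               # bootstrap sample
  Xb <- scale(X[i, ]); yb <- y[i] - mean(y[i])
  counts <- counts + (abs(lasso_cd(Xb, yb, lambda)) > 1e-8)
}
counts / B  # how often each predictor was "selected" across bootstraps
```

With highly correlated predictors, the selection frequencies of the truly null variables are typically well away from both 0 and 1, which is exactly the instability described above.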
11,167
Does Dimensionality curse effect some models more than others?
In general, the curse of dimensionality makes the problem of searching through a space much more difficult, and affects the majority of algorithms that "learn" through partitioning their vector space. The higher the dimensionality of our optimization problem, the more data we need to fill the space that we are optimizing over.
Generalized Linear Models
Linear models suffer immensely from the curse of dimensionality. Linear models partition the space with a single linear hyperplane. Even if we are not looking to directly compute $$\hat{\beta} = (X^{'}X)^{-1}X^{'}y$$ the problem posed is still very sensitive to collinearity, and can be considered "ill conditioned" without some type of regularization. In very high-dimensional spaces, there is more than one plane that can be fitted to your data, and without a proper type of regularization the model can behave very poorly. Specifically, what regularization does is try to force one unique solution to exist. Both L1 and squared L2 regularization try to minimize the weights, and can be interpreted as selecting the model with the smallest weights as the most "correct" model. This can be thought of as a mathematical formulation of Occam's Razor.
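As a small illustration of the ill-conditioning point (toy numbers, base R): with perfectly collinear columns, $X'X$ is singular, and adding even a small ridge term restores a unique solution.

```r
X <- cbind(1:5, (1:5) * 2)        # second column is exactly twice the first
XtX <- crossprod(X)

det(XtX)                          # 0: the normal equations have no unique solution
det(XtX + 0.1 * diag(2))          # positive: the ridge-regularized problem is well posed
```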
Decision Trees
Decision trees also suffer from the curse of dimensionality. Decision trees directly partition the sample space at each node. As the sample space grows, the distances between data points increase, which makes it much harder to find a "good" split.
Random Forests
Random Forests use a collection of decision trees to make their predictions. But instead of using all the features of your problem, individual trees only use a subset of the features. This reduces the space that each tree is optimizing over and can help combat the problem of the curse of dimensionality.
Boosted Trees
Boosting algorithms such as AdaBoost suffer from the curse of dimensionality and tend to overfit if regularization is not utilized. I won't go in depth, because the post Is AdaBoost less or more prone to overfitting?
explains the reason why better than I could.
Neural Networks
Neural networks are weird in the sense that they both are and are not impacted by the curse of dimensionality, depending on the architecture, activations, depth, etc. To reiterate, the curse of dimensionality is the problem that a huge number of points is necessary to cover an input space in high dimensions. One way to interpret deep neural networks is to think of all layers except the very last one as doing a complicated projection of a high-dimensional manifold into a lower-dimensional manifold, on top of which the last layer then classifies. For example, in a convolutional network for classification where the last layer is a softmax layer, we can interpret the architecture as doing a non-linear projection onto a smaller dimension and then doing multinomial logistic regression (the softmax layer) on that projection. So in a sense the compressed representation of our data allows us to circumvent the curse of dimensionality. Again, this is one interpretation; in reality the curse of dimensionality does in fact impact neural networks, but not at the same level as the models outlined above.
SVM
SVMs tend not to overfit as much as generalized linear models due to the extensive regularization that occurs. Check out this post SVM, Overfitting, curse of dimensionality for more detail.
K-NN, K-Means
Both K-means and K-NN are greatly impacted by the curse of dimensionality, since both of them use the squared L2 distance measure. As the number of dimensions increases, the distance between data points increases as well. This is why you need a greater number of points to cover more space, in the hope that distance will remain descriptive.
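The distance-concentration effect behind this can be shown in a few lines of base R (uniform random points; the sample sizes are arbitrary): the gap between a point's nearest and farthest neighbour shrinks relative to the nearest distance as the dimension grows, so the L2 distance becomes less and less discriminative.

```r
set.seed(1)
relative_contrast <- function(d, n = 500) {
  X <- matrix(runif(n * d), n, d)              # n random points in [0,1]^d
  q <- X[1, ]                                  # query point
  dists <- sqrt(colSums((t(X[-1, ]) - q)^2))   # distances to all other points
  (max(dists) - min(dists)) / min(dists)       # contrast of farthest vs nearest
}
sapply(c(2, 10, 100, 1000), relative_contrast) # shrinks as d grows
```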
Feel free to ask specifics about the models, since my answers are pretty general. Hope this helps.
11,168
How to determine the accuracy of regression? Which measure should be used?
You should ask yourself what were you trying to achieve with your modeling approach.
As you correctly said "how far from true solution am I" is a good starting point (notice this is also true for classification, we only get into specifics when we run into dichotomization, usually in more CS oriented machine learning, such as trees or SVMs).
So, let's measure it, shall we? If $x_i$ is the truth and $\hat x_i$ your model output, for sample $i$, here's the error:
$$\epsilon_i = x_i - \hat x_i$$
You could measure the mean error $\frac{1}{n}\sum_i \epsilon_i$, but it turns out that, in doing so, positive and negative errors cancel, giving you no way to know how well your model actually performs!
So, what people do in general, is to use these measures:
Squared error:
$$\text{SE}=\sum_i^n \epsilon_i^2$$
Mean squared error:
$$\text{MSE}=1/n \times \text{SE}$$
Root mean squared error:
$$\text{RMSE}=\sqrt{\text{MSE}}$$
Relative mean squared error (do not confuse this for the RMSE, root mean squared error):
$$\text{rMSE}={n-1\over n}{{\sum_i^n \epsilon_i^2}\over {\sum_i^n (x_i - \mathbb E(x))^2}}= {\text{MSE} \over Var(x)}$$
$\text{R}^2$:
$$\text{R}^2=1 - \text{rMSE}$$
Absolute error:
$$\text{AE}=\sum_i^n \sqrt{\epsilon_i^2}=\sum_i^n |{\epsilon_i}|$$
Mean absolute error:
$$\text{MAE}=1/n \times \text{AE}$$
And many, many others. You can find them around on the site (see for example How to interpret error measures?).
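For reference, these measures translate directly into base R (with simulated truth x and predictions xhat, purely for illustration); the last line checks that the rMSE defined above really equals $\text{MSE}/\text{Var}(x)$ when $\text{Var}$ is the sample variance.

```r
set.seed(3)
x    <- rnorm(100, mean = 5)          # "truth"
xhat <- x + rnorm(100, sd = 0.5)      # model output with some error
eps  <- x - xhat
n    <- length(x)

SE   <- sum(eps^2)                    # squared error
MSE  <- SE / n                        # mean squared error
RMSE <- sqrt(MSE)                     # root mean squared error
rMSE <- MSE / var(x)                  # relative MSE (var() is the sample variance)
R2   <- 1 - rMSE
MAE  <- mean(abs(eps))                # mean absolute error

all.equal(rMSE, (n - 1) / n * SE / sum((x - mean(x))^2))  # TRUE
```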
11,169
Restricted Boltzmann machines vs multilayer neural networks
First of all, RBMs are certainly different from normal neural nets, and when used properly they achieve much better performance. Also, training a few layers of an RBM and then using the found weights as a starting point for a multilayer NN often yields better results than simply using a multilayer NN.
The best pointer I can think of is this course on Coursera, taught by Geoffrey Hinton, who is one of the people responsible for RBMs:
https://class.coursera.org/neuralnets-2012-001/class/index
the videos on RBMs and Denoising Autoencoders are a valuable learning resource for anyone interested in the topic.
As to implementation in R, I don't know of any either, but if you want to implement it, better not to use pure R (unless your data is not too big). Training an RBM takes quite a long time, and if you use pure R instead of R with C the runtime can grow significantly.
11,170
Restricted Boltzmann machines vs multilayer neural networks
In R you can use neuralnet and RSNNS (which provides an interface to the Stuttgart Neural Network Simulator) to fit standard multilayer neural networks, but there are differences to RBM.
Regarding implementing deep neural nets in R, I think the only worthwhile strategy would be to interface existing FOSS implementations, which is usually a much better solution than just reimplementing things on your own (I never quite understood why everyone needs to reinvent the wheel). R offers a lot of functionality to do this and you can leverage the data handling of R with the speed and ready-to-use aspects of existing solutions. For example, one might interface MDP with the Python/R interfacing capabilities, see e.g., this paper.
Edit: Andrew Landgraf from Statistically Significant provides some R Code for RBM.
11,171
Efficient calculation of matrix inverse in R
Have you tried what cardinal suggested and explored some of the alternative methods for computing the inverse? Let's consider a specific example:
library(MASS)
k <- 2000
rho <- .3
S <- matrix(rep(rho, k*k), nrow=k)
diag(S) <- 1
dat <- mvrnorm(10000, mu=rep(0,k), Sigma=S) ### be patient!
R <- cor(dat)
system.time(RI1 <- solve(R))
system.time(RI2 <- chol2inv(chol(R)))
system.time(RI3 <- qr.solve(R))
all.equal(RI1, RI2)
all.equal(RI1, RI3)
So, this is an example of a $2000 \times 2000$ correlation matrix for which we want the inverse. On my laptop (Core i5, 2.50 GHz), solve takes 8-9 seconds, chol2inv(chol()) takes a bit over 4 seconds, and qr.solve() takes 17-18 seconds (multiple runs of the code are suggested to get stable results).
So the inverse via the Choleski decomposition is about twice as fast as solve. There may of course be even faster ways of doing that. I just explored some of the most obvious ones here. And as already mentioned in the comments, if the matrix has a special structure, then this probably can be exploited for more speed.
11,172
Efficient calculation of matrix inverse in R
If you are working with a covariance matrix or any other positive definite matrix, you can use pd.solve from the mnormt package, which is faster.
Following Wolfgang's example:
library(MASS)
library(mnormt)
k <- 2000
rho <- .3
S <- matrix(rep(rho, k*k), nrow=k)
diag(S) <- 1
dat <- mvrnorm(10000, mu=rep(0,k), Sigma=S) ### be patient!
R <- cor(dat)
system.time(RI1 <- solve(R))
system.time(RI2 <- chol2inv(chol(R)))
system.time(RI3 <- qr.solve(R))
system.time(RI4 <- pd.solve(R))
> system.time(RI1 <- solve(R))
   user  system elapsed
  13.21    0.03   13.76
> system.time(RI2 <- chol2inv(chol(R)))
   user  system elapsed
   5.62    0.05    5.80
> system.time(RI3 <- qr.solve(R))
   user  system elapsed
  20.42    0.09   21.10
> system.time(RI4 <- pd.solve(R))
   user  system elapsed
   5.53    0.00    5.61
11,173
Top five classifiers to try first
Random Forest
Fast, robust, good accuracy, in most cases nothing to tune, requires no normalization, immune to collinearity, generates quite a good error approximation and a useful importance ranking as a side effect of training, trivially parallel, predicts in the blink of an eye.
Drawbacks: slower than trivial methods like kNN or NB, works best with equal classes, worse accuracy than SVM for problems desperately requiring kernel trick, is a hard black-box, does not make coffee.
11,174
Top five classifiers to try first
Gaussian process classifier (not using the Laplace approximation), preferably with marginalisation rather than optimisation of the hyper-parameters. Why?
because they give a probabilistic classification
you can use a kernel function that allows you to operate directly on non-vectorial data and/or incorporate expert knowledge
they deal with the uncertainty in fitting the model properly, and you can propagate that uncertainty through to the decision making process
generally very good predictive performance.
Downsides
slow
requires a lot of memory
impractical for large scale problems.
First choice, though, would be regularised logistic regression or ridge regression [without feature selection] - for most problems, very simple algorithms work rather well and are more difficult to get wrong (in practice the differences in performance between algorithms are smaller than the differences in performance between the operators driving them).
11,175
Top five classifiers to try first
Personally, when approaching a new data set, I think you should start by looking at the whole problem. First of all, get a distribution for each categorical feature and the mean and standard deviation for each continuous feature. Then:
Delete features with more than X% missing values;
Delete categorical features when a particular value gets more than 90-95% of the relative frequency;
Delete continuous features with CV=std/mean<0.1;
Get a parameter ranking, e.g. ANOVA for continuous and Chi-square for categorical;
Get a significant subset of features;
Then I usually split classification techniques into 2 sets: white-box and black-box techniques. If you need to know 'how the classifier works' you should choose from the first set, e.g. Decision Trees or rule-based classifiers.
If you need to classify new records without building a model, you should take a look at lazy learners, e.g. KNN.
After that, I think it is better to have a trade-off between accuracy and speed: Neural Networks are a bit slower than SVMs.
This is my top five classification technique:
Decision Tree;
Rule-based classifiers;
SMO (SVM);
Naive Bayes;
Neural Networks.
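The screening rules above can be sketched in a few lines. This is my own illustration (the `screen` function, its input format, and the `max_missing` parameter are mine; the thresholds are the ones stated above):

```python
import statistics

def screen(features, max_missing=0.5):
    """Apply the screening rules above. `features` maps a feature name to a
    list of values, with None marking a missing entry (illustrative helper)."""
    kept = {}
    for name, values in features.items():
        present = [v for v in values if v is not None]
        # Rule 1: drop features with more than max_missing missing values
        if 1 - len(present) / len(values) > max_missing:
            continue
        if all(isinstance(v, str) for v in present):
            # Rule 2: drop categoricals where one value exceeds 90% frequency
            top = max(present.count(v) for v in set(present))
            if top / len(present) > 0.9:
                continue
        else:
            # Rule 3: drop continuous features with CV = std/mean < 0.1
            # (CV is undefined for mean 0, so such features are kept here)
            mean = statistics.mean(present)
            if mean != 0 and statistics.pstdev(present) / mean < 0.1:
                continue
        kept[name] = values
    return kept
```

For example, a near-constant continuous feature or a categorical dominated by one value would be dropped, while a more variable feature survives.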
|
Top five classifiers to try first
|
By myself when you are approaching to a new data set you should start to watch to the whole problem. First of all get a distribution for categorical features and mean and standard deviations for each
|
Top five classifiers to try first
In my experience, when you are approaching a new data set you should start by looking at the whole problem. First of all, get the distribution of each categorical feature and the mean and standard deviation of each continuous feature. Then:
Delete features with more than X% missing values;
Delete categorical features when a single value accounts for more than 90-95% of the relative frequency;
Delete continuous features with CV=std/mean<0.1;
Get a parameter ranking, e.g. ANOVA for continuous and Chi-square for categorical;
Get a significant subset of features;
Then I usually split classification techniques into 2 sets: white-box and black-box techniques. If you need to know 'how the classifier works' you should choose from the first set, e.g. Decision Trees or rule-based classifiers.
If you need to classify new records without building a model, you should take a look at lazy learners, e.g. KNN.
After that, I think it is better to weigh the trade-off between accuracy and speed: Neural Networks are a bit slower than SVMs.
These are my top five classification techniques:
Decision Tree;
Rule-based classifiers;
SMO (SVM);
Naive Bayes;
Neural Networks.
|
Top five classifiers to try first
By myself when you are approaching to a new data set you should start to watch to the whole problem. First of all get a distribution for categorical features and mean and standard deviations for each
|
11,176
|
Is the second parameter for the normal distribution the variance or std deviation?
|
There's a choice of parameterizations of the normal distribution, and none is inherently more correct. Sometimes one or another is more convenient, and arguably one or another is more conventional in a given situation.
From what I've seen, when statisticians* are writing algebraic formulas, the most common convention is (by far) $N(\mu,\sigma^2)$, so $N(0,4)$ would imply the variance is $4$. However the convention is not completely universal so while I'd fairly confidently interpret the intent as "variance 4", it's hard to be completely sure without some additional indication (often, careful examination will yield some additional clue, such as an earlier or subsequent use by the same author).
Speaking for myself, I try to write an explicit square in there to reduce confusion. For example, rather than write $N(0,4)$, I would usually tend to write $N(0,2^2)$, which more clearly implies that the variance is 4 and the sd is 2.
When calling functions in statistics packages (such as R's dnorm for one example), the arguments are nearly always $(\mu, \sigma)$. However, as usεr11852 points out, check the documentation! Of course in the worst case - missing or ambiguous documentation, unhelpful argument names - a little experimentation would resolve any dilemma about which it uses.
* here I mean people whose primary training is in statistics rather than learning statistics for application to some other area; conventions can vary across application areas.
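As a concrete stdlib example of the software convention: Python's `statistics.NormalDist` is parameterized by $(\mu, \sigma)$, i.e. by the standard deviation, just like R's dnorm (a minimal sketch, not part of the original answer):

```python
from statistics import NormalDist

# NormalDist takes (mu, sigma), where sigma is the standard deviation,
# so this object represents N(0, 2^2), i.e. variance 4.
d = NormalDist(mu=0, sigma=2)
print(d.stdev)     # 2.0
print(d.variance)  # 4.0
```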
|
Is the second parameter for the normal distribution the variance or std deviation?
|
There's a choice of parameterizations of the normal distribution, and none is inherently more correct. Sometimes one or another is more convenient, and arguably one or another is more conventional in
|
Is the second parameter for the normal distribution the variance or std deviation?
There's a choice of parameterizations of the normal distribution, and none is inherently more correct. Sometimes one or another is more convenient, and arguably one or another is more conventional in a given situation.
From what I've seen, when statisticians* are writing algebraic formulas, the most common convention is (by far) $N(\mu,\sigma^2)$, so $N(0,4)$ would imply the variance is $4$. However the convention is not completely universal so while I'd fairly confidently interpret the intent as "variance 4", it's hard to be completely sure without some additional indication (often, careful examination will yield some additional clue, such as an earlier or subsequent use by the same author).
Speaking for myself, I try to write an explicit square in there to reduce confusion. For example, rather than write $N(0,4)$, I would usually tend to write $N(0,2^2)$, which more clearly implies that the variance is 4 and the sd is 2.
When calling functions in statistics packages (such as R's dnorm for one example), the arguments are nearly always $(\mu, \sigma)$. However, as usεr11852 points out, check the documentation! Of course in the worst case - missing or ambiguous documentation, unhelpful argument names - a little experimentation would resolve any dilemma about which it uses.
* here I mean people whose primary training is in statistics rather than learning statistics for application to some other area; conventions can vary across application areas.
|
Is the second parameter for the normal distribution the variance or std deviation?
There's a choice of parameterizations of the normal distribution, and none is inherently more correct. Sometimes one or another is more convenient, and arguably one or another is more conventional in
|
11,177
|
Is the second parameter for the normal distribution the variance or std deviation?
|
From an earlier answer 7 years ago: ".... there are at least
three different conventions for interpreting $X \sim N(a,b)$ as
a normal random variable. Usually, $a$ is the mean $\mu_X$
but $b$ can have different meanings.
$X \sim N(a,b)$ means that the standard deviation of $X$ is $b$.
$X \sim N(a,b)$ means that the variance of $X$ is $b$.
$X \sim N(a,b)$ means that the variance of $X$ is $\dfrac{1}{b}$.
Fortunately, $X \sim N(0,1)$
means that $X$ is a standard
normal random variable in all three of the above conventions! "
|
Is the second parameter for the normal distribution the variance or std deviation?
|
From an earlier answer 7 years ago: ".... there are at least
three different conventions for interpreting $X \sim N(a,b)$ as
a normal random variable. Usually, $a$ is the mean $\mu_X$
but $b$ can h
|
Is the second parameter for the normal distribution the variance or std deviation?
From an earlier answer 7 years ago: ".... there are at least
three different conventions for interpreting $X \sim N(a,b)$ as
a normal random variable. Usually, $a$ is the mean $\mu_X$
but $b$ can have different meanings.
$X \sim N(a,b)$ means that the standard deviation of $X$ is $b$.
$X \sim N(a,b)$ means that the variance of $X$ is $b$.
$X \sim N(a,b)$ means that the variance of $X$ is $\dfrac{1}{b}$.
Fortunately, $X \sim N(0,1)$
means that $X$ is a standard
normal random variable in all three of the above conventions! "
|
Is the second parameter for the normal distribution the variance or std deviation?
From an earlier answer 7 years ago: ".... there are at least
three different conventions for interpreting $X \sim N(a,b)$ as
a normal random variable. Usually, $a$ is the mean $\mu_X$
but $b$ can h
|
11,178
|
Which Theories of Causality Should I know?
|
Strictly speaking, "Granger causality" is not at all about causality. It's about predictive ability/time precedence: you want to check whether one time series is useful to predict another time series---it's suited for claims like "usually A happens before B happens" or "knowing A helps me predict B will happen, but not the other way around" (even after considering all past information about $B$). The choice of this name was very unfortunate, and it's a cause of several misconceptions.
While it's almost uncontroversial that a cause has to precede its effect in time, to draw causal conclusions with time precedence you still need to claim the absence of confounding, among other sources of spurious associations.
Now regarding the Potential Outcomes (Neyman-Rubin) versus Causal Graphs/Structural Equation Modeling (Pearl), I would say this is a false dilemma and you should learn both.
First, it's important to notice that these are not opposite views about causality. As Pearl puts it, there's a hierarchy regarding (causal) inference tasks:
Observational prediction
Prediction under intervention
Counterfactuals
For the first task, you only need to know the joint distribution of observed variables. For the second task, you need to know the joint distribution and the causal structure. For the last task, of counterfactuals, you will further need some information about the functional forms of your structural equation model.
So, when talking about counterfactuals, there's a formal equivalency between both perspectives. The difference is that potential outcomes take counterfactual statements as primitives and in DAGs counterfactuals are seen as derived from the structural equations. However, you might ask, if they are "equivalent", why bother learning both? Because there are differences in terms of "easiness" to express and derive things.
For example, try to express the concept of M-Bias using only potential outcomes --- I've never seen a good one. In fact, my experience so far is that researchers who never studied graphs aren't even aware of it. Also, casting the substantive assumptions of your model in graphical language will make it computationally easier to derive its empirical testable implications and answer questions of identifiability. On the other hand, sometimes people will find it easier to first think directly about the counterfactuals themselves, and combine this with parametric assumptions to answer very specific queries.
There's much more one could say, but the point here is that you should learn how to "speak both languages". For references, you can check out how to get started here.
|
Which Theories of Causality Should I know?
|
Strictly speaking, "Granger causality" is not at all about causality. It's about predictive ability/time precedence, you want to check whether one time series is useful to predict another time series-
|
Which Theories of Causality Should I know?
Strictly speaking, "Granger causality" is not at all about causality. It's about predictive ability/time precedence: you want to check whether one time series is useful to predict another time series---it's suited for claims like "usually A happens before B happens" or "knowing A helps me predict B will happen, but not the other way around" (even after considering all past information about $B$). The choice of this name was very unfortunate, and it's a cause of several misconceptions.
While it's almost uncontroversial that a cause has to precede its effect in time, to draw causal conclusions with time precedence you still need to claim the absence of confounding, among other sources of spurious associations.
Now regarding the Potential Outcomes (Neyman-Rubin) versus Causal Graphs/Structural Equation Modeling (Pearl), I would say this is a false dilemma and you should learn both.
First, it's important to notice that these are not opposite views about causality. As Pearl puts it, there's a hierarchy regarding (causal) inference tasks:
Observational prediction
Prediction under intervention
Counterfactuals
For the first task, you only need to know the joint distribution of observed variables. For the second task, you need to know the joint distribution and the causal structure. For the last task, of counterfactuals, you will further need some information about the functional forms of your structural equation model.
So, when talking about counterfactuals, there's a formal equivalency between both perspectives. The difference is that potential outcomes take counterfactual statements as primitives and in DAGs counterfactuals are seen as derived from the structural equations. However, you might ask, if they are "equivalent", why bother learning both? Because there are differences in terms of "easiness" to express and derive things.
For example, try to express the concept of M-Bias using only potential outcomes --- I've never seen a good one. In fact, my experience so far is that researchers who never studied graphs aren't even aware of it. Also, casting the substantive assumptions of your model in graphical language will make it computationally easier to derive its empirical testable implications and answer questions of identifiability. On the other hand, sometimes people will find it easier to first think directly about the counterfactuals themselves, and combine this with parametric assumptions to answer very specific queries.
There's much more one could say, but the point here is that you should learn how to "speak both languages". For references, you can check out how to get started here.
|
Which Theories of Causality Should I know?
Strictly speaking, "Granger causality" is not at all about causality. It's about predictive ability/time precedence, you want to check whether one time series is useful to predict another time series-
|
11,179
|
Why is ROC AUC equivalent to the probability that two randomly-selected samples are correctly ranked? [duplicate]
|
It's easy to see once you've obtained a closed-form formula for the AUC.
Since we have a finite number of samples $\{(x_i, y_i)\}_{i=1}^N$, we'll have a finite number of points on the ROC curve. We do linear interpolation in between.
First, some definitions. Suppose we'd like to evaluate an algorithm $A(x)$ that outputs a probability of $x$ lying in the positive class $+1$. Let's define $N_+$ as the number of samples in the positive class $+1$ and $N_-$ as the number of samples in the negative class $-1$. Now, for a threshold $\tau$ let's define False-Positive-Rate (FPR, aka 1-specificity) and True-Positive-Rate (TPR, aka sensitivity):
$$
\text{TPR}(\tau) = \frac{\sum_{i=1}^N [y_i = +1] [A(x_i) \ge \tau]}{N_+}
\quad \text{and} \quad
\text{FPR}(\tau) = \frac{\sum_{i=1}^N [y_i = -1] [A(x_i) \ge \tau]}{N_-}
$$
(where $[\text{boolean expression}]$ is 1 if expression is true, and 0 otherwise). Then, ROC curve is built from points of the form $(\text{FPR}(\tau), \text{TPR}(\tau))$ for different values of $\tau$. Moreover, it's easy to see that if we order our samples $x_{(i)}$ (note the parentheses) according to the algorithm's output $A(x_i)$, then neither $\text{TPR}$ nor $\text{FPR}$ changes for $\tau$ between consecutive samples $A(x_{(i)}) < \tau < A(x_{(i+1)})$. So it's enough to evaluate FPR and TPR only for $\tau \in \{A(x_{(1)}), \dots, A(x_{(N)})\}$. For $k^{\text{th}}$ point we have
$$
\text{TPR}_k = \frac{\sum_{i=k}^N [y_{(i)} = +1]}{N_+}
\quad \text{and} \quad
\text{FPR}_k = \frac{\sum_{i=k}^N [y_{(i)} = -1]}{N_-}
$$
(Note both sequences are non-increasing in $k$.) These sequences define the x and y coordinates of points on the ROC curve. Next, we linearly interpolate these points to get the curve itself and calculate the area under the curve (using the formula for the area of a trapezoid):
$$
\begin{align*}
\text{AUC} &= \sum_{k=1}^{N-1} \frac{\text{TPR}_{k+1} + \text{TPR}_{k}}{2} (\text{FPR}_{k} - \text{FPR}_{k+1}) \\
&= \sum_{k=1}^{N-1} \frac{\sum_{i=k+1}^N [y_{(i)} = +1] + \tfrac{1}{2} [y_{(k)} = +1]}{N_+} \frac{[y_{(k)} = -1]}{N_-} \\
&= \frac{1}{N_+ N_-} \sum_{k=1}^{N-1} \sum_{i=k+1}^N [y_{(i)} = +1] [y_{(k)} = -1]
= \frac{1}{N_+ N_-} \sum_{k < i} [y_{(k)} < y_{(i)}]
\end{align*}
$$
Here I used the fact that $[y = -1] [y = +1] = 0$ for any $y$.
So there you have it: AUC is proportional to the number of correctly ordered pairs, which is proportional to the probability of a random pair of samples being ranked according to their labels.
EDIT (6 years later): Since for $a, b \in \{-1, +1\}$ we have $[a < b] = 1$ only when $a = -1$ and $b = +1$, it's easy to see that
$$
\frac{1}{N_+ N_-} \sum_{k < i} [y_{(k)} < y_{(i)}] = \frac{1}{N_+ N_-} \sum_{\substack{k < i \\ y_{(i)} = 1 \\ y_{(k)} = -1}} [y_{(k)} < y_{(i)}]
$$
In essence, we form all possible negative-positive pairs and see what fraction of them is correctly ordered according to our algorithm $A$, that is, $A($positive sample$)\; > A($negative sample$)$.
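A quick numerical sanity check of this equivalence (my own sketch in plain Python; the data are random and I assume no tied scores): compute the AUC once by the trapezoidal rule over the empirical ROC points, and once as the fraction of correctly ordered negative-positive pairs.

```python
import random

random.seed(0)
labels = [random.choice([-1, 1]) for _ in range(200)]
# Mildly informative scores: positives tend to score higher
scores = [random.random() + (0.5 if y == 1 else 0.0) for y in labels]

pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == -1]

# Pairwise definition: fraction of (negative, positive) pairs with
# A(positive) > A(negative)
auc_pairs = sum(p > q for p in pos for q in neg) / (len(pos) * len(neg))

# Trapezoidal area under the empirical ROC curve, sweeping the
# threshold down through the sorted scores
tp = fp = 0
pts = [(0.0, 0.0)]
for s, y in sorted(zip(scores, labels), reverse=True):
    if y == 1:
        tp += 1
    else:
        fp += 1
    pts.append((fp / len(neg), tp / len(pos)))
auc_trap = sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# With no ties, the two definitions agree exactly (up to float rounding)
assert abs(auc_trap - auc_pairs) < 1e-9
```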
|
Why is ROC AUC equivalent to the probability that two randomly-selected samples are correctly ranked
|
It's easy to see once you obtained a closed-form formula for AUC.
Since we have finite number of samples $\{(x_i, y_i)\}_{i=1}^N$, we'll have finite number of points on the ROC curve. We do linear int
|
Why is ROC AUC equivalent to the probability that two randomly-selected samples are correctly ranked? [duplicate]
It's easy to see once you've obtained a closed-form formula for the AUC.
Since we have a finite number of samples $\{(x_i, y_i)\}_{i=1}^N$, we'll have a finite number of points on the ROC curve. We do linear interpolation in between.
First, some definitions. Suppose we'd like to evaluate an algorithm $A(x)$ that outputs a probability of $x$ lying in the positive class $+1$. Let's define $N_+$ as the number of samples in the positive class $+1$ and $N_-$ as the number of samples in the negative class $-1$. Now, for a threshold $\tau$ let's define False-Positive-Rate (FPR, aka 1-specificity) and True-Positive-Rate (TPR, aka sensitivity):
$$
\text{TPR}(\tau) = \frac{\sum_{i=1}^N [y_i = +1] [A(x_i) \ge \tau]}{N_+}
\quad \text{and} \quad
\text{FPR}(\tau) = \frac{\sum_{i=1}^N [y_i = -1] [A(x_i) \ge \tau]}{N_-}
$$
(where $[\text{boolean expression}]$ is 1 if expression is true, and 0 otherwise). Then, ROC curve is built from points of the form $(\text{FPR}(\tau), \text{TPR}(\tau))$ for different values of $\tau$. Moreover, it's easy to see that if we order our samples $x_{(i)}$ (note the parentheses) according to the algorithm's output $A(x_i)$, then neither $\text{TPR}$ nor $\text{FPR}$ changes for $\tau$ between consecutive samples $A(x_{(i)}) < \tau < A(x_{(i+1)})$. So it's enough to evaluate FPR and TPR only for $\tau \in \{A(x_{(1)}), \dots, A(x_{(N)})\}$. For $k^{\text{th}}$ point we have
$$
\text{TPR}_k = \frac{\sum_{i=k}^N [y_{(i)} = +1]}{N_+}
\quad \text{and} \quad
\text{FPR}_k = \frac{\sum_{i=k}^N [y_{(i)} = -1]}{N_-}
$$
(Note both sequences are non-increasing in $k$.) These sequences define the x and y coordinates of points on the ROC curve. Next, we linearly interpolate these points to get the curve itself and calculate the area under the curve (using the formula for the area of a trapezoid):
$$
\begin{align*}
\text{AUC} &= \sum_{k=1}^{N-1} \frac{\text{TPR}_{k+1} + \text{TPR}_{k}}{2} (\text{FPR}_{k} - \text{FPR}_{k+1}) \\
&= \sum_{k=1}^{N-1} \frac{\sum_{i=k+1}^N [y_{(i)} = +1] + \tfrac{1}{2} [y_{(k)} = +1]}{N_+} \frac{[y_{(k)} = -1]}{N_-} \\
&= \frac{1}{N_+ N_-} \sum_{k=1}^{N-1} \sum_{i=k+1}^N [y_{(i)} = +1] [y_{(k)} = -1]
= \frac{1}{N_+ N_-} \sum_{k < i} [y_{(k)} < y_{(i)}]
\end{align*}
$$
Here I used the fact that $[y = -1] [y = +1] = 0$ for any $y$.
So there you have it: AUC is proportional to the number of correctly ordered pairs, which is proportional to the probability of a random pair of samples being ranked according to their labels.
EDIT (6 years later): Since for $a, b \in \{-1, +1\}$ we have $[a < b] = 1$ only when $a = -1$ and $b = +1$, it's easy to see that
$$
\frac{1}{N_+ N_-} \sum_{k < i} [y_{(k)} < y_{(i)}] = \frac{1}{N_+ N_-} \sum_{\substack{k < i \\ y_{(i)} = 1 \\ y_{(k)} = -1}} [y_{(k)} < y_{(i)}]
$$
In essence, we form all possible negative-positive pairs and see what fraction of them is correctly ordered according to our algorithm $A$, that is, $A($positive sample$)\; > A($negative sample$)$.
|
Why is ROC AUC equivalent to the probability that two randomly-selected samples are correctly ranked
It's easy to see once you obtained a closed-form formula for AUC.
Since we have finite number of samples $\{(x_i, y_i)\}_{i=1}^N$, we'll have finite number of points on the ROC curve. We do linear int
|
11,180
|
How do I calculate the variance of the OLS estimator $\beta_0$, conditional on $x_1, \ldots , x_n$?
|
This is a self-study question, so I provide hints that will hopefully help to find the solution, and I'll edit the answer based on your feedback/progress.
The parameter estimates that minimize the sum of squares are
\begin{align}
\hat{\beta}_0 &= \bar{y} - \hat{\beta}_1 \bar{x} , \\
\hat{\beta}_1 &= \frac{ \sum_{i = 1}^n(x_i - \bar{x})y_i }{ \sum_{i = 1}^n(x_i - \bar{x})^2 } .
\end{align}
To get the variance of $\hat{\beta}_0$, start from its expression and substitute the expression of $\hat{\beta}_1$, and do the algebra
$$
{\rm Var}(\hat{\beta}_0) = {\rm Var} (\bar{Y} - \hat{\beta}_1 \bar{x}) = \ldots
$$
Edit:
We have
\begin{align}
{\rm Var}(\hat{\beta}_0)
&= {\rm Var} (\bar{Y} - \hat{\beta}_1 \bar{x}) \\
&= {\rm Var} (\bar{Y}) + (\bar{x})^2 {\rm Var} (\hat{\beta}_1)
- 2 \bar{x} {\rm Cov} (\bar{Y}, \hat{\beta}_1).
\end{align}
The two variance terms are
$$
{\rm Var} (\bar{Y})
= {\rm Var} \left(\frac{1}{n} \sum_{i = 1}^n Y_i \right)
= \frac{1}{n^2} \sum_{i = 1}^n {\rm Var} (Y_i)
= \frac{\sigma^2}{n},
$$
and
\begin{align}
{\rm Var} (\hat{\beta}_1)
&= \frac{ 1 }{ \left[\sum_{i = 1}^n(x_i - \bar{x})^2 \right]^2 }
\sum_{i = 1}^n(x_i - \bar{x})^2 {\rm Var} (Y_i) \\
&= \frac{ \sigma^2 }{ \sum_{i = 1}^n(x_i - \bar{x})^2 } ,
\end{align}
and the covariance term is
\begin{align}
{\rm Cov} (\bar{Y}, \hat{\beta}_1)
&= {\rm Cov} \left\{
\frac{1}{n} \sum_{i = 1}^n Y_i,
\frac{ \sum_{j = 1}^n(x_j - \bar{x})Y_j }{ \sum_{i = 1}^n(x_i - \bar{x})^2 }
\right \} \\
&= \frac{1}{n} \frac{ 1 }{ \sum_{i = 1}^n(x_i - \bar{x})^2 }
{\rm Cov} \left\{ \sum_{i = 1}^n Y_i, \sum_{j = 1}^n(x_j - \bar{x})Y_j \right\} \\
&= \frac{ 1 }{ n \sum_{i = 1}^n(x_i - \bar{x})^2 }
\sum_{i = 1}^n \sum_{j = 1}^n (x_j - \bar{x}) \, {\rm Cov}(Y_i, Y_j) \\
&= \frac{ 1 }{ n \sum_{i = 1}^n(x_i - \bar{x})^2 }
\sum_{j = 1}^n (x_j - \bar{x}) \, \sigma^2 \\
&= 0
\end{align}
since ${\rm Cov}(Y_i, Y_j) = \sigma^2$ if $i = j$ and $0$ if $i \neq j$ (the $Y_i$ are independent), and $\sum_{j = 1}^n (x_j - \bar{x})=0$.
And since
$$\sum_{i = 1}^n(x_i - \bar{x})^2
= \sum_{i = 1}^n x_i^2 - 2 \bar{x} \sum_{i = 1}^n x_i
+ \sum_{i = 1}^n \bar{x}^2
= \sum_{i = 1}^n x_i^2 - n \bar{x}^2,
$$
we have
\begin{align}
{\rm Var}(\hat{\beta}_0)
&= \frac{\sigma^2}{n} + \frac{ \sigma^2 \bar{x}^2}{ \sum_{i = 1}^n(x_i - \bar{x})^2 } \\
&= \frac{\sigma^2 }{ n \sum_{i = 1}^n(x_i - \bar{x})^2 }
\left\{ \sum_{i = 1}^n(x_i - \bar{x})^2 + n \bar{x}^2 \right\} \\
&= \frac{\sigma^2 \sum_{i = 1}^n x_i^2}{ n \sum_{i = 1}^n(x_i - \bar{x})^2 }.
\end{align}
Edit 2
Why do we have
${\rm var} ( \sum_{i = 1}^n Y_i) = \sum_{i = 1}^n {\rm Var} (Y_i) $?
The assumed model is $ Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$, where the $\epsilon_i$ are independent and identically distributed random variables with ${\rm E}(\epsilon_i) = 0$ and ${\rm var}(\epsilon_i) = \sigma^2$.
Once we have a sample, the $X_i$ are known, the only random terms are the $\epsilon_i$. Recalling that for a random variable $Z$ and a constant $a$, we have ${\rm var}(a+Z) = {\rm var}(Z)$. Thus,
\begin{align}
{\rm var} \left( \sum_{i = 1}^n Y_i \right)
&= {\rm var} \left( \sum_{i = 1}^n \beta_0 + \beta_1 X_i + \epsilon_i \right)\\
&= {\rm var} \left( \sum_{i = 1}^n \epsilon_i \right)
= \sum_{i = 1}^n \sum_{j = 1}^n {\rm cov} (\epsilon_i, \epsilon_j)\\
&= \sum_{i = 1}^n {\rm cov} (\epsilon_i, \epsilon_i)
= \sum_{i = 1}^n {\rm var} (\epsilon_i)\\
&= \sum_{i = 1}^n {\rm var} (\beta_0 + \beta_1 X_i + \epsilon_i)
= \sum_{i = 1}^n {\rm var} (Y_i).\\
\end{align}
The 4th equality holds as ${\rm cov} (\epsilon_i, \epsilon_j) = 0$ for $i \neq j$ by the independence of the $\epsilon_i$.
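The final formula ${\rm Var}(\hat{\beta}_0) = \sigma^2 \sum_i x_i^2 / (n \sum_i (x_i - \bar{x})^2)$ is easy to verify by simulation. This is my own sketch with made-up data (plain Python, no packages):

```python
import random
import statistics

random.seed(1)
x = [1.0, 2.0, 4.0, 7.0, 11.0]
n, sigma = len(x), 2.0
xbar = sum(x) / n
sxx = sum((xi - xbar) ** 2 for xi in x)

def beta0_hat():
    # Simulate Y_i = beta_0 + beta_1 x_i + eps_i and return the OLS intercept
    y = [3.0 + 0.5 * xi + random.gauss(0.0, sigma) for xi in x]
    ybar = sum(y) / n
    b1 = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / sxx
    return ybar - b1 * xbar

draws = [beta0_hat() for _ in range(100000)]
empirical = statistics.pvariance(draws)
theoretical = sigma ** 2 * sum(xi ** 2 for xi in x) / (n * sxx)
print(empirical, theoretical)  # the two should agree to about 1%
```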
|
How do I calculate the variance of the OLS estimator $\beta_0$, conditional on $x_1, \ldots , x_n$?
|
This is a self-study question, so I provide hints that will hopefully help to find the solution, and I'll edit the answer based on your feedbacks/progress.
The parameter estimates that minimize the su
|
How do I calculate the variance of the OLS estimator $\beta_0$, conditional on $x_1, \ldots , x_n$?
This is a self-study question, so I provide hints that will hopefully help to find the solution, and I'll edit the answer based on your feedback/progress.
The parameter estimates that minimize the sum of squares are
\begin{align}
\hat{\beta}_0 &= \bar{y} - \hat{\beta}_1 \bar{x} , \\
\hat{\beta}_1 &= \frac{ \sum_{i = 1}^n(x_i - \bar{x})y_i }{ \sum_{i = 1}^n(x_i - \bar{x})^2 } .
\end{align}
To get the variance of $\hat{\beta}_0$, start from its expression and substitute the expression of $\hat{\beta}_1$, and do the algebra
$$
{\rm Var}(\hat{\beta}_0) = {\rm Var} (\bar{Y} - \hat{\beta}_1 \bar{x}) = \ldots
$$
Edit:
We have
\begin{align}
{\rm Var}(\hat{\beta}_0)
&= {\rm Var} (\bar{Y} - \hat{\beta}_1 \bar{x}) \\
&= {\rm Var} (\bar{Y}) + (\bar{x})^2 {\rm Var} (\hat{\beta}_1)
- 2 \bar{x} {\rm Cov} (\bar{Y}, \hat{\beta}_1).
\end{align}
The two variance terms are
$$
{\rm Var} (\bar{Y})
= {\rm Var} \left(\frac{1}{n} \sum_{i = 1}^n Y_i \right)
= \frac{1}{n^2} \sum_{i = 1}^n {\rm Var} (Y_i)
= \frac{\sigma^2}{n},
$$
and
\begin{align}
{\rm Var} (\hat{\beta}_1)
&= \frac{ 1 }{ \left[\sum_{i = 1}^n(x_i - \bar{x})^2 \right]^2 }
\sum_{i = 1}^n(x_i - \bar{x})^2 {\rm Var} (Y_i) \\
&= \frac{ \sigma^2 }{ \sum_{i = 1}^n(x_i - \bar{x})^2 } ,
\end{align}
and the covariance term is
\begin{align}
{\rm Cov} (\bar{Y}, \hat{\beta}_1)
&= {\rm Cov} \left\{
\frac{1}{n} \sum_{i = 1}^n Y_i,
\frac{ \sum_{j = 1}^n(x_j - \bar{x})Y_j }{ \sum_{i = 1}^n(x_i - \bar{x})^2 }
\right \} \\
&= \frac{1}{n} \frac{ 1 }{ \sum_{i = 1}^n(x_i - \bar{x})^2 }
{\rm Cov} \left\{ \sum_{i = 1}^n Y_i, \sum_{j = 1}^n(x_j - \bar{x})Y_j \right\} \\
&= \frac{ 1 }{ n \sum_{i = 1}^n(x_i - \bar{x})^2 }
\sum_{i = 1}^n \sum_{j = 1}^n (x_j - \bar{x}) \, {\rm Cov}(Y_i, Y_j) \\
&= \frac{ 1 }{ n \sum_{i = 1}^n(x_i - \bar{x})^2 }
\sum_{j = 1}^n (x_j - \bar{x}) \, \sigma^2 \\
&= 0
\end{align}
since ${\rm Cov}(Y_i, Y_j) = \sigma^2$ if $i = j$ and $0$ if $i \neq j$ (the $Y_i$ are independent), and $\sum_{j = 1}^n (x_j - \bar{x})=0$.
And since
$$\sum_{i = 1}^n(x_i - \bar{x})^2
= \sum_{i = 1}^n x_i^2 - 2 \bar{x} \sum_{i = 1}^n x_i
+ \sum_{i = 1}^n \bar{x}^2
= \sum_{i = 1}^n x_i^2 - n \bar{x}^2,
$$
we have
\begin{align}
{\rm Var}(\hat{\beta}_0)
&= \frac{\sigma^2}{n} + \frac{ \sigma^2 \bar{x}^2}{ \sum_{i = 1}^n(x_i - \bar{x})^2 } \\
&= \frac{\sigma^2 }{ n \sum_{i = 1}^n(x_i - \bar{x})^2 }
\left\{ \sum_{i = 1}^n(x_i - \bar{x})^2 + n \bar{x}^2 \right\} \\
&= \frac{\sigma^2 \sum_{i = 1}^n x_i^2}{ n \sum_{i = 1}^n(x_i - \bar{x})^2 }.
\end{align}
Edit 2
Why do we have
${\rm var} ( \sum_{i = 1}^n Y_i) = \sum_{i = 1}^n {\rm Var} (Y_i) $?
The assumed model is $ Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$, where the $\epsilon_i$ are independent and identically distributed random variables with ${\rm E}(\epsilon_i) = 0$ and ${\rm var}(\epsilon_i) = \sigma^2$.
Once we have a sample, the $X_i$ are known, the only random terms are the $\epsilon_i$. Recalling that for a random variable $Z$ and a constant $a$, we have ${\rm var}(a+Z) = {\rm var}(Z)$. Thus,
\begin{align}
{\rm var} \left( \sum_{i = 1}^n Y_i \right)
&= {\rm var} \left( \sum_{i = 1}^n \beta_0 + \beta_1 X_i + \epsilon_i \right)\\
&= {\rm var} \left( \sum_{i = 1}^n \epsilon_i \right)
= \sum_{i = 1}^n \sum_{j = 1}^n {\rm cov} (\epsilon_i, \epsilon_j)\\
&= \sum_{i = 1}^n {\rm cov} (\epsilon_i, \epsilon_i)
= \sum_{i = 1}^n {\rm var} (\epsilon_i)\\
&= \sum_{i = 1}^n {\rm var} (\beta_0 + \beta_1 X_i + \epsilon_i)
= \sum_{i = 1}^n {\rm var} (Y_i).\\
\end{align}
The 4th equality holds as ${\rm cov} (\epsilon_i, \epsilon_j) = 0$ for $i \neq j$ by the independence of the $\epsilon_i$.
|
How do I calculate the variance of the OLS estimator $\beta_0$, conditional on $x_1, \ldots , x_n$?
This is a self-study question, so I provide hints that will hopefully help to find the solution, and I'll edit the answer based on your feedbacks/progress.
The parameter estimates that minimize the su
|
11,181
|
How do I calculate the variance of the OLS estimator $\beta_0$, conditional on $x_1, \ldots , x_n$?
|
I got it! Well, with help. I found the part of the book that gives steps to work through when proving the $Var \left( \hat{\beta}_0 \right)$ formula (thankfully it doesn't actually work them out, otherwise I'd be tempted to not actually do the proof). I proved each separate step, and I think it worked.
I'm using the book's notation, which is:
$$
SST_x = \displaystyle\sum\limits_{i=1}^n (x_i - \bar{x})^2,
$$
and $u_i$ is the error term.
1) Show that $\hat{\beta}_1$ can be written as $\hat{\beta}_1 = \beta_1 + \displaystyle\sum\limits_{i=1}^n w_i u_i$ where $w_i = \frac{d_i}{SST_x}$ and $d_i = x_i - \bar{x}$.
This was easy because we know that
\begin{align}
\hat{\beta}_1 &= \beta_1 + \frac{\displaystyle\sum\limits_{i=1}^n (x_i - \bar{x}) u_i}{SST_x} \\
&= \beta_1 + \displaystyle\sum\limits_{i=1}^n \frac{d_i}{SST_x} u_i \\
&= \beta_1 + \displaystyle\sum\limits_{i=1}^n w_i u_i
\end{align}
2) Use part 1, along with $\displaystyle\sum\limits_{i=1}^n w_i = 0$ to show that $\hat{\beta_1}$ and $\bar{u}$ are uncorrelated, i.e. show that $E[(\hat{\beta_1}-\beta_1) \bar{u}] = 0$.
\begin{align}
E[(\hat{\beta_1}-\beta_1) \bar{u}] &= E[\bar{u}\displaystyle\sum\limits_{i=1}^n w_i u_i] \\
&=\displaystyle\sum\limits_{i=1}^n E[w_i \bar{u} u_i] \\
&=\displaystyle\sum\limits_{i=1}^n w_i E[\bar{u} u_i] \\
&= \frac{1}{n}\displaystyle\sum\limits_{i=1}^n w_i E\left(u_i\displaystyle\sum\limits_{j=1}^n u_j\right) \\
&= \frac{1}{n}\displaystyle\sum\limits_{i=1}^n w_i \left[E\left(u_i u_1\right) +\cdots + E(u_i u_j) + \cdots+ E\left(u_i u_n \right)\right] \\
\end{align}
and because the $u$ are i.i.d., $E(u_i u_j) = E(u_i) E(u_j)$ when $ j \neq i$.
When $j = i$, $E(u_i u_j) = E(u_i^2)$, so we have:
\begin{align}
&= \frac{1}{n}\displaystyle\sum\limits_{i=1}^n w_i \left[E(u_i) E(u_1) +\cdots + E(u_i^2) + \cdots + E(u_i) E(u_n)\right] \\
&= \frac{1}{n}\displaystyle\sum\limits_{i=1}^n w_i E(u_i^2) \\
&= \frac{1}{n}\displaystyle\sum\limits_{i=1}^n w_i \left[Var(u_i) + E(u_i) E(u_i)\right] \\
&= \frac{1}{n}\displaystyle\sum\limits_{i=1}^n w_i \sigma^2 \\
&= \frac{\sigma^2}{n}\displaystyle\sum\limits_{i=1}^n w_i \\
&= \frac{\sigma^2}{n \cdot SST_x}\displaystyle\sum\limits_{i=1}^n (x_i - \bar{x}) \\
&= \frac{\sigma^2}{n \cdot SST_x} \left(0\right) \\
&= 0
\end{align}
3) Show that $\hat{\beta_0}$ can be written as $\hat{\beta_0} = \beta_0 + \bar{u} - \bar{x}(\hat{\beta_1} - \beta_1)$. This seemed pretty easy too:
\begin{align}
\hat{\beta_0} &= \bar{y} - \hat{\beta_1} \bar{x} \\
&= (\beta_0 + \beta_1 \bar{x} + \bar{u}) - \hat{\beta_1} \bar{x} \\
&= \beta_0 + \bar{u} - \bar{x}(\hat{\beta_1} - \beta_1).
\end{align}
4) Use parts 2 and 3 to show that $Var(\hat{\beta_0}) = \frac{\sigma^2}{n} + \frac{\sigma^2 (\bar{x}) ^2} {SST_x}$:
\begin{align}
Var(\hat{\beta_0}) &= Var(\beta_0 + \bar{u} - \bar{x}(\hat{\beta_1} - \beta_1)) \\
&= Var(\bar{u}) + (-\bar{x})^2 Var(\hat{\beta_1} - \beta_1) \\
&= \frac{\sigma^2}{n} + (\bar{x})^2 Var(\hat{\beta_1}) \\
&= \frac{\sigma^2}{n} + \frac{\sigma^2 (\bar{x}) ^2} {SST_x}.
\end{align}
I believe this all works because we proved that $\bar{u}$ and $\hat{\beta_1} - \beta_1$ are uncorrelated, so the covariance between them is zero and the variance of the sum is the sum of the variances. $\beta_0$ is just a constant, so it drops out, as does $\beta_1$ later in the calculations.
5) Use algebra and the fact that $\frac{SST_x}{n} = \frac{1}{n} \displaystyle\sum\limits_{i=1}^n x_i^2 - (\bar{x})^2$:
\begin{align}
Var(\hat{\beta_0}) &= \frac{\sigma^2}{n} + \frac{\sigma^2 (\bar{x}) ^2} {SST_x} \\
&= \frac{\sigma^2 SST_x}{SST_x n} + \frac{\sigma^2 (\bar{x})^2}{SST_x} \\
&= \frac{\sigma^2}{SST_x} \left( \frac{1}{n} \displaystyle\sum\limits_{i=1}^n x_i^2 - (\bar{x})^2 \right) + \frac{\sigma^2 (\bar{x})^2}{SST_x} \\
&= \frac{\sigma^2 n^{-1} \displaystyle\sum\limits_{i=1}^n x_i^2}{SST_x}
\end{align}
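As a quick sanity check of the final formula, here is a small simulation (the sample size, $\sigma$, and the coefficients below are made up for illustration). Holding $x$ fixed across replications, as the derivation is conditional on $x_1, \ldots, x_n$, the empirical variance of $\hat{\beta}_0$ should match $\sigma^2 n^{-1} \sum_i x_i^2 / SST_x$:

```r
# Monte Carlo check of Var(beta0_hat) = sigma^2 * (sum(x^2)/n) / SST_x.
# x is held fixed across replications because the derivation is
# conditional on x_1, ..., x_n. All parameter values are illustrative.
set.seed(1)
n <- 20; sigma <- 2
x <- runif(n, 0, 10)
SSTx <- sum((x - mean(x))^2)
b0_hat <- replicate(5000, {
  u <- rnorm(n, 0, sigma)
  y <- 1 + 0.5 * x + u          # beta0 = 1, beta1 = 0.5
  coef(lm(y ~ x))[1]
})
var(b0_hat)                     # empirical variance of beta0_hat
sigma^2 * mean(x^2) / SSTx      # the formula from part 5
```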
|
How do I calculate the variance of the OLS estimator $\beta_0$, conditional on $x_1, \ldots , x_n$?
|
11,182
|
use of weights in svyglm vs glm
|
There are lots of different sorts of weights and they get kind of confusing. You have to be pretty careful when you're using different functions or software that you're using the kind of weights you think you're using.
The svyglm function uses survey weights - these weight the importance of each case to make them representative (to each other, after twang). I'm not sure what weights does in glm() - I think they represent the precision of the measurements. (If you're using the binomial family, they have a different meaning.)
The survey weights (in svyglm) are the weights that you want, to give you the correct standard errors.
(There are also frequency weights, analytic weights, and importance weights).
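As a sketch of the difference (assuming the survey package is installed; the data are simulated for illustration only), the same weight column passed as glm() prior weights versus as svydesign() sampling weights gives identical point estimates but different standard errors:

```r
# Same weights, two interpretations: glm() treats them as precision
# weights, svyglm() as sampling (design) weights. Point estimates agree;
# the standard errors differ. Data are made up for illustration.
library(survey)
set.seed(1)
d <- data.frame(y = rnorm(100), x = rnorm(100), w = runif(100, 1, 5))
m_glm <- glm(y ~ x, data = d, weights = w)
des   <- svydesign(ids = ~1, weights = ~w, data = d)
m_svy <- svyglm(y ~ x, design = des)
cbind(coef(m_glm), coef(m_svy))       # identical point estimates
sqrt(diag(vcov(m_glm)))               # model-based SEs
sqrt(diag(vcov(m_svy)))               # design-based (sandwich-type) SEs
```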
|
11,183
|
use of weights in svyglm vs glm
|
survey computes the standard errors with consideration of the loss of precision introduced by sampling weights. Weights in glm simply adjust the weight given to the errors in the least squares estimation, so the standard errors aren't correct. Here's a selection from Lumley (2010):
In a model-based analysis it would be necessary to specify the random part of the model correctly to get correct standard errors, but all our standard error estimates are design-based and so are valid regardless of the model. It is worth noting that the “sandwich”, or “model-robust”, or “heteroskedasticity-consistent” standard errors sometimes used in model-based regression analysis are almost identical to the design-based standard errors we will use; the main difference being in the handling of stratification.
So without strata in your design, you will likely find that using sandwich will get you identical or near-identical SE estimates.
library(sandwich)
# glm11 is the weighted glm fitted in the question
coefs <- vcovHC(glm11, type = "HC0")
lmtest::coeftest(glm11, coefs)
In my test, they didn't match exactly when using "HC0" or "HC1", but they were very close. svyglm now reports a z-value instead of a t-value as well.
|
11,184
|
Proper scoring rule when there is a decision to make (e.g. spam vs ham email)
|
I guess I'm one of the "among others", so I'll chime in.
The short version: I'm afraid your example is a bit of a straw man, and I don't think we can learn a lot from it.
In the first case, yes, you can threshold your predictions at 0.50 to get a perfect classification. True. But we also see that your model is actually rather poor. Take item #127 in the spam group, and compare it to item #484 in the ham group. They have predicted probabilities of being spam of 0.49 and 0.51. (That's because I picked the largest prediction in the spam and the smallest prediction in the ham group.)
That is, for the model they are almost indistinguishable in terms of their likelihood of being spam. But they aren't! We know that the first one is practically certain to be spam, and the second one to be ham. "Practically certain" as in "we observed 1000 instances, and the cutoff always worked". Saying that the two instances are practically equally likely to be spam is a clear indication that our model doesn't really know what it is doing.
Thus, in the present case, the conversation should not be whether we should go with model 1 or with model 2, or whether we should decide between the two models based on accuracy or on the Brier score. Rather, we should be feeding both models' predictions to any standard third model, such as a standard logistic regression. This will transform the predictions from model 1 to extremely confident predictions that are essentially 0 and 1 and thus reflect the structure in the data much better. The Brier score of this meta-model will be much lower, on the order of zero. And in the same way, the predictions from model 2 will be transformed into predictions that are almost as good, but a little worse - with a Brier score that is somewhat higher. Now, the Brier score of the two meta-models will correctly reflect that the one based on (meta-)model 1 should be preferred.
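The meta-model idea can be sketched with made-up data mimicking hedged but perfectly separating predictions, as in the example: regress the outcome on the logit of the predictions (a Platt-type logistic recalibration) and the Brier score drops sharply.

```r
# Sketch of the meta-model idea (data invented to mimic hedged but
# perfectly separating predictions). A logistic regression of the
# outcome on the logit of the predictions pushes them toward 0 and 1;
# expect a separation warning from glm here.
set.seed(1)
p_hat <- runif(1000, 0.3, 0.7)         # hedged predictions
spam  <- p_hat > 0.5                   # outcome: perfectly separable
meta   <- glm(spam ~ qlogis(p_hat), family = binomial)
p_meta <- fitted(meta)                 # recalibrated predictions
mean((p_hat  - spam)^2)                # Brier score before
mean((p_meta - spam)^2)                # Brier score after: much lower
```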
And of course, the final decision will likely need to use some kind of threshold. Depending on the costs of type I and II errors, the cost-optimal threshold might well be different from 0.5 (except, of course, in the present example). After all, as you write, it may be much more costly to misclassify ham as spam than vice versa. But as I write elsewhere, a cost optimal decision might also well include more than one threshold! Quite possibly, a very low predicted spam probability might have the mail sent to your inbox directly, while a very high predicted probability might have it filtered at the mail server without you ever seeing it - but probabilities in between might mean that a [SUSPECTED SPAM] might be inserted in the subject, and the mail would still be sent to your inbox. Accuracy as an evaluation measure fails here, unless we start looking at separate accuracy for the multiple buckets, but in the end, all the "in between" mails will be classified as one or the other, and shouldn't they have been sent to the correct bucket in the first place? Proper scoring rules, on the other hand, can help you calibrate your probabilistic predictions.
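A two-threshold routing rule of the kind described might look like the following sketch (the thresholds and the function name are invented for illustration):

```r
# Hypothetical two-threshold decision rule: low-probability mail goes
# straight to the inbox, high-probability mail is filtered at the
# server, and everything in between is tagged [SUSPECTED SPAM].
route_mail <- function(p_spam, t_low = 0.2, t_high = 0.95) {
  cut(p_spam, breaks = c(-Inf, t_low, t_high, Inf),
      labels = c("inbox", "inbox [SUSPECTED SPAM]", "filter at server"))
}
route_mail(c(0.05, 0.50, 0.99))
```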
To be honest, I don't think deterministic examples like the one you give here are very useful. If we know what is happening, then we wouldn't be doing probabilistic classification/prediction in the first place, after all. So I would argue for probabilistic examples. Here is one such. I'll generate 1,000 true underlying probabilities as uniformly distributed on $[0,1]$, then generate actuals according to this probability. Now we don't have the perfect separation that I'm arguing fogs up the example above.
set.seed(2020)
nn <- 1000
true_probabilities <- runif(nn)
actuals <- runif(nn)<true_probabilities
library(beanplot)
beanplot(true_probabilities~actuals,
horizontal=TRUE,what=c(0,1,0,0),border=NA,col="lightgray",las=1,
xlab="True probability")
points(true_probabilities,actuals+1+runif(nn,-0.3,0.3),pch=19,cex=0.6)
Now, if we have the true probabilities, we can use cost-based thresholds as above. But typically, we will not know these true probabilities, but we may need to decide between competing models that each output such probabilities. I would argue that searching for a model that gets as close as possible to these true probabilities is worthwhile, because, for instance, if we have a biased understanding of the true probabilities, any resources we invest in changing the process (e.g., in medical applications: screening, inoculation, propagating lifestyle changes, ...) or in understanding it better may be misallocated. Put differently: working with accuracy and a threshold means that we don't care at all whether we predict a probability $\hat{p}_1$ or $\hat{p}_2$ as long as it's above the threshold, $\hat{p}_i>t$ (and vice versa below $t$), so we have zero incentive in understanding and investigating which instances we are unsure about, just as long as we get them to the correct side of the threshold.
Let's look at a couple of miscalibrated predicted probabilities. Specifically, for the true probabilities $p$, we can look at power transforms $\hat{p}_x:=p^x$ for some exponent $x>0$. This is a monotone transformation, so any thresholds we would like to use based on $p$ can also be transformed for use with $\hat{p}_x$. Or, starting from $\hat{p}_x$ and not knowing $p$, we can optimize thresholds $\hat{t}_x$ to get the exact same accuracies for $(\hat{p}_x,\hat{t}_x)$ as for $(\hat{p}_y,\hat{t}_y)$, because of the monotonicity. This means that accuracy is of no use whatsoever in our search for the true probabilities, which correspond to $x=1$! However (drum roll), proper scoring rules like the Brier or the log score will indeed be optimized in expectation by the correct $x=1$.
brier_score <- function(probs,actuals) mean(c((1-probs)[actuals]^2,probs[!actuals]^2))
log_score <- function(probs,actuals) mean(c(-log(probs[actuals]),-log((1-probs)[!actuals])))
exponents <- 10^seq(-1,1,by=0.1)
brier_scores <- log_scores <- rep(NA,length(exponents))
for ( ii in seq_along(exponents) ) {
brier_scores[ii] <- brier_score(true_probabilities^exponents[ii],actuals)
log_scores[ii] <- log_score(true_probabilities^exponents[ii],actuals)
}
plot(exponents,brier_scores,log="x",type="o",xlab="Exponent",main="Brier score",ylab="")
plot(exponents,log_scores,log="x",type="o",xlab="Exponent",main="Log score",ylab="")
|
11,185
|
Proper scoring rule when there is a decision to make (e.g. spam vs ham email)
|
I think it is worth making the distinction between performance evaluation and model selection criteria.
In performance evaluation you want to know how well your system is likely to perform in operation, based on the data you have available now. When evaluating performance, you need to use the metric that is appropriate for your task. So if you have a classification problem (i.e. you do need to make a hard "yes"/"no" decision) and the false-positive and false-negative costs are the same, then classification accuracy is an appropriate performance evaluation criterion. Obviously you would probably want to evaluate the uncertainty of the performance estimate, perhaps using bootstrap replication.
For a model selection criterion, on the other hand, where we want to choose between competing methods, accuracy might not be a good criterion, even if it is the primary quantity of interest. The problem is that accuracy is brittle: if you ran the experiment again with a different sample of training data, you might get a slightly different decision boundary that gave a very different accuracy on the same test data. In this case, because the margins on the first classifier are quite small, it is more likely that the decision boundary will shift if it is trained on a different sample of data than the second one. Likewise, if the classifier is evaluated on a different set of test data, the first classifier is more likely to make mistakes than the second. As the margins are small, the new test data has to be only slightly different from the old data near the decision boundary for there to be a misclassification. Thus the Brier score is rewarding the second classifier for its greater stability. This is important as we want a classifier that will perform well in operation, not just on this particular test set.
I use least-squares support vector machine (and kernel logistic regression) a fair bit, and you need a model selection criterion for tuning the hyper-parameters. The obvious thing to do is to use cross-validation accuracy as the selection criterion for problems where accuracy is the quantity of primary interest. However it is generally better to use the cross-validated Brier score (or equivalently PRESS criterion in my case), which is less brittle. You can get better accuracy from the final model by using a proper scoring rule as the model selection criterion.
I learned this from taking part in a valuable "Performance Prediction Challenge" hosted at a conference, where you had to provide not just the predictions, but also an estimate of how accurate (balanced accuracy in this case) your model would be on the test data. So this is based on experimentation and experience - challenges like these are a good place to find out what actually works and what doesn't as there are no straw men or "operator bias" issues in the evaluation. The paper is here FWIW (preprint here).
|
11,186
|
Statistical tables in old books purposefully wrong?
|
The Wikipedia article "Fictitious entry", which is on the more general subject of "deliberately incorrect entries in reference works", cites one example of something close to this:
By including a trivial piece of false information in a larger work, it is easier to demonstrate subsequent plagiarism if the fictitious entry is copied along with other material. An admission of this motive appears in the preface to Chambers's 1964 mathematical tables: "those [errors] that are known to exist form an uncomfortable trap for any would-be plagiarist".
The citation is to page vi of:
Comrie, L. J. (1964). Chambers's shorter six-figure mathematical tables. Edinburgh: W. & R. Chambers.
|
11,187
|
Distribution of the largest fragment of a broken stick (spacings)
|
With the information given by @Glen_b, I could find the answer. Using the same notation as in the question,
$$
P(Z_k \leq x) = \sum_{j=0}^{k+1} { k+1 \choose j } (-1)^j (1-jx)_+^k,
$$
where $a_+ = a$ if $a > 0$ and $0$ otherwise. I also give the expectation and the asymptotic convergence to the Gumbel (NB: not Beta) distribution
$$
E(Z_k)= \frac{1}{k+1}\sum_{i=1}^{k+1}\frac{1}{i} \sim \frac{\log(k+1)}{k+1}, \\
P(Z_k \leq x) \sim \exp\left(- e^{-(k+1)x + \log(k+1)} \right).
$$
The material of the proofs is taken from several publications linked in the references. They are somewhat lengthy, but straightforward.
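Before the proofs, the closed-form CDF can be sanity-checked by simulation. The following is a minimal Python sketch (the choice $k=5$, the grid of $x$ values, and the number of replications are arbitrary): it breaks the stick at $k$ uniform points and compares the empirical CDF of the largest fragment with the inclusion-exclusion formula above.

```python
# Monte Carlo check of the closed-form CDF of the largest spacing Z_k,
# assuming k IID uniform breakpoints on (0, 1). Parameters are arbitrary.
from math import comb
import numpy as np

def cdf_max_spacing(x, k):
    """P(Z_k <= x) via the inclusion-exclusion formula above."""
    return sum(comb(k + 1, j) * (-1) ** j * max(1 - j * x, 0.0) ** k
               for j in range(k + 2))

def simulate_max_spacing(k, n_rep, rng):
    """Largest spacing of k uniform breakpoints, n_rep replications."""
    u = np.sort(rng.random((n_rep, k)), axis=1)
    pts = np.hstack([np.zeros((n_rep, 1)), u, np.ones((n_rep, 1))])
    return np.diff(pts, axis=1).max(axis=1)

rng = np.random.default_rng(0)
k = 5
z = simulate_max_spacing(k, 200_000, rng)
for x in (0.3, 0.5, 0.7):
    print(x, round(cdf_max_spacing(x, k), 3), round(float((z <= x).mean()), 3))
```

The exact and empirical values agree to Monte Carlo precision; note also that the formula returns $0$ below the support boundary $x = 1/(k+1)$ and $1$ at $x = 1$, as it must.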
1. Proof of the exact distribution
Let $(U_1, \ldots, U_k)$ be IID uniform random variables in the interval $(0,1)$. By ordering them, we obtain the $k$ order statistics denoted $(U_{(1)}, \ldots, U_{(k)})$. The uniform spacings are defined as $\Delta_i = U_{(i)} - U_{(i-1)}$, with $U_{(0)} = 0$ and $U_{(k+1)} = 1$. The ordered spacings are the corresponding ordered statistics $\Delta_{(1)} \leq \ldots \leq \Delta_{(k+1)}$. The variable of interest is $\Delta_{(k+1)}$.
For fixed $x \in (0,1)$, we define the indicator variable $\mathbb{1}_i = \mathbb{1}_{\{\Delta_i > x\}}$. By symmetry, the random vector $(\mathbb{1}_1, \ldots, \mathbb{1}_{k+1})$ is exchangeable, so the joint distribution of a subset of size $j$ is the same as the joint distribution of the first $j$. By expanding the product, we thus obtain
$$
P(\Delta_{(k+1)} \leq x)
= E \left( \prod_{i=1}^{k+1} (1 - \mathbb{1}_i) \right)
= 1 + \sum_{j=1}^{k+1} { k+1 \choose j } (-1)^j
E \left( \prod_{i=1}^j \mathbb{1}_i \right).
$$
We will now prove that $E \left( \prod_{i=1}^j \mathbb{1}_i \right) = (1-jx)_+^k$, which will establish the distribution given above. We prove this for $j=2$, as the general case is proved similarly.
$$
E \left( \prod_{i=1}^2 \mathbb{1}_i \right)
= P(\Delta_1 > x \cap \Delta_2 > x)
= P(\Delta_1 > x) P(\Delta_2 > x | \Delta_1 > x).
$$
If $\Delta_1 > x$, the $k$ breakpoints are in the interval $(x,1)$. Conditionally on this event, the breakpoints are still exchangeable, so the probability that the distance between the second and the first breakpoint is greater than $x$ is the same as the probability that the distance between the first breakpoint and the left barrier (at position $x$) is greater than $x$. So
$$
P(\Delta_2 > x | \Delta_1 > x) = P\big(\text{all points are in } (2x,1) \big| \text{all points are in } (x,1)\big), \; \text{so} \\
P(\Delta_2 > x \cap \Delta_1 > x) = P\big(\text{all points are in } (2x,1)\big) = (1-2x)_+^k.
$$
2. Expectation
For a nonnegative random variable bounded by $1$, we have
$$
E(X) = \int_0^1 P(X > x)\,dx = 1 - \int_0^1 P(X \leq x)\,dx.
$$
Integrating the distribution of $\Delta_{(k+1)}$, we obtain
$$
E\left(\Delta_{(k+1)}\right)
= \frac{1}{k+1}\sum_{j=1}^{k+1}{k+1 \choose j}\frac{(-1)^{j+1}}{j}
= \frac{1}{k+1}\sum_{j=1}^{k+1}\frac{1}{j}.
$$
The last equality is a classic representation of harmonic numbers $H_i = 1+ \frac{1}{2}+ \ldots + \frac{1}{i}$, which we demonstrate below.
$$
H_{k+1} = \int_0^1 1 + x + \ldots + x^k dx
= \int_0^1 \frac{1-x^{k+1}}{1-x}dx.
$$
With the change of variable $u = 1-x$ and expanding the product, we obtain
$$
H_{k+1} = \int_0^1\sum_{j=1}^{k+1}{ k+1 \choose j }(-1)^{j+1}u^{j-1}du
= \sum_{j=1}^{k+1}{k+1 \choose j}\frac{(-1)^{j+1}}{j}.
$$
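The alternating-sum representation of harmonic numbers used above can be confirmed with exact rational arithmetic. A minimal Python sketch (the range of $n$ checked is arbitrary):

```python
# Exact check of H_n = sum_{j=1}^{n} C(n, j) (-1)^(j+1) / j
# using rational arithmetic, so there is no floating-point error.
from fractions import Fraction
from math import comb

def harmonic(n):
    """Harmonic number H_n as an exact Fraction."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

def alt_sum(n):
    """Alternating binomial sum from the derivation above."""
    return sum(Fraction(comb(n, j) * (-1) ** (j + 1), j)
               for j in range(1, n + 1))

for n in range(1, 12):
    assert harmonic(n) == alt_sum(n)
print(harmonic(100) == alt_sum(100))  # True
```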
3. Alternative construction of uniform spacings
In order to obtain the asymptotic distribution of the largest fragment, we will need to exhibit a classical construction of uniform spacings as exponential variables divided by their sum. The probability density of the associated order statistics $(U_{(1)}, \ldots, U_{(k)})$ is
$$
f_{U_{(1)}, \ldots, U_{(k)}}(u_{(1)}, \ldots, u_{(k)}) = k!, \;
0 \leq u_{(1)} \leq \ldots \leq u_{(k)} \leq 1.
$$
If we denote the uniform spacings $\Delta_i = U_{(i)} - U_{(i-1)}$, with $U_{(0)} = 0$, we obtain
$$
f_{\Delta_1, \ldots, \Delta_k}(\delta_1, \ldots, \delta_k) = k!,
\; \delta_i \geq 0, \; \delta_1 + \ldots + \delta_k \leq 1.
$$
By defining $U_{(k+1)} = 1$, we thus obtain
$$
f_{\Delta_1, \ldots, \Delta_{k+1}}(\delta_1, \ldots, \delta_{k+1}) = k!,
\; \delta_1 + \ldots + \delta_{k+1} = 1.
$$
Now, let $(X_1, \ldots, X_{k+1})$ be IID exponential random variables with mean 1, and let $S = X_1 + \ldots + X_{k+1}$. With a simple change of variable, we can see that
$$f_{X_1, \ldots X_k, S}(x_1, \ldots, x_k, s) = e^{-s}.$$
Define $Y_i = X_i/S$, such that by a change of variable we obtain
$$f_{Y_1, \ldots Y_k, S}(y_1, \ldots, y_k, s) = s^k e^{-s}.$$
Integrating this density with respect to $s$, we thus obtain
$$
f_{Y_1, \ldots, Y_k}(y_1, \ldots, y_k) =
\int_0^{\infty}s^k e^{-s}ds = k!,
\; y_i \geq 0, \; y_1 + \ldots + y_k \leq 1, \; \text{and thus} \\
f_{Y_1, \ldots, Y_{k+1}}(y_1, \ldots, y_{k+1}) = k!,
\; y_1 + \ldots + y_{k+1} = 1.
$$
So the joint distribution of $k+1$ uniform spacings on the interval $(0,1)$ is the same as the joint distribution of $k+1$ exponential random variables divided by their sum. We come to the following equivalence in distribution
$$
\Delta_{(k+1)} \equiv \frac{X_{(k+1)}}{X_1 + \ldots + X_{k+1}}.
$$
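This distributional equivalence is easy to check by simulation. Here is a minimal Python sketch (the choices $k=9$ and the replication count are arbitrary): it draws exponentials, normalizes the maximum by the sum, and compares the sample mean with the exact expectation $H_{k+1}/(k+1)$ from section 2.

```python
# Simulation of the exponential construction: the mean of X_(k+1) / S
# should match the exact expectation H_{k+1} / (k+1) of the largest spacing.
import numpy as np

rng = np.random.default_rng(1)
k = 9
x = rng.exponential(size=(200_000, k + 1))
max_ratio = x.max(axis=1) / x.sum(axis=1)               # X_(k+1) / S
exact = sum(1.0 / i for i in range(1, k + 2)) / (k + 1)  # H_{k+1} / (k+1)
print(round(float(max_ratio.mean()), 4), round(exact, 4))
```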
4. Asymptotic distribution
Using the equivalence above, we obtain
$$
\begin{align}
P\big((k+1)\Delta_{(k+1)} - \log(k+1) \leq x\big)
&= P\left(X_{(k+1)} \leq (x + \log(k+1))\frac{X_1 + \ldots + X_{k+1}}{k+1}\right) \\
&= P\left(X_{(k+1)} - \log(k+1) \leq x + (x + \log(k+1))T_{k+1}\right),
\end{align}
$$
where $T_{k+1} = \frac{X_1+\ldots+X_{k+1}}{k+1} -1$. This variable vanishes in probability because $E\left(T_{k+1}\right) = 0$ and $Var\big(\log(k+1)T_{k+1}\big) = \frac{(\log(k+1))^2}{k+1} \downarrow 0$. Asymptotically, the distribution is the same as that of $X_{(k+1)} - \log(k+1)$. Because the $X_i$ are IID, we have
$$
\begin{align}
P\left(X_{(k+1)} - \log(k+1) \leq x \right)
&= P\left(X_1 \leq x + \log(k+1)\right)^{k+1} \\
&= \left(1-e^{-x - \log(k+1)}\right)^{k+1} = \left(1-\frac{e^{-x}}{k+1}\right)^{k+1} \sim \exp\left\{-e^{-x}\right\}.
\end{align}
$$
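One way to gauge the speed of this convergence is to evaluate the exact CDF at points matched to the Gumbel scaling. The following Python sketch (the values of $k$ and the choice $x=1$ are arbitrary) computes the exact probability at $z = (1 + \log(k+1))/(k+1)$, where the Gumbel limit gives $\exp(-e^{-1}) \approx 0.692$, and prints the gap:

```python
# Gap between the exact CDF of Z_k and its Gumbel limit at x = 1,
# for a few (arbitrary) values of k; the gap should shrink as k grows.
from math import comb, exp, log

def cdf_max_spacing(z, k):
    """Exact P(Z_k <= z) from the inclusion-exclusion formula."""
    return sum(comb(k + 1, j) * (-1) ** j * max(1 - j * z, 0.0) ** k
               for j in range(k + 2))

gumbel_limit = exp(-exp(-1.0))           # limiting value at x = 1
gaps = {}
for k in (10, 50, 200):
    z = (1.0 + log(k + 1)) / (k + 1)     # point where (k+1)z - log(k+1) = 1
    gaps[k] = cdf_max_spacing(z, k) - gumbel_limit
    print(k, round(gaps[k], 4))
```

The gap decreases monotonically over these values of $k$, consistent with the statement below that the approximation becomes acceptable around $k \approx 50$.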
5. Graphical overview
The plot below shows the distribution of the largest fragment for different values of $k$. For $k=10, 20, 50$, I have also overlaid the asymptotic Gumbel distribution (thin line). The Gumbel is a very bad approximation for small values of $k$, so I omit it there so as not to overload the picture. The Gumbel approximation is good from $k \approx 50$.
6. References
The proofs above are taken from references 2 and 3. The cited literature contains many more results, such as the distribution of the ordered spacings of any rank, their limit distribution and some alternative constructions of the ordered uniform spacings. The key references are not easily accessible, so I also provide links to the full text.
Bairamov et al. (2010). Limit results for ordered uniform spacings. Statistical Papers, 51(1), pp. 227–240.
Holst, L. (1980). On the lengths of the pieces of a stick broken at random. Journal of Applied Probability, 17, pp. 623–634.
Pyke, R. (1965). Spacings. Journal of the Royal Statistical Society, Series B, 27(3), pp. 395–449.
Rényi, A. (1953). On the theory of order statistics. Acta Mathematica Hungarica, 4, pp. 191–231.
|
11,188
|
Distribution of the largest fragment of a broken stick (spacings)
|
This is not a complete answer, but I did some quick simulations, and this is what I obtained:
This looks remarkably Beta-ish, and this makes a bit of sense, since the order statistics of i.i.d. uniform distributions are Beta-distributed (see the Wikipedia article on order statistics).
This might give some starting point to derive the resulting p.d.f.
I'll update if I get to a final closed solution.
Cheers!
|
11,189
|
Distribution of the largest fragment of a broken stick (spacings)
|
I produced the answer for a conference in Siena (Italy) in 2005. The paper (2006) is presented on my web-site here (pdf). The exact distributions of all the spacings (smallest to largest) are found on pages 75 & 76.
I'm hoping to give a presentation on this topic at the RSS Conference in Manchester (England) in September 2016.
|
11,190
|
Analogy of Pearson correlation for 3 variables
|
It is indeed something. To find out, we need to examine what we know about correlation itself.
The correlation matrix of a vector-valued random variable $\mathbf{X}=(X_1,X_2,\ldots,X_p)$ is the variance-covariance matrix, or simply "variance," of the standardized version of $\mathbf{X}$. That is, each $X_i$ is replaced by its recentered, rescaled version.
The covariance of $X_i$ and $X_j$ is the expectation of the product of their centered versions. That is, writing $X^\prime_i = X_i - E[X_i]$ and $X^\prime_j = X_j - E[X_j]$, we have
$$\operatorname{Cov}(X_i,X_j) = E[X^\prime_i X^\prime_j].$$
The variance of $\mathbf{X}$, which I will write $\operatorname{Var}(\mathbf{X})$, is not a single number. It is the array of values $$\operatorname{Var}(\mathbf{X})_{ij}=\operatorname{Cov}(X_i,X_j).$$
The way to think of the covariance for the intended generalization is to consider it a tensor. That means it's an entire collection of quantities $v_{ij}$, indexed by $i$ and $j$ ranging from $1$ through $p$, whose values change in a particularly simple predictable way when $\mathbf{X}$ undergoes a linear transformation. Specifically, let $\mathbf{Y}=(Y_1,Y_2,\ldots,Y_q)$ be another vector-valued random variable defined by
$$Y_i = \sum_{j=1}^p a_i^{\,j}X_j.$$
The constants $a_i^{\,j}$ ($i$ and $j$ are indexes--$j$ is not a power) form a $q\times p$ array $\mathbb{A} = (a_i^{\,j})$, $j=1,\ldots, p$ and $i=1,\ldots, q$. The linearity of expectation implies
$$\operatorname{Var}(\mathbf Y)_{ij} = \sum_{k,l} a_i^{\,k}a_j^{\,l}\operatorname{Var}(\mathbf X)_{kl} .$$
In matrix notation,
$$\operatorname{Var}(\mathbf Y) = \mathbb{A}\operatorname{Var}(\mathbf X) \mathbb{A}^\prime .$$
All the components of $\operatorname{Var}(\mathbf{X})$ actually are univariate variances, due to the Polarization Identity
$$4\operatorname{Cov}(X_i,X_j) = \operatorname{Var}(X_i+X_j) - \operatorname{Var}(X_i-X_j).$$
This tells us that if you understand variances of univariate random variables, you already understand covariances of bivariate variables: they are "just" linear combinations of variances.
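A quick numeric illustration (a Python sketch; the sample and the $0.5$ mixing coefficient are arbitrary) shows the identity also holds exactly for sample moments, provided the same denominator convention is used on both sides:

```python
# Numeric check of the Polarization Identity
#   4 Cov(X, Y) = Var(X + Y) - Var(X - Y)
# on an arbitrary correlated data set (sample moments, matching ddof).
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
y = 0.5 * x + rng.standard_normal(1000)   # correlated with x by construction
lhs = 4 * np.cov(x, y)[0, 1]              # np.cov uses ddof=1 by default
rhs = np.var(x + y, ddof=1) - np.var(x - y, ddof=1)
print(bool(np.isclose(lhs, rhs)))  # True
```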
The expression in the question is perfectly analogous: the variables $X_i$ have been standardized as described above. We can understand what it represents by considering what it means for any variable, standardized or not. We would replace each $X_i$ by its centered version, as before, and form quantities having three indexes,
$$\mu_3(\mathbf{X})_{ijk} = E[X_i^\prime X_j^\prime X_k^\prime].$$
These are the central (multivariate) moments of degree $3$. As with the variance, they form a tensor: when $\mathbf{Y} = \mathbb{A}\mathbf{X}$, then
$$\mu_3(\mathbf{Y})_{ijk} = \sum_{l,m,n} a_i^{\,l}a_j^{\,m}a_k^{\,n} \mu_3(\mathbf{X})_{lmn}.$$
The indexes in this triple sum range over all combinations of integers from $1$ through $p$.
The analog of the Polarization Identity is
$$\eqalign{&24\mu_3(\mathbf{X})_{ijk} = \\ &\mu_3(X_i+X_j+X_k) - \mu_3(X_i-X_j+X_k) - \mu_3(X_i+X_j-X_k) + \mu_3(X_i-X_j-X_k).}$$
On the right hand side, $\mu_3$ refers to the (univariate) central third moment: the expected value of the cube of the centered variable. When the variables are standardized, this moment is usually called the skewness. Accordingly, we may think of $\mu_3(\mathbf{X})$ as being the multivariate skewness of $\mathbf{X}$. It is a tensor of rank three (that is, with three indices) whose values are linear combinations of the skewnesses of various sums and differences of the $X_i$. If we were to seek interpretations, then, we would think of these components as measuring in $p$ dimensions whatever the skewness is measuring in one dimension. In many cases,
The first moments measure the location of a distribution;
The second moments (the variance-covariance matrix) measure its spread;
The standardized second moments (the correlations) indicate how the spread varies in $p$-dimensional space; and
The standardized third and fourth moments are taken to measure the shape of a distribution relative to its spread.
To elaborate on what a multidimensional "shape" might mean, observe that we can understand principal component analysis (PCA) as a mechanism to reduce any multivariate distribution to a standard version located at the origin and equal spreads in all directions. After PCA is performed, then, $\mu_3$ would provide the simplest indicators of the multidimensional shape of the distribution. These ideas apply equally well to data as to random variables, because data can always be analyzed in terms of their empirical distribution.
Reference
Alan Stuart & J. Keith Ord, Kendall's Advanced Theory of Statistics Fifth Edition, Volume 1: Distribution Theory; Chapter 3, Moments and Cumulants. Oxford University Press (1987).
Appendix: Proof of the Generalized Polarization Identity
Let $x_1,\ldots, x_n$ be algebraic variables. There are $2^n$ ways to add and subtract all $n$ of them. When we raise each of these sums-and-differences to the $n^\text{th}$ power, pick a suitable sign for each of those results, and add them up, we will get a multiple of $x_1x_2\cdots x_n$.
More formally, let $S=\{1,-1\}^n$ be the set of all $n$-tuples of $\pm 1$, so that any element $s\in S$ is a vector $s=(s_1,s_2,\ldots,s_n)$ whose coefficients are all $\pm 1$. The claim is
$$2^n n!\, x_1x_2\cdots x_n = \sum_{s\in S} \color{red}{s_1s_2\cdots s_n}(s_1x_1+s_2x_2+\cdots+s_nx_n)^n.\tag{1}$$
A prettier way of writing this equality helps explain the factor of $2^n n!$ that appears: upon dividing by $2^n$ we obtain the average of the terms on the right side (since $S$ has $|S|=2^n$ elements) and the $n!$ counts the distinct ways to form the monomial $x_1\cdots x_n$ from products of its components--namely, it counts the elements of the symmetric group $\mathfrak{S}^n.$ Thus, upon abbreviating $s_1s_2\cdots s_n=\chi(\mathbf s)$ and letting $\mathbf{s}\cdot \mathbf{x} = s_1x_1+ \cdots + s_nx_n$ be the (usual) dot product of vectors,
$$\sum_{\sigma\in\mathfrak{S}^n} x_{\sigma(1)}x_{\sigma(2)}\cdots x_{\sigma(n)} = \frac{1}{|S|}\sum_{\mathbf s\in S} \color{red}{\chi(\mathbf s)}(\mathbf{s}\cdot \mathbf{x} )^n.\tag{2}$$
Indeed, the Multinomial Theorem states that the coefficient of the monomial $x_1^{i_1}x_2^{i_2}\cdots x_n^{i_n}$ (where the $i_j$ are nonnegative integers summing to $n$) in the expansion of any term on the right hand side is
$$\binom{n}{i_1,i_2,\ldots,i_n}s_1^{i_1}s_2^{i_2}\cdots s_n^{i_n}.$$
In the sum $(1)$, the coefficients involving $x_1^{i_1}$ appear in pairs where one of each pair involves the case $s_1=1$, with coefficient proportional to $ \color{red}{s_1}$ times $s_1^{i_1}$, equal to $1$, and the other of each pair involves the case $s_1=-1$, with coefficient proportional to $\color{red}{-1}$ times $(-1)^{i_1}$, equal to $(-1)^{i_1+1}$. They cancel in the sum whenever $i_1+1$ is odd. The same argument applies to $i_2, \ldots, i_n$. Consequently, the only monomials that occur with nonzero coefficients must have odd powers of all the $x_i$. The only such monomial is $x_1x_2\cdots x_n$. It appears with coefficient $\binom{n}{1,1,\ldots,1}=n!$ in all $2^n$ terms of the sum. Consequently its coefficient is $2^nn!$, QED.
We need to take only half of each pair associated with $x_1$: that is, we can restrict the right hand side of $(1)$ to the terms with $s_1=1$ and halve the coefficient on the left hand side to $2^{n-1}n!$. That gives precisely the two versions of the Polarization Identity quoted in this answer for the cases $n=2$ and $n=3$: $2^{2-1}2! = 4$ and $2^{3-1}3!=24$.
Of course the Polarization Identity for algebraic variables immediately implies it for random variables: let each $x_i$ be a random variable $X_i$. Take expectations of both sides. The result follows by linearity of expectation.
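Identity $(1)$ can be checked by brute force for small $n$. A minimal Python sketch (exact integer arithmetic on arbitrary random integer inputs):

```python
# Brute-force check of 2^n n! x_1...x_n = sum over sign vectors s of
# chi(s) (s . x)^n, for a few small n, using exact integer arithmetic.
from itertools import product
from math import factorial, prod
import random

random.seed(3)
for n in (2, 3, 4):
    xs = [random.randint(-9, 9) for _ in range(n)]
    rhs = sum(prod(s) * sum(si * xi for si, xi in zip(s, xs)) ** n
              for s in product((1, -1), repeat=n))
    assert rhs == 2 ** n * factorial(n) * prod(xs)
print("identity verified for n = 2, 3, 4")
```

For instance, with $n=3$ and $(x_1,x_2,x_3)=(1,2,3)$ the signed sum evaluates to $2^3\,3!\,(1\cdot 2\cdot 3) = 288$.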
|
Analogy of Pearson correlation for 3 variables
|
It is indeed something. To find out, we need to examine what we know about correlation itself.
The correlation matrix of a vector-valued random variable $\mathbf{X}=(X_1,X_2,\ldots,X_p)$ is the vari
|
Analogy of Pearson correlation for 3 variables
It is indeed something. To find out, we need to examine what we know about correlation itself.
The correlation matrix of a vector-valued random variable $\mathbf{X}=(X_1,X_2,\ldots,X_p)$ is the variance-covariance matrix, or simply "variance," of the standardized version of $\mathbf{X}$. That is, each $X_i$ is replaced by its recentered, rescaled version.
The covariance of $X_i$ and $X_j$ is the expectation of the product of their centered versions. That is, writing $X^\prime_i = X_i - E[X_i]$ and $X^\prime_j = X_j - E[X_j]$, we have
$$\operatorname{Cov}(X_i,X_j) = E[X^\prime_i X^\prime_j].$$
The variance of $\mathbf{X}$, which I will write $\operatorname{Var}(\mathbf{X})$, is not a single number. It is the array of values $$\operatorname{Var}(\mathbf{X})_{ij}=\operatorname{Cov}(X_i,X_j).$$
The way to think of the covariance for the intended generalization is to consider it a tensor. That means it's an entire collection of quantities $v_{ij}$, indexed by $i$ and $j$ ranging from $1$ through $p$, whose values change in a particularly simple predictable way when $\mathbf{X}$ undergoes a linear transformation. Specifically, let $\mathbf{Y}=(Y_1,Y_2,\ldots,Y_q)$ be another vector-valued random variable defined by
$$Y_i = \sum_{j=1}^p a_i^{\,j}X_j.$$
The constants $a_i^{\,j}$ ($i$ and $j$ are indexes--$j$ is not a power) form a $q\times p$ array $\mathbb{A} = (a_i^{\,j})$, $j=1,\ldots, p$ and $i=1,\ldots, q$. The linearity of expectation implies
$$\operatorname{Var}(\mathbf Y)_{ij} = \sum a_i^{\,k}a_j^{\,l}\operatorname{Var}(\mathbf X)_{kl} .$$
In matrix notation,
$$\operatorname{Var}(\mathbf Y) = \mathbb{A}\operatorname{Var}(\mathbf X) \mathbb{A}^\prime .$$
All the components of $\operatorname{Var}(\mathbf{X})$ actually are univariate variances, due to the Polarization Identity
$$4\operatorname{Cov}(X_i,X_j) = \operatorname{Var}(X_i+X_j) - \operatorname{Var}(X_i-X_j).$$
This tells us that if you understand variances of univariate random variables, you already understand covariances of bivariate variables: they are "just" linear combinations of variances.
The expression in the question is perfectly analogous: the variables $X_i$ have been standardized as in $(1)$. We can understand what it represents by considering what it means for any variable, standardized or not. We would replace each $X_i$ by its centered version, as in $(2)$, and form quantities having three indexes,
$$\mu_3(\mathbf{X})_{ijk} = E[X_i^\prime X_j^\prime X_k^\prime].$$
These are the central (multivariate) moments of degree $3$. As in $(4)$, they form a tensor: when $\mathbf{Y} = \mathbb{A}\mathbf{X}$, then
$$\mu_3(\mathbf{Y})_{ijk} = \sum_{l,m,n} a_i^{\,l}a_j^{\,m}a_k^{\,n} \mu_3(\mathbf{X})_{lmn}.$$
The indexes in this triple sum range over all combinations of integers from $1$ through $p$.
The analog of the Polarization Identity is
$$\eqalign{&24\mu_3(\mathbf{X})_{ijk} = \\ &\mu_3(X_i+X_j+X_k) - \mu_3(X_i-X_j+X_k) - \mu_3(X_i+X_j-X_k) + \mu_3(X_i-X_j-X_k).}$$
On the right hand side, $\mu_3$ refers to the (univariate) central third moment: the expected value of the cube of the centered variable. When the variables are standardized, this moment is usually called the skewness. Accordingly, we may think of $\mu_3(\mathbf{X})$ as being the multivariate skewness of $\mathbf{X}$. It is a tensor of rank three (that is, with three indices) whose values are linear combinations of the skewnesses of various sums and differences of the $X_i$. If we were to seek interpretations, then, we would think of these components as measuring in $p$ dimensions whatever the skewness is measuring in one dimension. In many cases,
The first moments measure the location of a distribution;
The second moments (the variance-covariance matrix) measure its spread;
The standardized second moments (the correlations) indicate how the spread varies in $p$-dimensional space; and
The standardized third and fourth moments are taken to measure the shape of a distribution relative to its spread.
To elaborate on what a multidimensional "shape" might mean, observe that we can understand principal component analysis (PCA) as a mechanism to reduce any multivariate distribution to a standard version located at the origin and equal spreads in all directions. After PCA is performed, then, $\mu_3$ would provide the simplest indicators of the multidimensional shape of the distribution. These ideas apply equally well to data as to random variables, because data can always be analyzed in terms of their empirical distribution.
Reference
Alan Stuart & J. Keith Ord, Kendall's Advanced Theory of Statistics Fifth Edition, Volume 1: Distribution Theory; Chapter 3, Moments and Cumulants. Oxford University Press (1987).
Appendix: Proof of the Generalized Polarization Identity
Let $x_1,\ldots, x_n$ be algebraic variables. There are $2^n$ ways to add and subtract all $n$ of them. When we raise each of these sums-and-differences to the $n^\text{th}$ power, pick a suitable sign for each of those results, and add them up, we will get a multiple of $x_1x_2\cdots x_n$.
More formally, let $S=\{1,-1\}^n$ be the set of all $n$-tuples of $\pm 1$, so that any element $s\in S$ is a vector $s=(s_1,s_2,\ldots,s_n)$ whose coefficients are all $\pm 1$. The claim is
$$2^n n!\, x_1x_2\cdots x_n = \sum_{s\in S} \color{red}{s_1s_2\cdots s_n}(s_1x_1+s_2x_2+\cdots+s_nx_n)^n.\tag{1}$$
A prettier way of writing this equality helps explain the factor of $2^n n!$ that appears: upon dividing by $2^n$ we obtain the average of the terms on the right side (since $S$ has $|S|=2^n$ elements) and the $n!$ counts the distinct ways to form the monomial $x_1\cdots x_n$ from products of its components--namely, it counts the elements of the symmetric group $\mathfrak{S}^n.$ Thus, upon abbreviating $s_1s_2\cdots s_n=\chi(\mathbf s)$ and letting $\mathbf{s}\cdot \mathbf{x} = s_1x_1+ \cdots + s_nx_n$ be the (usual) dot product of vectors,
$$\sum_{\sigma\in\mathfrak{S}^n} x_{\sigma(1)}x_{\sigma(2)}\cdots x_{\sigma(n)} = \frac{1}{|S|}\sum_{\mathbf s\in S} \color{red}{\chi(\mathbf s)}(\mathbf{s}\cdot \mathbf{x} )^n.\tag{1}$$
Indeed, the Multinomial Theorem states that the coefficient of the monomial $x_1^{i_1}x_2^{i_2}\cdots x_n^{i_n}$ (where the $i_j$ are nonnegative integers summing to $n$) in the expansion of any term on the right hand side is
$$\binom{n}{i_1,i_2,\ldots,i_n}s_1^{i_1}s_2^{i_2}\cdots s_n^{i_n}.$$
In the sum $(1)$, the coefficients involving $x_1^{i_1}$ appear in pairs where one of each pair involves the case $s_1=1$, with coefficient proportional to $ \color{red}{s_1}$ times $s_1^{i_1}$, equal to $1$, and the other of each pair involves the case $s_1=-1$, with coefficient proportional to $\color{red}{-1}$ times $(-1)^{i_1}$, equal to $(-1)^{i_1+1}$. They cancel in the sum whenever $i_1+1$ is odd. The same argument applies to $i_2, \ldots, i_n$. Consequently, the only monomials that occur with nonzero coefficients must have odd powers of all the $x_i$. The only such monomial is $x_1x_2\cdots x_n$. It appears with coefficient $\binom{n}{1,1,\ldots,1}=n!$ in all $2^n$ terms of the sum. Consequently its coefficient is $2^nn!$, QED.
We need take only half of each pair associated with $x_1$: that is, we can restrict the right hand side of $(1)$ to the terms with $s_1=1$ and halve the coefficient on the left hand side to $2^{n-1}n!$ . That gives precisely the two versions of the Polarization Identity quoted in this answer for the cases $n=2$ and $n=3$: $2^{2-1}2! = 4$ and $2^{3-1}3!=24$.
Of course the Polarization Identity for algebraic variables immediately implies it for random variables: let each $x_i$ be a random variable $X_i$. Take expectations of both sides. The result follows by linearity of expectation.
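As a quick sanity check of identity $(1)$, here is a short numerical comparison of both sides for $n=3$ (a Python sketch, chosen just for illustration; the vector `x` is arbitrary):

```python
import itertools
import math

def polarization_rhs(x):
    # Right side of (1): sum over all sign vectors s of chi(s) * (s . x)^n.
    n = len(x)
    total = 0.0
    for s in itertools.product((1, -1), repeat=n):
        chi = math.prod(s)  # chi(s) = s_1 s_2 ... s_n
        total += chi * sum(si * xi for si, xi in zip(s, x)) ** n
    return total

x = (0.7, -1.3, 2.1)
lhs = 2 ** 3 * math.factorial(3) * x[0] * x[1] * x[2]
print(abs(lhs - polarization_rhs(x)) < 1e-9)  # True: the two sides agree
```

The same check passes for any $n$ and any real vector, since the cancellation argument above is exact.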

|
11,191
|
Analogy of Pearson correlation for 3 variables
|
Hmmm. If we run...
a <- rnorm(100);
b <- rnorm(100);
c <- rnorm(100)
mean((a-mean(a))*(b-mean(b))*(c-mean(c)))/
(sd(a) * sd(b) * sd(c))
it does seem to center on 0 (I haven't done a real simulation), but as @ttnphns alludes, running this (all variables the same)
a <- rnorm(100)
mean((a-mean(a))*(a-mean(a))*(a-mean(a)))/
(sd(a) * sd(a) * sd(a))
also seems to center on 0, which certainly makes me wonder what use this could be.
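For what it's worth, a replicated version of the same experiment (a Python sketch with the same hypothetical setup: three independent standard normals of size 100, repeated 2000 times) backs up the impression: averaged over replications the statistic sits very close to 0.

```python
import random

def mean(v):
    return sum(v) / len(v)

def sd(v):
    m = mean(v)
    return (sum((x - m) ** 2 for x in v) / (len(v) - 1)) ** 0.5

def three_way_stat(a, b, c):
    # mean of centered triple products, scaled by the three SDs
    ma, mb, mc = mean(a), mean(b), mean(c)
    num = mean([(x - ma) * (y - mb) * (z - mc) for x, y, z in zip(a, b, c)])
    return num / (sd(a) * sd(b) * sd(c))

random.seed(1)
reps = [three_way_stat(*[[random.gauss(0, 1) for _ in range(100)]
                         for _ in range(3)])
        for _ in range(2000)]
print(round(mean(reps), 3))  # close to 0
```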
|
11,192
|
Analogy of Pearson correlation for 3 variables
|
If you need to calculate a "correlation" between three or more variables, you cannot use the Pearson correlation, as in that case the result will differ depending on the order of the variables; have a look here.
If you are interested in linear dependency, or in how well the variables are fitted by a 3D line, you may use PCA: obtain the explained variance for the first PC, permute your data, and find the probability that this value could arise for purely random reasons. I've discussed something similar here (see Technical details below).
Matlab code
% Simulate our experimental data
x=normrnd(0,1,100,1);
y=2*x.*normrnd(1,0.1,100,1);
z=(-3*x+1.5*y).*normrnd(1,2,100,1);
% perform pca
[loadings, scores, variance]=pca([x,y,z]);
% Observed Explained Variance for first
% principal component
OEV1 = variance(1)/sum(variance)
% perform permutations
permOEV1 = [];
for iPermutation = 1:1000
    permX = datasample(x, numel(x), 'replace', false);
    permY = datasample(y, numel(y), 'replace', false);
    permZ = datasample(z, numel(z), 'replace', false);
    [loadings, scores, variance] = pca([permX, permY, permZ]);
    permOEV1(end+1) = variance(1) / sum(variance);
end
% Calculate p-value
p_value = sum(permOEV1 >= OEV1) / (numel(permOEV1) + 1)
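For readers without Matlab, here is a rough Python/NumPy equivalent (my translation, not the original code; the PCA is done via an SVD of the centered data, which yields the same explained-variance fractions):

```python
import numpy as np

rng = np.random.default_rng(0)

def explained_variance_pc1(data):
    # Fraction of total variance carried by the first principal component.
    centered = data - data.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)  # singular values, descending
    return s[0] ** 2 / np.sum(s ** 2)

# Simulate experimental data with a strong linear relationship, as above.
x = rng.normal(0, 1, 100)
y = 2 * x * rng.normal(1, 0.1, 100)
z = (-3 * x + 1.5 * y) * rng.normal(1, 2, 100)
data = np.column_stack([x, y, z])
oev1 = explained_variance_pc1(data)

# Permutation null: shuffling each column independently destroys any dependence.
perm = np.array([explained_variance_pc1(
            np.column_stack([rng.permutation(col) for col in data.T]))
        for _ in range(1000)])
p_value = (np.sum(perm >= oev1) + 1) / (len(perm) + 1)
```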
|
11,193
|
How do betting houses determine betting odds for sports?
|
How odds are set is a really interesting subject that I have done some research into, as I have with sports analytics more generally.
The first paper I would refer to covers the NFL specifically: "Why are Gambling Markets organised so differently from Financial Markets", Steven D. Levitt (The Economic Journal, 2004). This illustrates that the odds on the NFL are rarely set to generate 50/50 action, because the bookmaker can exploit "square" action by skewing odds against bettors' traditional biases (i.e. the point made above about the Ohio State Buckeyes - if the bookmaker is aware that they are going to take a larger % of the bets, they can either adjust the odds or the spread so the bettor has to pay a premium to bet the Buckeyes - e.g. -7.5, more than one touchdown, instead of -6.5 - especially if the true rating for the game was around -5 or -6). It also makes the point that bookmakers/sportsbooks rarely make the odds themselves; they usually pay influential odds makers who set the line for a lot of events. The bookmaker will then rarely adjust these odds greatly, as doing so would effectively handicap the market against other bookmakers and sportsbooks (generating a profitable opportunity for "sharp" action).
In the case of the game quoted by the OP, the prices quoted by Bet365 are consistent with the over-round % that they have run on most football games this season, between 105-107% (I have an interest in this - their over-round % on the English Premier League is typically 5-6%). That 5-7% margin will look after them in the long run, as it means unsophisticated gamblers have to be more right than average in the long run to make a sustained profit. How the actual odds are generated is another matter; in the case of Bet365, a lot of their competitors use the Bet Genius group for odds data (e.g. Sportingbet, Paddy Power, Sky Bet). They will probably then make small adjustments to this based on their typical clients' betting preferences (e.g. what type of action they take and their biases).
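To make the over-round concrete: the inverse decimal odds sum to more than 1, and the excess is the bookmaker's margin. A minimal sketch (the home/draw/away prices are hypothetical):

```python
def implied_probs(decimal_odds):
    # Raw implied probabilities 1/odds sum to the over-round (> 1);
    # dividing through by it recovers a proper probability vector.
    raw = [1.0 / o for o in decimal_odds]
    overround = sum(raw)
    return [p / overround for p in raw], overround

# Hypothetical home/draw/away prices for a football match.
probs, overround = implied_probs([2.10, 3.40, 3.60])
print(f"book margin: {100 * (overround - 1):.1f}%")  # roughly 4.8%
```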
For a lot of sports the Cantor Fitzgerald group have created the Midas Algorithm to set up odds in the same way they would deal on Wall Street, and they have an increasing presence in Las Vegas running several sports books - http://m.wired.com/magazine/2010/11/ff_midas/all/1. This has allowed them to set spreads for the entire NFL season (http://www.grantland.com/blog/the-triangle/post/_/id/27740/nfl-win-totals-hot-off-the-sportsbook-press) before the pre-season has taken place (which is not typical, as most bookmakers seem to react to week-to-week action and to player injuries and performance).
How are the actual odds generated? This is the more difficult question. Going on Mathletics (Wayne L. Winston, 2009), some sports, e.g. the NFL, can be modelled by a simple least squares algorithm based on margins of victory and points scored, which can then be finessed (e.g. to give more weight to recent games). This can then be used to generate win percentages based on the ratings derived. In the case of the NFL, Hal Stern, "On the Probability of Winning an American Football Game" (American Statistician 45, 1991), showed that the probability of the final margin of victory for the home NFL team can be well approximated by a normal random variable with mean = home edge + home team rating - away team rating and a standard deviation of 13.86. Plug the ratings generated by your least squares work in and you have a set of percentages against a given spread. I believe that this can also be applied to a lot of other sports (e.g. Australian Rules Football). In the case of football, though, I believe that oddsmakers have also done some regression analysis on player statistics, to allow them to make a more rational rating based on the players that will actually be on the pitch rather than on past team performance in terms of margins of victory (e.g. the Dtech group, who analyse European football for the Times newspaper, base their ratings on team shots and goals data - http://www.dectech.org/football/help_info.php - rather than a least squares model based purely on margin of victory). Given that sports could and should be viewed as an academic subject, I believe this is why we have seen an increase in the number of groups such as Accuscore, who have a largely academic background (from interviews on the ESPN Behind the Bets podcast) and have used their knowledge to generate opportunities from odds skewed to exploit gamblers that bet with pre-conditioned biases (e.g. the home team favourite wins more than 50% of games).
If you can remove bias from the team that you pick, I believe this will generate opportunity.
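A sketch of how Stern's approximation turns ratings into probabilities (Python for illustration; the 13.86 standard deviation is from the paper, while the 3-point home edge is a commonly quoted NFL figure I've assumed, and the ratings below are hypothetical):

```python
from math import erf, sqrt

def normal_cdf(x, mu, sd):
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

def home_probs(home_rating, away_rating, line, home_edge=3.0, sd=13.86):
    # Stern: the home team's final margin is approximately
    # N(home_edge + home_rating - away_rating, 13.86^2).
    mu = home_edge + home_rating - away_rating
    win = 1.0 - normal_cdf(0.0, mu, sd)     # P(margin > 0)
    cover = 1.0 - normal_cdf(line, mu, sd)  # P(margin > line)
    return win, cover

# Hypothetical ratings: home team rated 4 points better, laying 6.5 points.
win, cover = home_probs(4.0, 0.0, 6.5)
```

Note how a large rating edge translates into a comfortable win probability but only a slim edge against the spread, which is exactly why the line matters so much.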
|
11,194
|
How do betting houses determine betting odds for sports?
|
The following is for entertainment purposes only. Sports betting is a very interesting academic topic, and I recommend you keep it an academic topic. You incur your own financial (and legal, in some jurisdictions) risks by acting upon anything I say :)
The process is more complicated than many people make it out to be. First of all, there are examples of sports books that are explicitly based upon the public's actions...basically they are matchmakers for those willing to offer bets and those willing to place them. Those generally aren't what we are talking about though. As for the others, one school of thought is for the houses to expertly determine the "true" odds, and set the line accordingly. As many in the comments pointed out, another school of thought is that the house can manipulate the line so as to receive balanced action on each side and guarantee profits (as was noted, the implied probabilities do not sum to 100%, so the difference is accounted for by the juice or vig). However, legitimate betting houses handle thousands and thousands of events every year, so they don't have to be overly concerned with guaranteeing a smaller profit if a larger profit can be had in expectation. As handled by sports books, this is not purely an accounting problem.
It's important at this point to make a couple distinctions. First I'll note the difference between opening lines and closing lines. Betting houses have to have some line before the market has acted on their lines, and these opening lines are considered less efficient than the closing lines just before an event starts. It's also important to note the difference between so-called "sharps" and "squares" in the betting world. Despite what they might think, most bettors are so-called "squares" that don't really have much if any discernible skill in picking a side on which to bet. "Sharps," on the other hand, are well-funded experts that know what they are doing. Whereas a square probably has a set limit that he will bet, independent of the odds, a sharp is very different. A sharp won't bet at all if he feels the odds are unfavorable, and he will bet a great deal if the lines do not update to reflect his actions. This is not pure irrational greed on the sharp's part; there is a statistical basis known as the Kelly criterion for determining how much to bet when one has an advantage.
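The Kelly criterion mentioned above has a simple closed form for a two-outcome bet; a minimal sketch (Python for illustration):

```python
def kelly_fraction(p, b):
    # Optimal bankroll fraction for a bet won with probability p
    # that pays b units of profit per unit staked (decimal odds b + 1).
    # f* = (b*p - q) / b, floored at zero: never bet without an edge.
    q = 1.0 - p
    return max((b * p - q) / b, 0.0)

# A 55% winner at even money suggests staking about 10% of bankroll.
print(kelly_fraction(0.55, 1.0))
```

This is why a sharp's stake scales with the size of the perceived edge, rather than being a fixed amount like a square's.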
So the existence of sharps makes it surprisingly non-trivial to simply set a 50/50 line and happily lock in profits. If the line is bad, sharps will hammer it over and over again. To maintain the 50/50 action, the house would have to adjust the line to the point where sharps no longer feel they have an advantage. There are lots of forces at work here...market forces, arbitrage between different books, and lots of information, but these are going to tend to make that closing line fairly close to the true odds, whether that was the original intention or not. Of course it's complicated since the house might have non-public information or might think they are smarter than the public. There's not really a simple answer, but just keep in mind that a) sports books want to maximize profit and b) there are smart people out there who bet more when the line is worse.
As for how they might determine the lines aside from actual betting (which might be important for instance in setting opening lines), there are numerous things they can do. First of all, they might look to other books and assume some variant of the efficient market hypothesis. However, they might also employ the same types of techniques that sharp bettors employ. In the 21st century, sports analytics is a growing field. There are numerous academic journals, conferences, and blogs dedicated solely to sports analytics. There's also a ton of data out there, beyond just which team won or lost. In Major League Baseball, for instance, data exists on just about anything one might want to track. Every play of every regular season game for roughly the last 60 years can be downloaded by anyone with an interest. There are also people that are quite knowledgeable at ascertaining relative skills of players by observing them play. I can't say what a specific book does, but the potential combination of analytics and scouting on behalf of the sports book is quite formidable.
|
11,195
|
Building a linear model for a ratio vs. percentage?
|
I've never seen a regression model for ratios before, but regression for a percentage (or more commonly, a fraction) is quite common. The reason may be that it's easy to write down a likelihood (probability of the data given your parameter) in terms of a fraction or probability: each element has a probability $p$ of being in category $A$ (vs. $B$). The estimate of $p$ is then the estimated fraction.
Note however: it's not standard to make a linear model for a fraction; more common is a generalized linear model, which is a linear model along with an invertible, nonlinear 'link' function that controls the range of the desired model (here $[0,1]$).
The most common model for fractions is (as you noted) logistic regression, which allows you to use regressors on the real line but have a fraction constrained to live on [0,1]. However, logistic regression is technically a model for binary data, meaning you observe a series of events where each input (set of independent variables) produces an independent observation of $0$ or $1$. For the case where you just have a population divided into two different classes (i.e., you don't have separate regressors for each member of the population), you might want binomial regression.
That being said, there's probably nothing to stop you from writing down a generalized linear model (GLM) for ratios. (Logistic and binomial regression are also GLMs). You'd need to pick a function mapping from the input space to the space of possible ratios (e.g., $\log$), then write down your likelihood in terms of the resulting ratio.
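The 'link' idea fits in a few lines; a minimal Python sketch of the logit link used by logistic regression, plus the log link one might use for a positive ratio:

```python
import math

def logit(p):
    # Link: maps a fraction in (0, 1) to the whole real line.
    return math.log(p / (1.0 - p))

def inv_logit(eta):
    # Inverse link: maps the real line back into (0, 1).
    return 1.0 / (1.0 + math.exp(-eta))

def log_link_inverse(eta):
    # For a positive ratio, exp maps the real line to (0, inf).
    return math.exp(eta)

# The linear predictor eta = b0 + b1*x can roam freely; the inverse link
# guarantees the fitted fraction (or ratio) stays in its legal range.
print(inv_logit(logit(0.25)))  # recovers 0.25 (up to rounding)
```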
|
11,196
|
Building a linear model for a ratio vs. percentage?
|
Echoing the first answer. Don't bother to convert - just model the counts and covariates directly.
If you do that and fit a Binomial (or equivalently logistic) regression model to the boy girl counts you will, if you choose the usual link function for such models, implicitly already be fitting a (covariate smoothed logged) ratio of boys to girls. That's the linear predictor.
The primary reason to model counts directly rather than proportions or ratios is that you don't lose information. Intuitively you'd be a lot more confident about inferences from an observed ratio of 1 (boys to girls) if it came from seeing 100 boys and 100 girls than from seeing 2 and 2. Consequently, if you have covariates you'll have more information about their effects and potentially a better predictive model.
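To make this concrete, here is a small self-contained sketch (Python, with made-up counts, not a library call): a one-covariate binomial GLM fitted by Newton-Raphson, whose exponentiated linear predictor is exactly the covariate-smoothed boys-to-girls ratio described above.

```python
import math

# Hypothetical grouped data: at covariate value x[i] we observed
# n[i] children, k[i] of them boys.
x = [0.0, 1.0, 2.0, 3.0]
n = [50, 60, 55, 40]
k = [24, 33, 35, 30]

def fit_binomial_glm(x, n, k, iters=25):
    # Newton-Raphson for logistic (binomial) regression on grouped counts.
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, ni, ki in zip(x, n, k):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            w = ni * p * (1.0 - p)   # information weight
            r = ki - ni * p          # score contribution
            g0 += r
            g1 += r * xi
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

b0, b1 = fit_binomial_glm(x, n, k)
# exp(b0 + b1*x) is the fitted boys-to-girls ratio at covariate value x.
ratio_at_0 = math.exp(b0)
```

Because the fit uses the raw counts, a group of 100 children influences the estimates more than a group of 4 with the same observed ratio, which is exactly the information-loss point made above.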
|
11,197
|
Meaning of 'number of parameters' in AIC
|
As mugen mentioned, $k$ represents the number of parameters estimated. In other words, it's the number of additional quantities you need to know in order to fully specify the model. In the simple linear regression model
$$y=ax+b$$
you can estimate $a$, $b$, or both. Whichever quantities you don't estimate you must fix. There is no "ignoring" a parameter in the sense that you don't know it and don't care about it. The most common model that doesn't estimate both $a$ and $b$ is the no-intercept model, where we fix $b=0$. This will have 1 parameter. You could just as easily fix $a=2$ or $b=1$ if you have some reason to believe that it reflects reality. (Fine point: $\sigma$ is also a parameter in a simple linear regression, but since it's there in every model you can drop it without affecting comparisons of AIC.)
If your model is
$$y=af(c,x)+b$$
the number of parameters depends on whether you fix any of these values, and on the form of $f$. For example, if we want to estimate $a, b, c$ and know that $f(c,x)=x^c$, then when we write out the model we have
$$y=ax^c+b$$
with three unknown parameters. If, however, $f(c,x)=cx$, then we have the model
$$y=acx+b$$
which really only has two parameters: $ac$ and $b$.
It is crucial that $f(c,x)$ is a family of functions indexed by $c$. If all you know is that $f(c,x)$ is continuous and it depends on $c$ and $x$, then you're out of luck because there are uncountably many continuous functions.
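As a toy illustration of the parameter count in practice, here is a Python sketch (with made-up data) comparing the AIC of the no-intercept model $y=ax$ (which estimates $a$ and $\sigma$, so $k=2$) against the full model $y=ax+b$ ($k=3$):

```python
import math

def gaussian_aic(y, yhat, k):
    # AIC = 2k - 2 ln L, with L the Gaussian likelihood maximized over
    # sigma: the MLE of sigma^2 is the mean squared residual.
    n = len(y)
    sigma2 = sum((yi - fi) ** 2 for yi, fi in zip(y, yhat)) / n
    loglik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    return 2 * k - 2 * loglik

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.8]  # made-up data, roughly y = 2x

# Model y = a x: estimates a (and sigma), so k = 2.
a = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
aic_no_intercept = gaussian_aic(y, [a * xi for xi in x], k=2)

# Model y = a x + b: estimates a, b (and sigma), so k = 3.
m = len(x)
mx = sum(x) / m
my = sum(y) / m
a2 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
      / sum((xi - mx) ** 2 for xi in x))
b2 = my - a2 * mx
aic_full = gaussian_aic(y, [a2 * xi + b2 for xi in x], k=3)

# These data pass near the origin, so the intercept barely improves the
# fit and is not worth its AIC penalty: aic_no_intercept < aic_full.
```

(As noted in the fine point above, $\sigma$ cancels out of comparisons only when it appears in both models; here it is counted in both, which keeps the comparison honest.)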
|
11,198
|
Meaning of 'number of parameters' in AIC
|
For any statistical model, the AIC value is $\mathit{AIC} = 2k - 2\ln(L)$,
where $k$ is the number of parameters in the model, and $L$ is the maximized value of the likelihood function for the model.
(see here)
As you may see, $k$ represents the number of parameters estimated in each model. If your model includes an intercept (that is, if you compute a point estimate, variance and confidence interval for the intercept) then it counts as a parameter. On the other hand, if you fit a model without an intercept, it does not count.
Remember that AIC does not only summarise goodness of fit but it also considers the complexity of the model. That's why $k$ exists, to penalise models with more parameters.
I don't feel knowledgeable enough to answer your second question, I'll leave it for another member of the community.
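To see the penalty in action, here is a tiny numeric illustration (the maximized log-likelihoods below are invented): a model with more parameters must buy a correspondingly better fit before its AIC improves.

```python
def aic(k, loglik):
    """AIC = 2k - 2 ln(L)."""
    return 2 * k - 2 * loglik

# Made-up maximized log-likelihoods: the complex model fits a bit
# better, but not by enough to pay for three extra parameters.
aic_simple = aic(k=2, loglik=-100.0)   # 2*2 + 200 = 204.0
aic_complex = aic(k=5, loglik=-99.0)   # 2*5 + 198 = 208.0

print(aic_simple, aic_complex)  # 204.0 208.0 -> the simpler model wins
```

The complex model would need its log-likelihood to exceed $-97$ (i.e. an improvement of more than one unit per extra parameter) before its AIC dropped below the simple model's.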
|
11,199
|
Meaning of 'number of parameters' in AIC
|
First, to those who may not be familiar with AIC: the Akaike Information Criterion (AIC) is a simple metric designed to compare the "goodness" of models.
According to AIC, when trying to choose between two different models applying to the same input and response variables, i.e. models designed to solve the same problem, the model with the lower AIC is considered "better".
In the AIC formula, $k$ refers to the number of estimated parameters; in a regression this is roughly one per input variable (feature, or column), plus the intercept. The more parameters the model needs to produce its estimate or prediction, the higher the AIC. This ensures that of two models with the same predictive power or accuracy, the simpler model wins. This is a form of Occam's razor.
So the simple answer to the last question is: if the $c$ in $f(c, x)$ is a constant that doesn't change with the observations (i.e. it is fixed rather than estimated), then it should not be included in $k$.
|
11,200
|
Equations in the news: Translating a multi-level model to a general audience
|
Here's one possibility.
Assessing teacher performance has traditionally been difficult. One part of this difficulty is that different students have different levels of interest in a given subject. If a given student gets an A, this doesn't necessarily mean that teaching was excellent -- rather, it may mean that a very gifted and interested student did his best to succeed even despite poor teaching quality. Conversely, a student getting a D doesn't necessarily mean that the teaching was poor -- rather, it may mean that a disinterested student coasted despite the teacher's best efforts to educate and inspire.
The difficulty is aggravated by the fact that student selection (and therefore the students' level of interest) is far from random. It is common for schools to emphasize one subject (or a group of subjects) over others. For example, a school may emphasize technical subjects over humanities. Students in such schools are probably so interested in technical areas that they will receive a passing grade even with the worst possible teacher. Thus the fraction of students passing math is not a good measure of teaching -- we expect good teachers to do much better than that with students who are so eager to learn. In contrast, those same students may not be interested at all in arts, and even the best teacher could hardly be expected to ensure they all get A's.
Another difficulty is that not all success in a given class is attributable to that class's teacher directly. Rather, the success may be due to the school (or entire district) creating motivation and framework for achievement.
To take into account all of these difficulties, researchers have created a model that evaluates a teacher's 'added value'. In essence, the model takes into account the intrinsic characteristics of each student (overall level of interest and success in learning), as well as the school's and district's contributions to student success, and predicts the grades students would be expected to get with 'average' teaching in that environment. The model then compares the actual grades to the predicted ones and, based on this comparison, decides whether the teaching was adequate given all the other considerations, better than adequate, or worse. Although the model may seem complex to a non-mathematician, it is actually pretty simple and standard. Mathematicians have been using similar (and even more complex) models for decades.
To summarize, Ms. Isaacson's guess is correct. Even though 65 of her 66 students scored proficient on the state test, they would have scored just the same even if a dog were their teacher. An actual good teacher would enable these students to achieve not merely 'proficient', but actually 'good' scores on the same test.
At this point I could mention some of my concerns with the model. For example, the model's developers claim it addresses some of the difficulties with evaluating teaching quality. Do I have enough reason to believe them? Neighborhoods with a lower-income population will have lower expected 'district' and 'school' scores. Say a neighborhood has an expected score of 2.5. A teacher who achieves an average of 3 will get a good evaluation. This may prompt teachers to aim for a score of 3, rather than for a score of, say, 4 or 5. In other words, teachers will aim for mediocrity rather than perfection. Do we want this to happen? Finally, even though the model is simple mathematically, it works in a way very different from how human intuition works. As a result, we have no obvious way to validate or dispute the model's decisions. Ms. Isaacson's unfortunate example illustrates what this may lead to. Do we want to depend blindly on a computer for something so important?
Note that this is an explanation to a layperson. I sidestepped several potentially controversial issues here. For example, I didn't want to say that school districts with low income demographics are expected to perform poorer, because this wouldn't sound good to a layperson.
Also, I have assumed that the goal is actually to give a reasonably fair description of the model. But I'm pretty sure that this wasn't NYT's goal here. So at least part of the reason their explanation is poor is intentional FUD, in my opinion.
|