14,801 | Logit with ordinal independent variables

It's perfectly fine to use a categorical predictor in a logit (or OLS) regression model if the levels are ordinal. But if you have a reason to treat each level as discrete (or if in fact your categorical variable is nominal rather than ordinal), then, as an alternative to dummy coding, you can also use orthogonal contrast coding. For a very complete and accessible discussion, see Judd, C.M., McClelland, G.H. & Ryan, C.S., Data Analysis: A Model Comparison Approach, 2nd ed. (Routledge/Taylor and Francis, New York, NY, 2008), or just google "contrast coding".
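The difference between dummy coding and orthogonal contrast coding can be sketched numerically. The following is an illustrative Python/numpy construction (my own, not from the cited book): treatment coding analogous to R's contr.treatment, and orthogonal polynomial contrasts analogous to contr.poly, built by QR-orthogonalizing the powers of the levels.

```python
import numpy as np

k = 4                                    # number of ordinal levels (arbitrary choice)
levels = np.arange(1, k + 1)

# Dummy (treatment) coding: level 1 is the reference category.
dummy = (levels[:, None] == levels[None, 1:]).astype(float)

# Orthogonal polynomial contrasts (linear, quadratic, cubic),
# analogous to R's contr.poly(4): orthogonalize 1, x, x^2, x^3 via QR.
X = np.vander(levels, k, increasing=True).astype(float)  # columns 1, x, x^2, x^3
Q, _ = np.linalg.qr(X)
contrasts = Q[:, 1:]                     # drop the intercept column

# The contrast columns are orthonormal and each sums to zero
# (i.e., they are orthogonal to the intercept).
print(np.round(contrasts.T @ contrasts, 10))
print(np.round(contrasts.sum(axis=0), 10))
```

Either matrix can then be used as the design columns for the categorical predictor; the fitted values are identical, only the interpretation of the coefficients changes.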
14,802 | Why does a bagged tree / random forest tree have higher bias than a single decision tree?

I will accept the answer on 1) from Kunlun, but just to close this case, I will give here the conclusions on the two questions that I reached in my thesis (both of which were accepted by my supervisor):
1) More data produces better models, and since we only use part of the whole training data to train each model (the bootstrap sample), higher bias occurs in each tree (from Kunlun's answer).
2) In the Random Forests algorithm, we limit the number of variables to split on at each split - i.e., we limit the number of variables with which to explain our data. Again, higher bias occurs in each tree.
Conclusion: Both situations are a matter of limiting our ability to explain the population: first we limit the number of observations, then we limit the number of variables to split on at each split. Both limitations lead to higher bias in each tree, but often the variance reduction in the model outweighs the bias increase in each tree, and thus Bagging and Random Forests tend to produce a better model than a single decision tree.
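As a rough numerical illustration of this tradeoff (my own toy construction, not from the thesis), the sketch below uses a one-split regression "stump" as a stand-in for a tree. It compares a stump fit to the full sample against an average of stumps fit to bootstrap resamples, estimating bias² and variance at fixed test points over repeated datasets; typically the bagged predictor shows lower variance, at the cost of (at most) a small bias increase.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)                    # true regression function
xs = np.linspace(0, 2, 21)                     # fixed test points

def fit_stump(x, y):
    """Best single-split regression tree (a stump), chosen by squared error."""
    best_sse, best = np.inf, None
    for t in np.unique(x)[1:]:
        left, right = y[x < t], y[x >= t]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best_sse:
            best_sse, best = sse, (t, left.mean(), right.mean())
    t, ml, mr = best
    return lambda z: np.where(z < t, ml, mr)

reps, B, n = 300, 25, 50
single = np.empty((reps, xs.size))
bagged = np.empty((reps, xs.size))
for r in range(reps):
    x = rng.uniform(0, 2, n)
    y = f(x) + rng.normal(0, 0.3, n)
    single[r] = fit_stump(x, y)(xs)
    preds = np.zeros(xs.size)
    for _ in range(B):                         # bagging: average bootstrap stumps
        i = rng.integers(0, n, n)
        preds += fit_stump(x[i], y[i])(xs)
    bagged[r] = preds / B

bias2 = lambda P: ((P.mean(axis=0) - f(xs))**2).mean()
variance = lambda P: P.var(axis=0).mean()
print("single stump:  bias^2 =", bias2(single), " var =", variance(single))
print("bagged stumps: bias^2 =", bias2(bagged), " var =", variance(bagged))
```

The sample sizes, noise level, and number of bootstrap replicates are arbitrary choices for the demonstration.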
14,803 | Why does a bagged tree / random forest tree have higher bias than a single decision tree?

According to the authors of "The Elements of Statistical Learning" (see the proof below):

As in bagging, the bias of a random forest is the same as the bias of any of the individual sampled trees.

Taken from Hastie, Tibshirani & Friedman, The Elements of Statistical Learning, 2nd ed. (2008), Chapter 9.2.3.
Your answer, however, seems to make sense, and in the right plot of Fig. 15.10 we can see that the green horizontal line, which is the squared bias of a single tree, is way below the bias of a random forest. This seems to be a contradiction which I have not sorted out yet.
EDIT:
The above is clarified right below the proof (same source): a tree within the random forest has the same bias as the random forest, where the single tree is restricted by the bootstrap and by the number of regressors randomly selected at each split (m). A fully grown, unpruned tree outside the random forest, on the other hand (not bootstrapped and not restricted by m), has lower bias. Hence random forests / bagging improve through variance reduction only, not bias reduction.
14,804 | Why does a bagged tree / random forest tree have higher bias than a single decision tree?

Your questions are pretty straightforward. 1) More data produces better models, and since you only use part of the whole training data to train your model (the bootstrap sample), higher bias is reasonable. 2) More splits mean deeper trees, or purer nodes. This typically leads to high variance and low bias. If you limit the splits, you get lower variance and higher bias.
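Point 2) can be illustrated with a minimal Python sketch (my own toy construction): for a greedily grown regression tree, the training SSE never increases as the tree is split further, which is the sense in which limiting the splits leaves more bias on the table.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 2, 60)
y = np.sin(3 * x) + rng.normal(0, 0.3, 60)

def stump_sse(x, y):
    """Training SSE and threshold of the best single split (depth-1 tree)."""
    if np.unique(x).size < 2:                  # cannot split: predict the mean
        return ((y - y.mean())**2).sum(), x.min() - 1.0
    best_sse, best_t = np.inf, None
    for t in np.unique(x)[1:]:
        l, r = y[x < t], y[x >= t]
        sse = ((l - l.mean())**2).sum() + ((r - r.mean())**2).sum()
        if sse < best_sse:
            best_sse, best_t = sse, t
    return best_sse, best_t

sse0 = ((y - y.mean())**2).sum()               # depth-0 "tree": just the mean
sse1, t = stump_sse(x, y)                      # depth-1 tree: one split
# depth-2 tree: split again within each half of the depth-1 split
sse2 = stump_sse(x[x < t], y[x < t])[0] + stump_sse(x[x >= t], y[x >= t])[0]
print(sse0, sse1, sse2)   # training error shrinks monotonically with depth
```

The data-generating function and sample size here are arbitrary; the monotone decrease in training SSE holds for any dataset.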
14,805 | Iconic (toy) models of neural networks

One of the most classical is the Perceptron in 2 dimensions, which goes back to the 1950s. This is a good example because it is a launching pad for more modern techniques:
1) Not everything is linearly separable (hence the need for nonlinear activations or kernel methods, multiple layers, etc.).
2) The Perceptron won't converge if the data is not linearly separable (hence continuous measures of separation such as softmax, learning-rate decay, etc.).
3) While there are infinitely many solutions for splitting the data, it's clear that some are more desirable than others (maximum boundary separation, SVMs, etc.).
For multilayer neural networks, you might like the toy classification examples that come with this visualization.
For Convolutional Neural Nets, MNIST is the classical gold standard, with a cute visualization here and here.
For RNNs, a really simple problem they can solve is binary addition, which requires memorizing 4 patterns.
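The 2-D Perceptron really is only a few lines. Below is an illustrative numpy sketch (the tiny hand-made linearly separable dataset is my own; the update rule is the textbook one): by Novikoff's theorem it converges in finitely many updates on separable data.

```python
import numpy as np

# Tiny linearly separable 2-D dataset, labels in {-1, +1}.
X = np.array([[2., 1.], [3., 2.], [2., 3.], [-1., -1.], [-2., 0.], [0., -2.]])
y = np.array([1, 1, 1, -1, -1, -1])

w = np.zeros(2)
b = 0.0
for _ in range(100):                 # converges long before this cap
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:   # misclassified (or on the boundary)
            w += yi * xi             # classic Perceptron update
            b += yi
            errors += 1
    if errors == 0:                  # a full clean pass: done
        break

print(np.sign(X @ w + b))   # matches y once converged
```

On non-separable data the same loop would cycle forever (hence the epoch cap), which is exactly point 2) above.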
14,806 | Iconic (toy) models of neural networks

The XOR problem is probably the canonical ANN toy problem.
Richard Bland, "Learning XOR: exploring the space of a classic problem," University of Stirling, Department of Computing Science and Mathematics, Computing Science Technical Report, June 1998.
The TensorFlow Playground is an interactive interface to several toy neural nets, including XOR and Jellyroll.
Computing the largest eigenvalue of a fixed-size (2x2 or 3x3) symmetric matrix is one I use in classroom demos.
A. Cichocki and R. Unbehauen, "Neural networks for computing eigenvalues and eigenvectors," Biological Cybernetics, December 1992, Volume 68, Issue 2, pp. 155-164.
Problems like MNIST are definitely canonical but aren't easily verified by hand -- unless you happen to have an enormous amount of free time. Nor is the code especially basic.
As far as NLP tasks go, the Penn Treebank is a very standard benchmark data set, used for example in Wojciech Zaremba, Ilya Sutskever & Oriol Vinyals, "Recurrent Neural Network Regularization," and probably hundreds of other papers.
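An XOR network is small enough that its weights can be written down by hand. A numpy sketch (my own choice of weights, one of many that work): one hidden unit computes OR, one computes AND, and the output fires when OR is on but AND is off.

```python
import numpy as np

step = lambda z: (z > 0).astype(float)   # hard-threshold activation

# Hand-crafted weights: hidden unit 1 = OR(x1, x2), hidden unit 2 = AND(x1, x2);
# output = step(OR - 2*AND - 0.5), i.e. XOR.
W1 = np.array([[1., 1.], [1., 1.]])
b1 = np.array([-0.5, -1.5])
w2 = np.array([1., -2.])
b2 = -0.5

def xor_net(x):
    h = step(x @ W1 + b1)       # hidden layer: [OR, AND]
    return step(h @ w2 + b2)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
print(xor_net(X))   # [0. 1. 1. 0.]
```

No single-layer network (like the Perceptron above) can represent this function, which is why XOR is the canonical motivation for hidden layers.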
14,807 | What is the Method of Moments and how is it different from MLE?

What is the method of moments?
There is a nice article about this on Wikipedia.
https://en.m.wikipedia.org/wiki/Method_of_moments_(statistics)
It means that you estimate the population parameters by selecting the parameters such that the population distribution has moments equal to the observed moments in the sample.
How is it different from MLE?
The maximum likelihood estimate maximizes the likelihood function. In some cases this maximum can be expressed in terms of setting the population parameters equal to the sample parameters.
E.g., when estimating the mean parameter of a distribution with MLE, we often end up using $\mu = \bar{x}$. However, this does not always need to be the case (related: https://stats.stackexchange.com/a/317631/164061 - although in the case of the example there, the Poisson distribution, the MLE and MoM estimates coincide, and the same is true for many others). For example, the MLE solution for the estimate of $\mu$ in a log-normal distribution is:
$$\mu = \frac{1}{n} \sum \ln x_i = \overline{\ln x}$$
whereas the MoM solution solves
$$\exp\left(\mu + \frac{1}{2}\sigma^2\right) = \bar{x},$$ leading to
$$\mu = \ln(\bar{x}) - \frac{1}{2}\sigma^2.$$
So the MoM is a practical way to estimate the parameters, often leading to the exact same result as the MLE (since the moments of the sample often coincide with the moments of the population; e.g., a sample mean is distributed around the population mean, and up to some factor/bias it works out very well). The MLE has a stronger theoretical foundation and, for instance, allows estimation of errors using the Fisher matrix (or estimates of it), and it is a much more natural approach in the case of regression problems. (I haven't tried it, but I guess that MoM for solving parameters in a simple linear regression does not work easily and may give bad results. In the answer by superpronker, it seems like this is done by some minimization of a function. For MLE this minimization expresses higher probability, but I wonder whether it represents a similar thing for MoM.)
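The log-normal comparison above can be checked numerically. A short Python sketch (the true parameters and sample size are arbitrary choices): the MLE of $\mu$ is the mean of the logs, while the MoM solves the first two moment equations, giving similar but not identical estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.5, 0.5
x = rng.lognormal(mean=mu, sigma=sigma, size=200_000)

# MLE: the log-normal likelihood is maximized by the mean of the logs.
mu_mle = np.log(x).mean()

# MoM: solve E[X] = exp(mu + sigma^2/2) and E[X^2] = exp(2 mu + 2 sigma^2)
# using the first two sample moments.
m1, m2 = x.mean(), (x**2).mean()
sigma2_mom = np.log(m2 / m1**2)
mu_mom = np.log(m1) - sigma2_mom / 2

print(mu_mle, mu_mom)   # both close to the true mu = 0.5, but not identical
```

With this sample size both estimators land near the truth; the point is that they are answers to different estimating equations, not that one is wrong.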
14,808 | What is the Method of Moments and how is it different from MLE?

In MoM, the estimator is chosen so that some function has (conditional) expectation equal to zero, e.g. $E[g(y,x,\theta)] = 0$. Often the expectation is conditional on $x$. Typically, this is converted to a problem of minimizing a quadratic form in these expectations with a weight matrix.
In MLE, the estimator maximizes the log-likelihood function.
As a broad generalization, MLE makes stricter assumptions (the full density) and is thus typically less robust, but more efficient if the assumptions are met (it achieves the Cramér-Rao lower bound on asymptotic variance).
In some cases the two coincide, OLS being one notable example where the analytic solution is identical and hence the estimator behaves in the same way.
In some sense, you can think of an MLE (in almost all cases) as an MoM estimator, because the estimator sets the expected value of the gradient of the log-likelihood function equal to zero. In that sense, there are cases where the density is incorrect but the MLE is still consistent because the first-order conditions are still satisfied. Then the MLE is referred to as "quasi-ML".
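The OLS case can be made concrete with a short Python sketch on simulated data (my own construction): the sample analogue of the moment condition $E[x(y - x'\beta)] = 0$ is exactly the normal equations, and under Gaussian errors the MLE first-order conditions for $\beta$ are the same, so the two estimators coincide.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one regressor
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(size=n)

# Sample analogue of E[x (y - x'beta)] = 0:
# X'(y - X beta) = 0, i.e. the normal equations X'X beta = X'y.
beta_mom = np.linalg.solve(X.T @ X, X.T @ y)

# Under Gaussian errors, the log-likelihood's gradient in beta is
# proportional to X'(y - X beta), so the MLE solves the same equations.
resid = y - X @ beta_mom
print(beta_mom)
print(X.T @ resid)   # ~ zero: the sample moment condition holds exactly
```

The quadratic-form machinery (GMM with a weight matrix) only matters when there are more moment conditions than parameters; here the system is exactly identified and solves exactly.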
14,809 | What is the Method of Moments and how is it different from MLE?

Sorry, I can't post comments.

MLE makes stricter assumptions (the full density) and is thus typically less robust but more efficient if the assumptions are met

Actually, at MITx "Fundamentals of Statistics" we are taught the opposite: that MoM relies on the specific equations of the moments, so if we pick the wrong density we go totally wrong, while MLE is more resilient, as in all cases we minimize the KL divergence.
14,810 | Why are p-values often higher in a Cox proportional hazard model than in logistic regression?

The logistic regression model assumes the response is a Bernoulli trial (or more generally binomial, but for simplicity we'll keep it 0-1). A survival model typically assumes the response is a time to event (again, there are generalizations of this that we'll skip). Another way to put it is that units pass through a series of values until an event occurs. It isn't that a coin is actually discretely flipped at each point. (That could happen, of course, but then you would need a model for repeated measures - perhaps a GLMM.)
Your logistic regression model treats each death as a coin flip that occurred at that age and came up tails. Likewise, it treats each censored datum as a single coin flip that occurred at the specified age and came up heads. The problem is that this is inconsistent with what the data really are.
Here are some plots of the data and the output of the models. (Note that I flip the predictions from the logistic regression model to predicting being alive, so that the line matches the conditional density plot.)
library(survival)
data(lung)
s = with(lung, Surv(time=time, event=status-1))
summary(sm <- coxph(s~age, data=lung))
# Call:
# coxph(formula = s ~ age, data = lung)
#
# n= 228, number of events= 165
#
# coef exp(coef) se(coef) z Pr(>|z|)
# age 0.018720 1.018897 0.009199 2.035 0.0419 *
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# exp(coef) exp(-coef) lower .95 upper .95
# age 1.019 0.9815 1.001 1.037
#
# Concordance= 0.55 (se = 0.026 )
# Rsquare= 0.018 (max possible= 0.999 )
# Likelihood ratio test= 4.24 on 1 df, p=0.03946
# Wald test = 4.14 on 1 df, p=0.04185
# Score (logrank) test = 4.15 on 1 df, p=0.04154
lung$died = factor(ifelse(lung$status==2, "died", "alive"), levels=c("died","alive"))
summary(lrm <- glm(status-1~age, data=lung, family="binomial"))
# Call:
# glm(formula = status - 1 ~ age, family = "binomial", data = lung)
#
# Deviance Residuals:
# Min 1Q Median 3Q Max
# -1.8543 -1.3109 0.7169 0.8272 1.1097
#
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -1.30949 1.01743 -1.287 0.1981
# age 0.03677 0.01645 2.235 0.0254 *
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# (Dispersion parameter for binomial family taken to be 1)
#
# Null deviance: 268.78 on 227 degrees of freedom
# Residual deviance: 263.71 on 226 degrees of freedom
# AIC: 267.71
#
# Number of Fisher Scoring iterations: 4
windows()
plot(survfit(s~1))
windows()
par(mfrow=c(2,1))
with(lung, spineplot(age, as.factor(status)))
with(lung, cdplot(age, as.factor(status)))
lines(40:80, 1-predict(lrm, newdata=data.frame(age=40:80), type="response"),
col="red")
It may be helpful to consider a situation in which the data were appropriate for either a survival analysis or a logistic regression. Imagine a study to determine the probability that a patient will be readmitted to the hospital within 30 days of discharge under a new protocol or standard of care. However, all patients are followed to readmission, and there is no censoring (this isn't terribly realistic), so the exact time to readmission could be analyzed with survival analysis (viz., a Cox proportional hazards model here). To simulate this situation, I'll use exponential distributions with rates .5 and 1, and use the value 1 as a cutoff to represent 30 days:
set.seed(0775) # this makes the example exactly reproducible
t1 = rexp(50, rate=.5)
t2 = rexp(50, rate=1)
d = data.frame(time=c(t1,t2),
group=rep(c("g1","g2"), each=50),
event=ifelse(c(t1,t2)<1, "yes", "no"))
windows()
plot(with(d, survfit(Surv(time)~group)), col=1:2, mark.time=TRUE)
legend("topright", legend=c("Group 1", "Group 2"), lty=1, col=1:2)
abline(v=1, col="gray")
with(d, table(event, group))
# group
# event g1 g2
# no 29 22
# yes 21 28
summary(glm(event~group, d, family=binomial))$coefficients
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -0.3227734 0.2865341 -1.126475 0.2599647
# groupg2 0.5639354 0.4040676 1.395646 0.1628210
summary(coxph(Surv(time)~group, d))$coefficients
# coef exp(coef) se(coef) z Pr(>|z|)
# groupg2 0.5841386 1.793445 0.2093571 2.790154 0.005268299
In this case, we see that the p-value from the logistic regression model (0.163) was higher than the p-value from a survival analysis (0.005). To explore this idea further, we can extend the simulation to estimate the power of a logistic regression analysis vs. a survival analysis, and the probability that the p-value from the Cox model will be lower than the p-value from the logistic regression. I'll also use 1.4 as the threshold, so that I don't disadvantage the logistic regression by using a suboptimal cutoff:
xs = seq(.1,5,.1)
xs[which.max(pexp(xs,1)-pexp(xs,.5))] # 1.4
set.seed(7458)
plr = vector(length=10000)
psv = vector(length=10000)
for(i in 1:10000){
t1 = rexp(50, rate=.5)
t2 = rexp(50, rate=1)
d = data.frame(time=c(t1,t2), group=rep(c("g1", "g2"), each=50),
event=ifelse(c(t1,t2)<1.4, "yes", "no"))
plr[i] = summary(glm(event~group, d, family=binomial))$coefficients[2,4]
psv[i] = summary(coxph(Surv(time)~group, d))$coefficients[1,5]
}
## estimated power:
mean(plr<.05) # [1] 0.753
mean(psv<.05) # [1] 0.9253
## probability that p-value from survival analysis < logistic regression:
mean(psv<plr) # [1] 0.8977
So the power of the logistic regression is lower (about 75%) than that of the survival analysis (about 93%), and 90% of the p-values from the survival analysis were lower than the corresponding p-values from the logistic regression. Taking the exact times into account, instead of just whether they fall above or below some threshold, does yield more statistical power, as you had intuited.
xs = seq(.1,5,.1)
xs[which.max(pexp(xs,1)-pexp(xs,.5))] # 1.4
set.seed(7458)
plr = vector(length=10000)
psv = vector(length=10000)
for(i in 1:10000){
t1 = rexp(50, rate=.5)
t2 = rexp(50, rate=1)
d = data.frame(time=c(t1,t2), group=rep(c("g1", "g2"), each=50),
event=ifelse(c(t1,t2)<1.4, "yes", "no"))
plr[i] = summary(glm(event~group, d, family=binomial))$coefficients[2,4]
psv[i] = summary(coxph(Surv(time)~group, d))$coefficients[1,5]
}
## estimated power:
mean(plr<.05) # [1] 0.753
mean(psv<.05) # [1] 0.9253
## probability that p-value from survival analysis < logistic regression:
mean(psv<plr) # [1] 0.8977
So the power of the logistic regression is lower (about 75%) than that of the survival analysis (about 93%), and 90% of the p-values from the survival analysis were lower than the corresponding p-values from the logistic regression. Taking the exact times into account, instead of just whether they fall below or above some threshold, does yield more statistical power, as you had intuited.
|
Why are p-values often higher in a Cox proportional hazard model than in logistic regression?
|
14,811
|
What is Thompson Sampling in layman's terms?
|
I am going to try to give an explanation without any mathematics. Part of this answer is repeated from some points I made in an answer to another question on MAB problems.
The strategic trade-off in multi-arm bandit problems: In multi-arm bandit problems the gambler plays one "bandit" each round and attempts to maximise his total expected return over a given number of rounds. The expected return of each of the bandits is described by some unknown parameters in the problem, and so as we observe more outcomes in each round, we get more information about these unknown parameters, and hence, about the expected return of each of the bandits. In each round of play (except the last), the MAB problem involves a strategic trade-off by the gambler between two objectives:
Immediate rewards: In each round he would like to choose a distribution that gives him a high expected reward on this round, which entails a preference for distributions he (presently) infers to have a high mean reward;
Future rewards (affected by information gain): On the other hand, he wants to refine his knowledge of the true expected rewards by gaining more information on the distributions (especially those that he has not played as much as others), so that he can improve his choices in future rounds.
The relative importance of these two things will determine the trade-off, and this relative importance is affected by a number of factors. For example, if there is only a small number of remaining rounds in the problem then inference for future trials is relatively less valuable, whereas if there is a large number of remaining rounds then inference for future rewards is relatively more valuable. So the gambler needs to consider how much he wants to focus on maximising the immediate rewards in the current round, and how much he wants to deviate from this, to learn more about the unknown parameters that determine the expected reward of each of the bandits.
Thompson sampling: The basic idea of Thompson sampling is that in each round, we take our existing knowledge of the machines, which is in the form of a posterior belief about the unknown parameters, and we "sample" the parameters from this posterior distribution. This sampled parameter yields a set of expected rewards for each machine, and now we bet on the one with the highest expected return, under that sampled parameter.
Prima facie, the Thompson sampling scheme seems to involve an attempt to maximise the immediate expected return in each round (since it involves this maximisation step after sampling the parameter). However, because it involves random sampling of the parameter from the posterior, the scheme involves an implicit variation of maximising the present reward, versus searching for more information. Most of the time we will get a parameter "sample" that is somewhere in the main part of the posterior, and the choice of machine will roughly approximate maximisation of the immediate reward. However, sometimes we will randomly sample a parameter value that is far in the tails of the posterior distribution, and in that case we will end up choosing a machine that does not maximise the immediate reward - i.e., this will constitute more of a "search" to assist with future rewards.
The Thompson scheme also has the nice property that we tend to decrease our "search" as we get more information, and this mimics the desirable strategic trade-off in the problem, where we want to focus less on searching as we obtain more information. As we play more and more rounds and gather more and more data, the posterior converges closer to the true parameter values, and so the random "sampling" in the Thompson scheme becomes more tightly packed around the parameter values that lead to maximisation of the immediate reward. Hence, there is an implicit tendency of this scheme to be more "search-oriented" early on, with little information, and less "search-oriented" later on, when there is a lot of data.
Now, having said this, one clear drawback of the Thompson sampling scheme is that it does not take into account the number of rounds remaining in the MAB problem. This scheme is sometimes formulated on the basis of a game with infinite rounds, and in this case that is not an issue. However, in MAB problems with finite rounds, it is preferable to take account of the number of remaining rounds in order to decrease the "search" as the number of future rounds decreases. (And in particular, the optimal play in the last round is to ignore searches completely and just bet on the bandit with the highest posterior expected return.) The Thompson scheme does not do this, so it will play finite-round games in a way that is clearly sub-optimal in certain cases.
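To make the scheme concrete, here is a minimal sketch of Thompson sampling for Bernoulli bandits with Beta(1, 1) priors (in Python rather than R; the arm success rates, round count, and seed are all made up for illustration):

```python
import random

def thompson_bandit(true_rates, n_rounds, seed=0):
    """Thompson sampling for Bernoulli bandits with Beta(1, 1) priors."""
    rng = random.Random(seed)
    k = len(true_rates)
    wins = [0] * k    # observed successes per arm
    losses = [0] * k  # observed failures per arm
    total_reward = 0
    for _ in range(n_rounds):
        # Sample one plausible success rate per arm from its posterior...
        samples = [rng.betavariate(wins[i] + 1, losses[i] + 1) for i in range(k)]
        # ...then play the arm whose sampled rate is highest.
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_rates[arm] else 0
        wins[arm] += reward
        losses[arm] += 1 - reward
        total_reward += reward
    return wins, losses, total_reward

wins, losses, reward = thompson_bandit([0.3, 0.5, 0.7], n_rounds=2000)
pulls = [w + l for w, l in zip(wins, losses)]
print(pulls, reward)  # the 0.7 arm should receive most of the pulls
```

Early on, the wide Beta(1, 1) posteriors make every arm plausible (lots of "search"); as counts accumulate, the samples concentrate and the best arm dominates.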
|
14,812
|
What is Thompson Sampling in layman's terms?
|
I will give it a shot and I hope you like it! There are some formulas below which might scare you off. I hope not, because I will do my best to explain them in the simplest way I can.
These are the two formulas:
The likelihood: $P(r|\theta,a,x)$
And the posterior: $P(\theta|D)$
TL;DR
Thompson Sampling lets you
Choose a random model parameter from all the model parameters that you think are possible.
Act once according to that particular model parameter.
Observe the reward you get with that particular model parameter.
Learn from this new experience and update your belief about the possible model parameters.
Likelihood??
The likelihood is something that defines how likely things are. In this case the likelihood says how likely it is that we get reward $r$ if we play action $a$ in context $x$. For example, if it is raining (context!) and you take an umbrella (action!) you stay dry (reward! :) ). On the other hand, if it is not raining (context!) and you take an umbrella (action!) you have to carry extra weight (negative reward! :( ). So the likelihood is the central thing that you want to understand. If you know everything about the likelihood, it is easy to act optimally.
What about that strange circle??
As you might have noticed, I did not write anything about that strange circle $\theta$, which is called theta. (Mathematicians have a habit of indicating which parts are the hardest by giving them Greek letters, making it even harder to understand.) This $\theta$ represents the model parameters. These parameters are used when the relationship between the context+actions and the reward is more complicated. For example, a model parameter might be how much your reward goes down if 1mm of rain falls on top of your head. Another model parameter might state how much your reward goes down if you take an umbrella. I just said that the likelihood is the central thing you want to understand; and central to the likelihood are the model parameters. If you know the model parameters $\theta$, you know how context+actions relate to reward, and it is easy to act optimally.
So how do we get to know these model parameters such that I can get maximum reward??
That is the essential question of the multi-armed bandit problem. Actually, it has two parts. You want to get to know the model parameters precisely by exploring all different kinds of actions in different contexts. But if you already know which action is good for a specific context, you want to exploit that action and get as much reward as possible. So if you are uncertain about your model parameters $\theta$, you might want to do some extra exploration. If you are pretty sure about your model parameters $\theta$, you are also pretty sure which action to take. This is known as the exploration versus exploitation trade-off.
You haven't said anything about this posterior
Key to this optimal behaviour is your (un)certainty about the model parameters $\theta$. And the posterior says exactly that: given all the previous rewards we got from previous actions in previous contexts, how much do you know about $\theta$? For example, if you have never been outside, you do not know how unhappy you become when rain falls on your head. In other words, you are very uncertain about the unhappiness-when-rain-on-head model parameter. If you have been out in the rain a few times, with and without an umbrella, you can start to learn something about this obscure model parameter.
Now what does Thompson Sampling suggest to do with all these uncertainties??
Thompson Sampling suggests something very simple: just pick a random model parameter from your posterior, take an action, and observe what happens. For example, when you have never been outside before, the unhappiness-when-rain-on-head parameter can be anything. So we just pick one; we assume that we get really unhappy when rain falls on our head. We see that it is raining (context), so we take an umbrella (action) because our model parameter tells us that this is how we get the maximum reward. And indeed, you observe that you get slightly grumpy from walking in the rain with an umbrella, but not really unhappy. We learn from this that rain+umbrella is grumpy. Next time it rains, you again pick a random belief about what happens when rain falls on your head. This time it might be that it doesn't bother you at all. However, once you are halfway to your destination you are wringing wet, and you learn that rain without an umbrella is really, really bad. This reduces your uncertainty about unhappiness-when-rain-on-head, because now you know it is probably high.
This sounds so simple!!
Yep, it is not that complex. The difficult part is sampling from the model parameter posterior. Getting and maintaining a distribution over all your model parameters that is also appropriate for your specific problem is hard. But... it is definitely doable :).
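To make the rain-and-umbrella story a bit more concrete, here is one possible sketch of the posterior bookkeeping in Python, where the unhappiness-when-rain-on-head parameter is reduced to a single soaking probability tracked with a Beta posterior (the 0.8 true rate and the 0.5 decision threshold are invented for illustration):

```python
import random

rng = random.Random(1)

# Posterior over the chance of getting soaked without an umbrella,
# tracked as Beta(soaked + 1, stayed_dry + 1). All numbers are made up.
soaked, stayed_dry = 0, 0

for outing in range(30):
    # Pick one random belief about the soaking probability...
    belief = rng.betavariate(soaked + 1, stayed_dry + 1)
    # ...and act on it: carry an umbrella only if we believe soaking is likely.
    take_umbrella = belief > 0.5
    if not take_umbrella:
        # Going out unprotected is the only way we learn (true rate 0.8 here).
        got_soaked = rng.random() < 0.8
        soaked += got_soaked
        stayed_dry += 1 - got_soaked

# As evidence accumulates, the posterior mean drifts toward the true rate.
posterior_mean = (soaked + 1) / (soaked + stayed_dry + 2)
print(soaked, stayed_dry, round(posterior_mean, 2))
```

Note the exploration/exploitation behaviour falls out automatically: while the posterior is wide, some sampled beliefs are low and we go out unprotected (and learn); once we are confident soaking is likely, we mostly take the umbrella.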
|
14,813
|
What is the difference between feature selection and dimensionality reduction?
|
The difference is that the set of features produced by feature selection must be a subset of the original feature set, while the set produced by dimensionality reduction doesn't have to be (for instance, PCA reduces dimensionality by making new synthetic features from linear combinations of the original ones, and then discarding the less important ones).
In this sense, feature selection is a special case of dimensionality reduction.
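The distinction can be shown in a few lines (illustrative Python; the projection weights below are made up rather than fitted principal components):

```python
# Feature selection keeps a subset of the original columns;
# dimensionality reduction (e.g. PCA) builds new columns as linear
# combinations of all of them. Toy data with 3 features per row:
rows = [[2.0, 0.1, 1.0],
        [4.0, 0.2, 2.0],
        [6.0, 0.3, 3.0]]

# Feature selection: keep columns 0 and 2 -- the values are untouched.
selected = [[r[0], r[2]] for r in rows]

# A PCA-style projection: each new feature mixes *all* the originals.
# (These weights are illustrative, not an actual principal component.)
weights = [0.8, 0.1, 0.6]
projected = [[sum(w * x for w, x in zip(weights, r))] for r in rows]

print(selected[0])                            # [2.0, 1.0] -- original values survive
print([round(v, 2) for v in projected[0]])    # [2.21] -- a synthetic value
```

The selected features are still interpretable as the original measurements; the projected feature is a new synthetic quantity that matches no original column.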
|
14,814
|
Do coefficients of logistic regression have a meaning?
|
The coefficients from the output do have a meaning, although it isn't very intuitive to most people and certainly not to me. That is why people change them to odds ratios. However, the log of the odds ratio is the coefficient; equivalently, the exponentiated coefficients are the odds ratios.
The coefficients are most useful for plugging into formulas that give predicted probabilities of being in each level of the dependent variable.
e.g. in R
library("MASS")
data(menarche)
glm.out = glm(cbind(Menarche, Total-Menarche) ~ Age,
family=binomial(logit), data=menarche)
summary(glm.out)
The parameter estimate for age is 1.64. What does this mean? Well, if you combine it with the parameter estimate for the intercept (-21.24), you get a formula predicting the probability of menarche:
$P(M) = \frac{1}{1 + e^{21.24 - 1.64 \times \text{age}}}$
but that formula (even with just one variable!) doesn't give much of a sense of how age is related to menarche. If we use the odds ratio instead (which is $e^{1.64} = 5.16$), that means that, for each additional year of age, the odds of menarche are 5.16 times as big (not exactly 5.16 times as likely, but that interpretation is often used).
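To see the two interpretations side by side, here is a quick check of the quoted numbers (illustrative Python rather than R; the coefficients are the ones from the fit above):

```python
import math

# Coefficients from the R fit quoted above.
intercept, b_age = -21.24, 1.64

def p_menarche(age):
    """Predicted probability from the logistic model."""
    return 1 / (1 + math.exp(-(intercept + b_age * age)))

# The odds ratio per extra year of age is the exponentiated coefficient:
odds_ratio = math.exp(b_age)
print(round(odds_ratio, 2))  # 5.16

# It is the odds (not the probability) that multiply by that factor:
odds_13 = p_menarche(13) / (1 - p_menarche(13))
odds_14 = p_menarche(14) / (1 - p_menarche(14))
print(round(odds_14 / odds_13, 2))  # 5.16
```

The ratio of odds at any two ages one year apart is exactly $e^{1.64}$, which is why the exponentiated coefficient has such a clean interpretation while the raw coefficient does not.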
|
14,815
|
Do coefficients of logistic regression have a meaning?
|
Interpreting directly the coefficients is difficult and can be misleading. You have no guarantees on how weights are assigned among the variables.
Quick example, similar to the situation you describe: I worked on a model of users' interaction with a website. That model included two variables representing the number of "clicks" during the first hour and during the second hour of a user session. These variables are highly correlated with each other. If both coefficients for those variables were positive, we could easily mislead ourselves into believing that the higher coefficient indicates "higher" importance. However, by adding/removing other variables we could easily end up with a model where the first variable had a positive sign and the other a negative one. The conclusion we reached was that, since there were significant (albeit low) correlations between most pairs of the available variables, we couldn't draw any secure conclusion about the importance of the variables using the coefficients (happy to learn from the community if this interpretation is correct).
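A toy version of this sign flip can be constructed deterministically. The sketch below uses ordinary least squares rather than logistic regression (the mechanics of correlated predictors are the same), with made-up "clicks in hour 1 / hour 2" data built so that y = 2*x2 - x1 exactly:

```python
# Two highly correlated predictors (made-up click counts), with the
# response built so that y = 2*x2 - x1 exactly.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.1, 1.9, 3.2, 3.8]
y = [2 * b - a for a, b in zip(x1, x2)]

def center(v):
    m = sum(v) / len(v)
    return [vi - m for vi in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

cx1, cx2, cy = center(x1), center(x2), center(y)

# Regressed alone, x1 predicts y with a *positive* slope...
slope_alone = dot(cx1, cy) / dot(cx1, cx1)

# ...but in the joint model (2x2 normal equations) its sign flips.
sxx, szz, sxz = dot(cx1, cx1), dot(cx2, cx2), dot(cx1, cx2)
sxy, szy = dot(cx1, cy), dot(cx2, cy)
det = sxx * szz - sxz ** 2
b1 = (sxy * szz - sxz * szy) / det
b2 = (sxx * szy - sxz * sxy) / det

print(round(slope_alone, 2), round(b1, 2), round(b2, 2))  # 0.88 -1.0 2.0
```

Because x1 and x2 move almost in lockstep, x1 looks like a positive predictor on its own, yet once x2 is in the model x1's coefficient is negative: the coefficient measures its effect holding x2 fixed, not its marginal importance.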
If you want a model that is easier to interpret, one idea is to use the Lasso (minimisation of the L1 norm). That leads to sparse solutions where the retained variables are less correlated with each other. However, that approach wouldn't easily pick both variables of the previous example - one would be zero-weighted.
If you just want to assess the importance of specific variables, or sets of variables, I would recommend using a feature selection approach directly. Such approaches lead to much more meaningful insights, and even global rankings of the importance of the variables based on some criterion.
|
14,816
|
Do coefficients of logistic regression have a meaning?
|
The coefficients most certainly have a meaning. Many software packages can report them in either of two forms. For example, in Stata one can use either the logistic command or the logit command: logit reports coefficients on the log-odds scale, while logistic reports odds ratios.
You may find that one is much more meaningful to you than the other.
About your question that "...coefficients seem to depend sensitivity...".
Are you saying that the results depend on what variables you put in the model?
If so, yes, this is a fact of life when doing regression analysis. The reason for this is that regression analysis is looking at a bunch of numbers and crunching them in an automated way.
The results depend on how the variables are related to each other and on what variables are not measured. It is as much an art as it is a science.
Furthermore, if the model has too many predictors compared to the sample size, the signs can flip around in a crazy way - I think of this as saying that the model is using variables that have a small effect to "adjust" its estimates of those that have a big effect (like a small volume knob to make small calibrations). When this happens, I tend not to trust the variables with small effects.
On the other hand, it may be that signs initially change, when you add new predictors, because you are getting closer to the causal truth.
For example, let's imagine that Greenland Brandy might be bad for one's health but income is good for one's health. If income is omitted, and richer people drink Brandy, then the model may "pick up" the omitted income influence and "say" that the alcohol is good for your health.
Have no doubt about it, it is a fact of life that coefficients depend on the other variables that are included. To learn more, look into "omitted variable bias" and "spurious relationship". If you have not encountered these ideas before, try to find introduction to statistics courses that meet your needs - this can make a huge difference in doing the models.
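The brandy-and-income story can be checked with a quick simulation. The sketch below uses made-up numbers (and, for simplicity, an ordinary linear regression rather than a logistic one, though the same logic applies):

```python
import numpy as np

# Toy simulation of the story above (all numbers made up):
# income helps health, brandy hurts it, and richer people drink more brandy.
rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(size=n)
brandy = 0.8 * income + rng.normal(size=n)
health = 1.0 * income - 0.3 * brandy + rng.normal(size=n)

# Brandy alone picks up the omitted income effect and looks "healthy"
b_alone = np.polyfit(brandy, health, 1)[0]

# Adding income flips the sign to brandy's true (negative) effect
design = np.column_stack([brandy, income, np.ones(n)])
b_adjusted = np.linalg.lstsq(design, health, rcond=None)[0][0]
print(b_alone > 0, b_adjusted < 0)   # True True
```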
|
14,817
|
Does log likelihood in GLM have guaranteed convergence to global maxima?
|
The definition of exponential family is:
$$
p(x|\theta) = h(x)\exp(\theta^T\phi(x) - A(\theta)),
$$
where $A(\theta)$ is the log partition function. Now one can prove that the following three things hold in the 1D case (and they generalize to higher dimensions; see the properties of exponential families and of the log partition function):
$ \frac{dA}{d\theta} = \mathbb{E}[\phi(x)]$
$ \frac{d^2A}{d\theta^2} = \mathbb{E}[\phi^2(x)] -\mathbb{E}[\phi(x)]^2 = {\rm var}(\phi(x)) $
$ \frac{\partial^2 A}{\partial\theta_i\partial\theta_j} = \mathbb{E}[\phi_i(x)\phi_j(x)] - \mathbb{E}[\phi_i(x)]\,\mathbb{E}[\phi_j(x)] = {\rm cov}(\phi_i(x),\phi_j(x)) \Rightarrow \nabla^2 A(\theta) = {\rm cov}(\phi(x))$
The above results prove that $A(\theta)$ is convex (since ${\rm cov}(\phi(x))$ is positive semidefinite). Now we take a look at the likelihood function for MLE:
\begin{align}
p(\mathcal{D}|\theta) &= \bigg[\prod_{i=1}^{N}{h(x_i)}\bigg]\ \exp\!\big(\theta^T[\sum_{i=1}^{N}\phi(x_i)] - NA(\theta)\big) \\
\log\!\big(p(\mathcal{D}|\theta)\big) &= \theta^T\bigg[\sum_{i=1}^{N}\phi(x_i)\bigg] - NA(\theta) + \sum_{i=1}^{N}\log h(x_i) \\
&= \theta^T[\phi(\mathcal{D})] - NA(\theta) + \mathrm{const}
\end{align}
Now $\theta^T[\phi(\mathcal{D})]$ is linear in $\theta$ and $-A(\theta)$ is concave, so the log-likelihood is concave (the remaining term does not depend on $\theta$). Therefore any local maximum is a global maximum, and it is unique whenever $A$ is strictly convex.
There is a generalized version called curved exponential family which would also be similar. But most of the proofs are in canonical form.
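The 1D claim is easy to verify numerically. Below is a minimal sketch for the Poisson case (canonical parameter $\theta = \log\lambda$, so $\phi(x)=x$ and $A(\theta)=e^\theta$) with simulated data:

```python
import numpy as np

# Poisson in canonical form: phi(x) = x, A(theta) = exp(theta).
# Log-likelihood up to a theta-independent constant:
#   l(theta) = theta * sum(x) - N * exp(theta)
rng = np.random.default_rng(0)
x = rng.poisson(3.0, size=50)

thetas = np.linspace(-1.0, 2.5, 400)
loglik = thetas * x.sum() - x.size * np.exp(thetas)

# Second differences of a concave function are negative everywhere
second_diff = np.diff(loglik, 2)
print(bool(np.all(second_diff < 0)))   # True

# The maximiser solves sum(x) = N * exp(theta), i.e. theta = log(mean(x))
theta_hat = thetas[np.argmax(loglik)]
print(bool(np.isclose(theta_hat, np.log(x.mean()), atol=0.02)))
```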
|
14,818
|
Does log likelihood in GLM have guaranteed convergence to global maxima?
|
I investigated this heavily during my thesis. The answer is that the GLM likelihood is not always convex; it is only convex under the right assumptions. A very good investigation of this was made by Wedderburn in his paper "On the Existence and Uniqueness of the Maximum Likelihood Estimates for Certain Generalized Linear Models" (Biometrika, 1976), which can be found at https://www.jstor.org/stable/2335080
|
14,819
|
When someone says residual deviance/df should ~ 1 for a Poisson model, how approximate is approximate?
|
10 is large... 1.01 is not. Since the variance of a $\chi^2_k$ is $2k$ (see Wikipedia), the standard deviation of a $\chi^2_k$ is $\sqrt{2k}$, and that of $\chi^2_k/k$ is $\sqrt{2/k}$. That's your measuring stick: for $\chi^2_{100}$, 1.01 is not large, but 2 is large (7 s.d.s away). For $\chi^2_{10,000}$, 1.01 is OK, but 1.1 is not (7 s.d.s away).
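That measuring stick takes one line to code. A small helper (hypothetical name) reproducing the numbers above:

```python
import math

# Under H0, deviance/df has mean 1 and sd sqrt(2/df),
# so express the departure from 1 in units of that sd.
def z_score(ratio, df):
    return (ratio - 1.0) / math.sqrt(2.0 / df)

print(round(z_score(2.0, 100), 1))      # 7.1 sd: clearly large
print(round(z_score(1.01, 100), 2))     # 0.07 sd: fine
print(round(z_score(1.1, 10_000), 1))   # 7.1 sd: large at big df
```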
|
14,820
|
When someone says residual deviance/df should ~ 1 for a Poisson model, how approximate is approximate?
|
Asymptotically the deviance should be chi-square distributed with mean equal to the degrees of freedom. So divide it by its degrees of freedom & you should get about 1 if the data is not over-dispersed. To get a proper test just look up the deviance in chi-square tables - but note (a) that the chi square distribution is an approximation & (b) that a high value can indicate other kinds of lack of fit (which is perhaps why 'around 1' is considered good enough for government work).
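The "table lookup" is one line in software. A sketch with made-up numbers (residual deviance 120 on 100 df), using scipy:

```python
from scipy.stats import chi2

# Hypothetical fit: residual deviance 120 on 100 degrees of freedom
deviance, df = 120.0, 100
p = chi2.sf(deviance, df)   # upper-tail probability, i.e. the table lookup
print(p > 0.05)             # here: no strong evidence of misfit
```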
|
14,821
|
How to run two-way ANOVA on data with neither normality nor equality of variance in R?
|
This may be more of a comment than an answer, but it won't fit as a comment. We may be able to help you here, but this may take a few iterations; we need more information.
First, what is your response variable?
Second, note that the marginal distribution of your response does not have to be normal, rather the distribution conditional on the model (i.e., the residuals) should be--it is not clear that you have examined your residuals. Furthermore, normality is the least important assumption of a linear model (e.g., an ANOVA); the residuals may not need to be perfectly normal. Tests of normality are not generally worthwhile (see here for a discussion on CV), plots are much better. I would try a qq-plot of your residuals. In R this is done with qqnorm(), or try qqPlot() in the car package. It's also worth considering the manner in which the residuals are non-normal: skewness is more damaging than excess kurtosis, in particular if the skews alternate directions amongst the groups.
If there really is a problem worth worrying about, a transformation is a good strategy. Taking the log of your raw data is one option, but not the only one. Note that centering and standardizing aren't really transformations in this sense. You want to look into the Box & Cox family of power transformations. And remember, the result doesn't have to be perfectly normal, just good enough.
Next, I don't follow your use of the chi-squared test for homogeneity of variance, although it may be perfectly fine. I would suggest you use Levene's test (use leveneTest() in car). Heterogeneity is more damaging than non-normality, but the ANOVA is pretty robust if the heterogeneity is minor. A standard rule of thumb is that the largest group variance can be up to four times the smallest without posing strong problems. A good transformation should also address heterogeneity.
If these strategies are insufficient, I would probably explore robust regression before trying a non-parametric approach.
If you can edit your question and say more about your data, I may be able to update this to provide more specific information.
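For reference, Levene's test also exists outside R: scipy's stats.levene mirrors leveneTest() in car. A minimal sketch with simulated groups (all numbers made up):

```python
import numpy as np
from scipy import stats

# Simulated groups: the third has sd 4x the others, i.e. ~16x the variance,
# well past the four-fold rule of thumb above
rng = np.random.default_rng(0)
g1 = rng.normal(0, 1.0, size=40)
g2 = rng.normal(0, 1.1, size=40)
g3 = rng.normal(0, 4.0, size=40)

stat, p = stats.levene(g1, g2, g3)   # center='median' by default (Brown-Forsythe)
print(p < 0.05)                      # heterogeneity detected
```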
|
14,822
|
How to run two-way ANOVA on data with neither normality nor equality of variance in R?
|
(note: this answer was posted before the question was migrated and merged from SO, so details have been added to the question that are not addressed here. Many are addressed in the comments and the answer by @gung).
There are many different approaches, and this question has been covered elsewhere on this site. Here is a list of some approaches, with links to other questions on the site and some references:
The Box-Cox power transformation can normalize residuals that are on a non-linear scale
ANOVA on ranked data is very easy but has reduced power and is difficult to interpret. See Conover and Iman (1981)
Proportional Odds ordinal logistic model
Permutation tests (Anderson and ter Braak 2003), implemented as the adonis function in the R vegan package
Bootstrapping
Hierarchical Bayesian modeling (Gelman 2005)
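As an illustration of the permutation-test idea in its simplest (one-way, two-group) form — the references above extend it to multi-factor designs by permuting residuals — here is a sketch with simulated data:

```python
import numpy as np

# Simulated two-group data with a real mean difference
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=30)
b = rng.normal(1.5, 1.0, size=30)

obs = a.mean() - b.mean()
pooled = np.concatenate([a, b])

# Null distribution: shuffle group labels many times
diffs = np.empty(5000)
for i in range(5000):
    perm = rng.permutation(pooled)
    diffs[i] = perm[:30].mean() - perm[30:].mean()

p_value = np.mean(np.abs(diffs) >= np.abs(obs))
print(p_value < 0.05)   # the shift is detected without any normality assumption
```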
|
14,823
|
Why squaring $R$ gives explained variance?
|
Hand-wavingly, the correlation $R$ can be thought of as a measure of the angle between two vectors, the dependent vector $Y$ and the independent vector $X$.
If the angle between the vectors is $\theta$, the correlation $R$ is $\cos(\theta)$.
The part of $Y$ that is explained by $X$ is of length $\Vert Y\Vert\cos(\theta)$ and is parallel to $X$ (or the projection of $Y$ on $X$). The part that is not explained is of length $\Vert Y\Vert\sin(\theta)$ and is orthogonal to $X$. In terms of variances, we have
$$\sigma_Y^2 = \sigma_Y^2\cos^2(\theta) + \sigma_Y^2\sin^2(\theta)$$
where the first term on the right is the explained variance and the second the unexplained variance. The fraction that is explained is thus $R^2$, not $R$.
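A quick numerical check of this identity, with made-up data:

```python
import numpy as np

# In simple OLS regression, var(y_hat)/var(y) equals the squared correlation
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)

r = np.corrcoef(x, y)[0, 1]
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
explained = y_hat.var() / y.var()
print(bool(np.isclose(r**2, explained)))   # True
```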
|
14,824
|
Why squaring $R$ gives explained variance?
|
You can do this the long way and show that the total variance of the dependent variable is the sum of the variance of the predicted values and the error variance. The ratio of the variance of the predicted values to the variance of the dependent variable is called $R^2$, and it's between 0 and 1 in OLS. It so happens that when you have only one independent variable, $\sqrt{R^2}=R$, the Pearson correlation coefficient. That's why you can say that squaring the correlation coefficient gives the explained variance, i.e. the proportion of predicted variance to the total variance.
|
14,825
|
When to use bootstrap vs. bayesian technique?
|
To my thinking, your problem description points to two main issues. First:
I have a rather complicated decision analysis...
Assuming you've got a loss function in hand, you need to decide whether you care about frequentist risk or posterior expected loss. The bootstrap lets you approximate functionals of the data distribution, so it will help with the former; and posterior samples from MCMC will let you assess the latter. But...
I also have data at the subsystem and system levels
so these data have hierarchical structure. The Bayesian approach models such data very naturally, whereas the bootstrap was originally designed for data modelled as i.i.d. While it has been extended to hierarchical data (see references in the introduction of this paper), such approaches are relatively underdeveloped (according to the abstract of this article).
To summarize: if it really is frequentist risk that you care about, then some original research in the application of the bootstrap to decision theory may be necessary. However, if minimizing posterior expected loss is a more natural fit to your decision problem, Bayes is definitely the way to go.
|
14,826
|
When to use bootstrap vs. bayesian technique?
|
I've read that the non-parametric bootstrap can be seen as a special case of a Bayesian model with a discrete (very) non-informative prior, where the assumptions being made in the model are that the data are discrete and that the domain of your target distribution is completely observed in your sample.
Here are two references:
The bootstrap and Markov chain Monte Carlo, Bradley Efron
The Non-parametric Bootstrap as a Bayesian Model, Rasmus Bååth
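Bååth's post makes the link concrete via Rubin's (1981) "Bayesian bootstrap": instead of resampling with equal probabilities, draw observation weights from a flat Dirichlet — the posterior implied by the discrete non-informative prior described above. A sketch comparing the two for the sample mean, with simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100)

n_rep = 4000
# Classical bootstrap: resample with replacement, equal probabilities
classical = np.array([rng.choice(x, size=x.size, replace=True).mean()
                      for _ in range(n_rep)])
# Bayesian bootstrap: weight each observation by a Dirichlet(1,...,1) draw
weights = rng.dirichlet(np.ones(x.size), size=n_rep)
bayesian = weights @ x

# The two distributions of the mean are nearly identical
print(bool(np.isclose(classical.std(), bayesian.std(), rtol=0.1)))   # True
```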
|
14,827
|
Measuring individual player effectiveness in 2-player per team sports
|
Below are a couple of very simple models. They are both deficient in at least one way, but maybe they'll provide something to build on. The second model actually does not (quite) address the OP's scenario (see remarks below), but I am leaving it in case it helps in some way.
Model 1: A variant of the Bradley–Terry model
Suppose we are primarily interested in predicting whether one team will beat another based on the players on each team. We can simply record whether Team 1 with players $(i,j)$ beats Team 2 with players $(k,\ell)$ for each game, ignoring the final score. Certainly, this is throwing away some information, but in many cases this still provides lots of information.
The model is then
$$
\mathrm{logit}(\mathbb P(\text{Team 1 beats Team 2})) = \alpha_i + \alpha_j - \alpha_k - \alpha_\ell \> .
$$
That is, we have an "affinity" parameter for each player that affects how much that player improves the chance of his team winning. Define the player's "strength" by $s_i = e^{\alpha_i}$. Then, this model asserts that
$$
\mathbb P(\text{Team 1 beats Team 2}) = \frac{s_i s_j}{s_i s_j + s_k s_\ell} \>.
$$
There is a very nice symmetry here in that it doesn't matter how the response is coded as long as it is consistent with the predictors. That is, we also have
$$
\mathrm{logit}(\mathbb P(\text{Team 2 beats Team 1})) = \alpha_k + \alpha_\ell - \alpha_i - \alpha_j \> .
$$
This can be fit easily as a logistic regression with predictors that are indicators (one for each player) taking value $+1$ if player $i$ is on Team 1 for the game in question, $-1$ if she's on Team 2 and $0$ if she does not participate in that game.
From this we also have a natural ranking for the players. The larger the $\alpha$ (or $s$), the more the player improves her team's chance of winning. So, we can simply rank players according to their estimated coefficients. (Note that the affinity parameters are only identifiable up to a common offset. Therefore, it is typical to fix $\alpha_1 = 0$ to make the model identifiable.)
Model 2: Independent scoring
NB: Upon rereading the OP's question, it's apparent that the models below are inadequate for his setup. Specifically, the OP is interested in a game that ends after a fixed number of points are scored by one team or the other. The models below are more appropriate for games that have a fixed duration in time. Modifications can be made to fit better within the OP's framework, but it would require a separate answer to develop.
Now we want to keep track of scores. Suppose it's a reasonable approximation that each team scores points independently of each other with the number of points scored in any interval independent of any disjoint interval. Then the number of points each team scores can be modeled as a Poisson random variable.
Thus, we can setup a Poisson GLM such that the score of some team consisting of players $i$ and $j$ in a particular game is
$$
\log(\mu) = \gamma_i + \gamma_j
$$
Note that this model ignores the actual matchups between teams, focusing purely on scoring.
It does have an interesting connection to the modified Bradley–Terry model. Define $\sigma_i = e^{\gamma_i}$ and suppose that a "sudden-death" game is played in which the first team to score wins. If Team 1 has players $(i,j)$ and Team 2 has players $(k,\ell)$, then
$$
\mathbb P(\text{Team 1 beats Team 2 in sudden death}) = \frac{\sigma_i \sigma_j}{\sigma_i \sigma_j + \sigma_k \sigma_\ell} \>.
$$
Thus, the mean rate of scoring of the players is equivalent to the "strength" parameter formulation of Model 1.
We might consider making this model more complex by having an "offense" affinity $\rho_i$ and "defense" affinity $\delta_i$ for each player, such that if Team 1 with $(i,j)$ plays Team 2 with $(k,\ell)$, then
$$
\log(\mu_1) = \rho_i + \rho_j - \delta_k - \delta_{\ell}
$$
and
$$
\log(\mu_2) = \rho_k + \rho_{\ell} - \delta_i - \delta_j
$$
Scoring is still independent in this model, but now there is an interaction between the players on each team that affects the score. Players can also be ranked according to their affinity-coefficient estimates.
Model 2 (and its variants) allow for prediction of a final score as well.
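A sketch of fitting the scoring model by Fisher scoring (IRLS), again on invented data — each row of the design has a 1 for each player on the scoring team. In practice one would hand this to a Poisson GLM routine (e.g. statsmodels in Python or `glm` in R); the loop below just makes the mechanics explicit:

```python
import numpy as np

# Invented data: (team roster, points that team scored in one game).
obs = [((0, 1), 10), ((0, 2), 9), ((0, 3), 11),
       ((1, 2), 4), ((1, 3), 5), ((2, 3), 6)]
n_players = 4

X = np.zeros((len(obs), n_players))
y = np.zeros(len(obs))
for g, (team, pts) in enumerate(obs):
    X[g, list(team)] = 1.0
    y[g] = pts

# Fisher scoring / IRLS for the Poisson GLM  log(mu) = gamma_i + gamma_j.
gamma = np.zeros(n_players)
for _ in range(50):
    mu = np.exp(X @ gamma)
    z = X @ gamma + (y - mu) / mu              # working response
    W = mu                                     # Poisson IRLS weights
    gamma = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

rates = np.exp(gamma)  # sigma_i = e^{gamma_i}, per-player scoring rate
```

Teams containing player 0 score roughly twice as much in this toy data, so her estimated rate comes out on top, and the sudden-death win probability follows from the $\sigma$'s exactly as in the formula above.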
Extensions: One useful way to extend both models is to incorporate an ordering where the positive indicators correspond to the "home" team and the negative indicators to the "away" team. Adding in an intercept term to the models can then be interpreted as a "home-field advantage". Other extensions might include incorporating the chance of ties in Model 1 (it's actually already a possibility in Model 2).
Side note: At least one of the computerized polls (Peter Wolfe's) used for the Bowl Championship Series in American college football uses the (standard) Bradley–Terry model to produce its rankings.
Measuring individual player effectiveness in 2-player per team sports
Microsoft's TrueSkill algorithm, as used to rank players on XBox Live, can deal with team matches, but does not incorporate margin of victory. It may still be of some use to you.
Measuring individual player effectiveness in 2-player per team sports
Yes.
You could look at each player's win/loss record and point differential. I realize that's a simple answer, but those stats would still be meaningful.
Measuring individual player effectiveness in 2-player per team sports
(I'd like to add this as a comment for a previous answer, but my reputation was not enough, for the time being)
Martin O'Leary linked TrueSkill algorithm, and it's a good option.
If you're interested in use (more than in development), you could try rankade, our ranking system. Like TrueSkill, it can manage two factions with more than one player each (2-vs-2 foosball, 2-vs-2 table tennis, basketball 3-on-3 and 5-on-5, and more). Some remarkable differences, among others, are that rankade allows more varied faction structures (1-vs-1, faction vs faction, multiplayer, multifaction, cooperative games, asymmetrical factions, and more) and that it's free to use.
Here's a comparison between most known ranking systems.
Topic prediction using latent Dirichlet allocation
I'd try 'folding in'. This refers to taking one new document, adding it to the corpus, and then running Gibbs sampling just on the words in that new document, keeping the topic assignments of the old documents the same. This usually converges fast (maybe 5-10-20 iterations), and you don't need to sample your old corpus, so it also runs fast. At the end you will have the topic assignment for every word in the new document. This will give you the distribution of topics in that document.
In your Gibbs sampler, you probably have something similar to the following code:
// This will initialize the matrices of counts, N_tw (topic-word matrix) and N_dt (document-topic matrix)
for doc = 1 to N_Documents
for token = 1 to N_Tokens_In_Document
Assign current token to a random topic, updating the count matrices
end
end
// This will do the Gibbs sampling
for doc = 1 to N_Documents
for token = 1 to N_Tokens_In_Document
Compute probability of current token being assigned to each topic
Sample a topic from this distribution
Assign the token to the new topic, updating the count matrices
end
end
Folding-in is the same, except you start with the existing matrices, add the new document's tokens to them, and do the sampling for only the new tokens. I.e.:
Start with the N_tw and N_dt matrices from the previous step
// This will update the count matrices for folding-in
for token = 1 to N_Tokens_In_New_Document
Assign current token to a random topic, updating the count matrices
end
// This will do the folding-in by Gibbs sampling
for token = 1 to N_Tokens_In_New_Document
Compute probability of current token being assigned to each topic
Sample a topic from this distribution
Assign the token to the new topic, updating the count matrices
end
If you do standard LDA, it is unlikely that an entire document was generated by one topic. So I don't know how useful it is to compute the probability of the document under one topic. But if you still wanted to do it, it's easy. From the two matrices you get you can compute $p^i_w$, the probability of word $w$ in topic $i$. Take your new document; suppose the $j$'th word is $w_j$. The words are independent given the topic, so the probability is just $$\prod_j p^i_{w_j}$$ (note that you will probably need to compute it in log space).
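The per-token sampling step inside the folding-in loop can be written out explicitly. The snippet below is a sketch of the standard collapsed-Gibbs conditional; the symmetric Dirichlet hyperparameters $\alpha$ and $\beta$ and the toy counts are assumptions of mine, since the pseudocode above leaves them implicit:

```python
import numpy as np

# N_tw: topic-word counts (T x V) carried over from the trained corpus;
# n_dt_new: topic counts for the new document only (length T).
def topic_probs(w, N_tw, n_dt_new, alpha, beta):
    V = N_tw.shape[1]
    # p(z = t | ...) ∝ (N_tw[t, w] + beta) / (N_t[t] + V*beta) * (n_dt_new[t] + alpha)
    p = (N_tw[:, w] + beta) / (N_tw.sum(axis=1) + V * beta) * (n_dt_new + alpha)
    return p / p.sum()

# Toy counts: word 0 has almost always been generated by topic 0.
N_tw = np.array([[50.0, 1.0, 1.0],
                 [1.0, 50.0, 1.0]])
n_dt_new = np.zeros(2)
probs = topic_probs(0, N_tw, n_dt_new, alpha=0.1, beta=0.01)
rng = np.random.default_rng(0)
new_topic = rng.choice(len(probs), p=probs)  # then update the count matrices
```

After a handful of sweeps over the new document's tokens, `n_dt_new` (normalized) is the document's topic distribution.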
Robust t-test for mean
Why are you looking at non-parametric tests? Are the assumptions of the t-test violated? Namely, is your data ordinal or non-normal, or are the variances unequal? Of course, if your sample is large enough you can justify the parametric t-test, with its greater power, despite the lack of normality in the sample. Likewise, if your concern is unequal variances, there are corrections to the parametric test that yield accurate p-values (the Welch correction).
Otherwise, comparing your results to the t-test is not a good way to go about this, because the t-test results are biased when the assumptions are not met. The Mann-Whitney U is an appropriate non-parametric alternative, if that's what you really need. You only lose power if you are using the non-parametric test when you could justifiably use the t-test (because the assumptions are met).
And, just for some more background, go here: Student's t Test for Independent Samples.
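The Welch correction mentioned above amounts to an unpooled standard error plus adjusted degrees of freedom; a from-scratch sketch on made-up data (the same statistic that e.g. scipy's `ttest_ind(..., equal_var=False)` computes):

```python
import numpy as np

# Welch's t statistic and the Welch–Satterthwaite degrees of freedom.
def welch_t(x, y):
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    se2 = vx / nx + vy / ny                       # unpooled squared SE
    t = (np.mean(x) - np.mean(y)) / np.sqrt(se2)
    df = se2**2 / ((vx / nx)**2 / (nx - 1) + (vy / ny)**2 / (ny - 1))
    return t, df

# Illustrative data only.
x = np.array([4.1, 5.0, 6.2, 5.5, 4.8])
y = np.array([3.0, 2.5, 3.8, 3.1, 2.9, 3.4])
t, df = welch_t(x, y)
```

The resulting `df` always falls between $\min(n_1, n_2) - 1$ and $n_1 + n_2 - 2$, which is why the correction can only make the test more conservative than the pooled version.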
Robust t-test for mean
I agree that if you want to actually test whether the group means are different (as opposed to testing differences between group medians or trimmed means, etc.), then you don't want to use a nonparametric test that tests a different hypothesis.
In general p-values from a t-test tend to be fairly accurate given moderate departures of the assumption of normality of residuals.
Check out this applet to get an intuition on this robustness: http://onlinestatbook.com/stat_sim/robustness/index.html
If you're still concerned about the violation of the normality assumption,
you might want to bootstrap.
e.g., http://biostat.mc.vanderbilt.edu/wiki/pub/Main/JenniferThompson/ms_mtg_18oct07.pdf
You could also transform the skewed dependent variable to resolve issues with departures from normality.
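A minimal percentile-bootstrap sketch of the idea in the linked notes; the skewed data, shift, and number of resamples are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(2.0, size=40)        # skewed group 1
y = rng.exponential(2.0, size=40) + 3.0  # skewed group 2, shifted mean

# Bootstrap distribution of the difference in sample means:
# resample each group with replacement many times.
boot = np.empty(5000)
for b in range(5000):
    boot[b] = (rng.choice(x, size=x.size).mean()
               - rng.choice(y, size=y.size).mean())

lo, hi = np.percentile(boot, [2.5, 97.5])
# If the 95% interval (lo, hi) excludes 0, that is evidence the means differ.
```

This avoids the normality assumption on the sampling distribution of the mean difference at the cost of a little computation.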
Robust t-test for mean
Johnson (1978) gives a modification for the $t$-statistic and confidence intervals which is a good starting point for my problem. The correction is based on a Cornish-Fisher expansion, and uses sample skew.
The 'latest and greatest' is due to Ogasawara, with references therein to Hall and others.
Robust t-test for mean
Yes, there is: the Yuen test for paired and unpaired data, which is essentially a t-test based on trimmed means. When the two samples have unequal variances, the Yuen–Welch test is the replacement for the classic Welch t-test. It is implemented in various statistical software.
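A from-scratch sketch of Yuen's statistic with 20% trimming, following Yuen (1974): trimmed means in the numerator, winsorized variances in the standard error. In practice one would reach for an existing implementation (e.g. the WRS2 package in R); the toy data below are invented:

```python
import numpy as np

def yuen(x, y, trim=0.2):
    # Per-group trimmed mean, variance term d_j, and effective size h_j.
    def parts(a):
        a = np.sort(np.asarray(a, dtype=float))
        n = len(a)
        g = int(np.floor(trim * n))               # points trimmed per tail
        tmean = a[g:n - g].mean()                 # trimmed mean
        w = np.clip(a, a[g], a[n - g - 1])        # winsorized sample
        swv = np.var(w, ddof=1)                   # winsorized variance
        h = n - 2 * g                             # effective sample size
        d = (n - 1) * swv / (h * (h - 1))
        return tmean, d, h
    m1, d1, h1 = parts(x)
    m2, d2, h2 = parts(y)
    t = (m1 - m2) / np.sqrt(d1 + d2)
    df = (d1 + d2)**2 / (d1**2 / (h1 - 1) + d2**2 / (h2 - 1))
    return t, df

# The outlier 30.0 is trimmed away rather than dominating the test.
t_stat, dof = yuen([4.1, 5.0, 6.2, 5.5, 4.8, 30.0],
                   [3.0, 2.5, 3.8, 3.1, 2.9, 3.4])
```

The statistic is referred to a t distribution with the (non-integer) `dof` above, exactly as in the Welch test.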
Robust t-test for mean
I don't have enough reputation for a comment, thus as an answer: Have a look at this calculation. I think this provides an excellent answer. In brief:
The asymptotic performance is much more sensitive to deviations from
normality in the form of skewness than in the form of kurtosis ...
Thus Student's t-test is sensitive to skewness but relatively robust
against heavy tails, and it is reasonable to use a test for normality
that is directed towards skew alternatives before applying the t-test.
Why are mean 0 and standard deviation 1 distributions always used?
At the beginning the most useful answer is probably that mean of 0 and sd of 1 are mathematically convenient. If you can work out the probabilities for a distribution with a mean of 0 and standard deviation of 1 you can work them out for any similar distribution of scores with a very simple equation.
I'm not following this question. The mean of 0 and standard deviation of 1 usually applies to the standard normal distribution, often called the bell curve. The most likely value is the mean and it falls off as you get farther away. If you have a truly flat distribution then there is no value more likely than another. Your question here is poorly formed. Were you looking at questions about coin flips perhaps? Look up binomial distribution and central limit theorem.
"mean here"? Where? The simple answer for z-scores is that they are your scores scaled as if your mean were 0 and standard deviation were 1. Another way of thinking about it is that it takes an individual score as the number of standard deviations that score is from the mean. The equation is calculating the (score - mean) / standard deviation. The reasons you'd do that are quite varied but one is that in intro statistics courses you have tables of probabilities for different z-scores (see answer 1).
If you looked up z-score first, even on Wikipedia, you would have gotten pretty good answers.
Why are mean 0 and standard deviation 1 distributions always used?
To start with what we're talking about here is the standard normal distribution, a normal distribution with a mean of 0 and a standard deviation of 1. The short-hand for a variable which is distributed as a standard normal distribution is Z.
Here are my answers to your questions.
(1) I think there are two key reasons why standard normal distributions are attractive. Firstly, any normally distributed variable can be converted or transformed to a standard normal by subtracting its mean from each observation before dividing each observation by the standard deviation. This is called the Z-transformation or the creation of Z-scores. This is very handy especially in the days before computers.
If you wanted to find out the probability of some event from your variable which is normally distributed with mean 65.6 with a standard deviation of 10.2 wouldn't that be a right pain in the backside without a computer? Let's say that this variable is the heights in inches of American women. And let's say that we're interested in finding out the probability that a woman randomly drawn from the population will be very tall - say over 75 inches tall. Well this is a bit of a pain to find out without a computer, as I would have to carry around a table for every possible normal distribution with me. However, if I transform this to a Z-score I can use the one table to find out the probability, thus:
$$
\begin{aligned}
\frac{(x_i - \bar x)}{\sigma_x} &= Z \\
\frac{(75 - 65.6)}{10.2} &= 0.9215
\end{aligned}
$$
Using the Z table I find that the cumulative probability P(z < Z) = 0.8212 and therefore the probability of finding a woman as tall or taller than 75 inches is 17.88%. We can do this with any normally distributed variable and so this standard normal distribution is very handy.
The second reason why the standard normal distribution is used frequently is due to the interpretation it provides in terms of Z-scores. Each "observation" in a Z-transformed variable is how many standard deviations the original untransformed observation was from the mean. This is particularly handy for standardized tests where the raw or absolute performance is less important than the relative performance.
(2) I don't follow you here. I think you may be confused as to what we mean by a cumulative distribution function. Note that the expected value of a standard normal distribution is 0, and this value corresponds to the value of .5 on the associated cumulative distribution function.
(3) Z-scores are the individual observations, or data points, in a variable which has been Z-transformed. Return to my example of the variable - height of American women in inches. One particular observation of which may be a tall woman of height 75 inches. The Z-score for this is the result of Z-transforming the variable as we did earlier:
$$
\begin{aligned}
\frac{(x_i - \bar x)}{\sigma_x} &= Z \\
\frac{(75 - 65.6)}{10.2} &= 0.9215
\end{aligned}
$$
The Z-score in this case is 0.9215. The interpretation of the Z-score is that this particular woman is 0.9215 standard deviations taller than the mean height. A person who was 55.4 inches tall would have a Z-score of $-1$ and would be 1 standard deviation below the mean height.
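The whole height example can be checked with the standard library alone, getting the normal CDF from the error function rather than a printed Z table:

```python
import math

# Height example: mean 65.6 inches, standard deviation 10.2 inches.
def z_score(x, mean, sd):
    return (x - mean) / sd

def normal_cdf(z):
    # Phi(z) for the standard normal, via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z = z_score(75, 65.6, 10.2)      # about 0.92 standard deviations above the mean
p_taller = 1.0 - normal_cdf(z)   # about 0.18, matching the table lookup
```

The same two functions reproduce any entry of the Z table, which is precisely why one standard normal table used to suffice for every normal distribution.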
|
14,839
|
Why are mean 0 and standard deviation 1 distributions always used?
|
Since you received excellent explanations from Graham and John, I'm just going to answer your last question:
When people talk about Z Scores what do they actually mean here?
Best way to answer this is to think about this question: The grades in class CS 101 are normally distributed with $\mu$ = 80 and $\sigma$ = 5. What is the z-score for the grade 65?
So: (65-80)/5=-3
You can say the z-score for the grade 65 is -3; or in other words, 3 standard deviations to the left of the mean.
|
14,840
|
Different ways of modelling interactions between continuous and categorical predictors in GAM
|
gam1 and gam2 are fine; they are different models, although they are trying to do the same thing, which is to model group-specific smooths.
The gam1 form
y ~ f + s(x, by = f)
does this by estimating a separate smoother for each level of f (assuming that f is a standard factor), and indeed, a separate smoothness parameter is estimated for each smooth also.
The gam2 form
y ~ f + s(x) + s(x, by = f, m = 1)
achieves the same aim as gam1 (of modelling the smooth relationship between x and y for each level of f) but it does so by estimating a global or average smooth effect of x on y (the s(x) term) plus a smooth difference term (the second s(x, by = f, m = 1) term). As the penalty here is on the first derivative (m = 1) for this difference smoother, it is penalising departure from a flat line, which when added to the global or average smooth term (s(x)) reflects a deviation from the global or average effect.
gam3 form
y ~ s(x, by = f)
is wrong regardless of how well it may fit in a particular situation. The reason I say it is wrong is that each smooth specified by the s(x, by = f) part is centred about zero because of the sum-to-zero constraint imposed for model identifiability. As such, there is nothing in the model that accounts for the mean of $Y$ in each of the groups defined by f. There is only the overall mean given by the model intercept. This means that the smoother, which is centred about zero and which has had the flat basis function removed from the basis expansion of x (as it is confounded with the model intercept), is now responsible for modelling both the difference in the mean of $Y$ for the current group and the overall mean (model intercept), plus the smooth effect of x on $Y$.
None of these models is appropriate for your data however; ignoring, for now, the wrong distribution for the response (density can't be negative and there is a heterogeneity issue which a non-Gaussian family would fix or address), you haven't taken into account the grouping by flower (SampleID in your dataset).
If your aim is to model Taxon-specific curves then a model of the following form would be a starting point:
m1 <- gam(density ~ Taxon + s(wl, by = Taxon, k = 20) + s(SampleID, bs = 're'),
data = df, method = 'REML')
where I have added a random effect for SampleID and boosted the size of the basis expansion for the Taxon specific smooths.
This model, m1, models the observations as coming from a smooth wl effect that depends on which species (Taxon) the observation comes from (the Taxon parametric term just sets the mean density for each species and is needed as discussed above), plus a random intercept. Taken together, the curves for individual flowers arise from shifted versions of the Taxon-specific curves, with the amount of shift given by the random intercept. This model assumes that all individuals have the same shape of smooth as given by the smooth for the particular Taxon that individual flower comes from.
Another version of this model is the gam2 form from above but with an added random effect
m2 <- gam(density ~ Taxon + s(wl) + s(wl, by = Taxon, m = 1) + s(SampleID, bs = 're'),
data = df, method = 'REML')
This model fits better but I don't think it is solving the problem at all, see below. One thing I think it does suggest is that the default k is potentially too low for the Taxon specific curves in these models. There is still a lot of residual smooth variation that we're not modelling if you look at the diagnostic plots.
This model is more than likely too restrictive for your data; some of the curves in your plot of the individual smooths do not appear to be simply shifted versions of the Taxon average curves. A more complex model would allow for individual-specific smooths too. Such a model can be estimated using the fs or factor-smooth interaction basis. We still want Taxon specific curves but we also want to have a separate smooth for each SampleID, but unlike the by smooths, I would suggest that initially, you want all of those SampleID-specific curves to have the same wiggliness. In the same sense as the random intercept that we included earlier, the fs basis adds a random intercept, but also includes a "random" spline (I use the scare quotes as in a Bayesian interpretation of the GAM, all these models are just variations on random effects).
This model is fitted for your data as
m3 <- gam(density ~ Taxon + s(wl, by = Taxon, k = 20) + s(wl, SampleID, bs = 'fs'),
data = df, method = 'REML')
Note that I have increased k here, in case we need more wiggliness in the Taxon-specific smooths. We still need the Taxon parametric effect for the reasons explained above.
That model takes a long time to fit on a single core with gam() — bam() will most likely be better at fitting this model as there are a relatively large number of random effects here.
If we compare these models with a smoothness parameter selection-corrected version of AIC we see just how dramatically better this latter model, m3, is compared to the other two even though it uses an order of magnitude more degrees of freedom
> AIC(m1, m2, m3)
df AIC
m1 190.7045 67264.24
m2 192.2335 67099.28
m3 1672.7410 31474.80
If we look at this model's smooths we get a better idea of how it is fitting the data:
(Note this plot was produced with draw(m3), using the draw() function from my gratia package. The colours in the lower-left plot are irrelevant and don't help here.)
Each SampleID's fitted curve is built up from either the intercept or the parametric term TaxonSpeciesB, plus one of the two Taxon-specific smooths, depending on which Taxon the SampleID belongs to, plus its own SampleID-specific smooth.
Note that all these models are still wrong as they don't account for the heterogeneity; gamma or Tweedie models with a log link would be my choices to take this further. Something like:
m4 <- gam(density ~ Taxon + s(wl, by = Taxon) + s(wl, SampleID, bs = 'fs'),
data = df, method = 'REML', family = tw())
But I'm having trouble with this model fitting at the moment, which might indicate it is too complex with multiple smooths of wl included.
An alternative form is to use the ordered factor approach, which does an ANOVA-like decomposition on the smooths:
Taxon parametric term is retained
s(wl) is a smooth that will represent the reference level
s(wl, by = Taxon) will have a separate difference smooth for each other level. In your case, you'll have only one of these.
This model is fitted like m3,
df <- transform(df, fTaxon = ordered(Taxon))
m3 <- gam(density ~ fTaxon + s(wl) + s(wl, by = fTaxon) +
s(wl, SampleID, bs = 'fs'),
data = df, method = 'REML')
but the interpretation is different; the first s(wl) will refer to TaxonA and the smooth implied by s(wl, by = fTaxon) will be a smooth difference between the smooth for TaxonA and that of TaxonB.
|
14,841
|
Different ways of modelling interactions between continuous and categorical predictors in GAM
|
This is what Jacolien van Rij writes in her tutorial page:
How to set up the interaction depends on the type of grouping
predictor:
with factor include intercept difference: Group + s(Time, by=Group)
with ordered factor include intercept difference and
reference smooth: Group + s(Time) + s(Time, by=Group)
with binary predictor include reference smooth: s(Time) + s(Time, by=IsGroupChildren)
Categorical variables must be specified as factors, ordered factors, or binary (0/1) predictors using the appropriate R functions.
To understand how to interpret the outputs and what each model can and cannot tell us, see Jacolien van Rij's tutorial page directly. Her tutorial also explains how to fit mixed-effect GAMs.
To understand the concept of interactions in the context of GAMs, this tutorial page by Peter Laurinec is also useful. Both pages provide plenty of further information on running GAMs correctly in different scenarios.
|
14,842
|
Why in Variational Auto Encoder (Gaussian variational family) we model $\log\sigma^2$ and not $\sigma^2$ (or $\sigma$) itself?
|
It brings stability and ease of training.
By definition sigma has to be a positive real number. One way to enforce this would be to use a ReLU function to obtain its value, but the gradient is not well defined around zero. In addition, the standard deviation values are usually very small, 1 >> sigma > 0. The optimization has to work with very small numbers, where the floating point arithmetic and the poorly defined gradient bring numerical instabilities.
If you use the log transform, you map the numerically unstable very small numbers in the (0, 1] interval to (-inf, 0], where you have a lot more space to work with. Calculating log and exp is numerically stable and easy, so you basically gain space in which your optimization variable can move.
Please do not be confused: people do not use the log(sigma) value as the sigma value, but always transform it back to the original space. Also, in VAEs you need the log(sigma) value in the Kullback-Leibler divergence term, so you need to calculate it anyway...
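A minimal sketch of this parametrization (not tied to any particular VAE codebase): the encoder is free to output any real log_var, the sigma recovered from it is positive by construction, and the Gaussian KL term against N(0, 1) consumes log_var directly:

```python
import math
import random

def sample_latent(mu, log_var):
    """Reparameterization trick: sigma = exp(log_var / 2) is positive for any real log_var."""
    sigma = math.exp(0.5 * log_var)
    return mu + sigma * random.gauss(0.0, 1.0)

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ) = 0.5 * (sigma^2 + mu^2 - 1 - log(sigma^2))."""
    return 0.5 * (math.exp(log_var) + mu ** 2 - 1.0 - log_var)

# Even an extreme log_var = -20 (sigma of roughly 4.5e-5) stays representable,
# with no clipping at zero needed.
tiny_sigma = math.exp(0.5 * -20.0)
```

Note how the KL term uses log_var as-is: no extra log() of a possibly tiny sigma is ever computed.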
|
14,843
|
Interpreting ROUGE scores
|
As a user of these methods I need to gauge how far I can rely on the algorithms and how far I need to use humans to do some post-processing on the summarisations.
How "good" is a particular absolute ROUGE score? I'm defining "good" as "minimises the need for human post-processing".
There are two aspects that may impact the need for human post-processing:
Does the summary sound fluent?
Is the summary adequate? I.e., is the length appropriate and does it cover the most important information of the text it summarizes?
ROUGE doesn't try to assess how fluent the summary is: ROUGE only tries to assess the adequacy, by simply counting how many n-grams in your generated summary match the n-grams in your reference summary (or summaries, as ROUGE supports multi-reference corpora).
From https://en.wikipedia.org/w/index.php?title=Automatic_summarization&oldid=808057887#Document_summarization:
If there are multiple references, the ROUGE-1 scores are averaged. Because ROUGE is based only on content overlap, it can determine if the same general concepts are discussed between an automatic summary and a reference summary, but it cannot determine if the result is coherent or the sentences flow together in a sensible manner. High-order n-gram ROUGE measures try to judge fluency to some degree. Note that ROUGE is similar to the BLEU measure for machine translation, but BLEU is precision- based, because translation systems favor accuracy.
Note that BLEU has the same issue, as you can see on these correlation plots, taken from {1}:
What is the best way to really understand what a ROUGE score actually measures?
In short and approximately:
ROUGE-n recall=40% means that 40% of the n-grams in the reference summary are also present in the generated summary.
ROUGE-n precision=40% means that 40% of the n-grams in the generated summary are also present in the reference summary.
ROUGE-n F1-score=40% is more difficult to interpret, like any F1-score.
ROUGE is more interpretable than BLEU (from {2}: "Other Known Deficiencies of Bleu: Scores hard to interpret"). I said approximately because the original ROUGE implementation from the paper that introduced ROUGE {3} may perform a few more things such as stemming.
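These recall/precision definitions can be illustrated with a minimal n-gram overlap sketch (a simplification: the official ROUGE implementation also handles stemming, stopword removal, and multiple references):

```python
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n):
    """Clipped n-gram overlap between one candidate and one reference summary."""
    cand = ngram_counts(candidate.split(), n)
    ref = ngram_counts(reference.split(), n)
    overlap = sum((cand & ref).values())      # clipped match counts
    recall = overlap / sum(ref.values())      # share of reference n-grams found
    precision = overlap / sum(cand.values())  # share of candidate n-grams matched
    return recall, precision

recall, precision = rouge_n("the cat sat on the mat",
                            "the cat lay on the mat", 1)
```

Here 5 of the 6 reference unigrams appear in the candidate (and vice versa), so ROUGE-1 recall and precision are both about 83%.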
References:
{1} Callison-Burch, Chris, Miles Osborne, and Philipp Koehn. "Re-evaluating the Role of Bleu in Machine Translation Research." In EACL, vol. 6, pp. 249-256. 2006. https://scholar.google.com/scholar?cluster=8900239586727494087&hl=en&as_sdt=0,5 ;
{2} Slides of 1:
https://pdfs.semanticscholar.org/60f4/f98ff57be60a786803a88f5e7e970b35c79e.pdf (mirror)
{3} Lin, Chin-Yew. "Rouge: A package for automatic evaluation of summaries." In Text summarization branches out: Proceedings of the ACL-04 workshop, vol. 8. 2004. https://scholar.google.com/scholar?cluster=2397172516759442154&hl=en&as_sdt=0,5 ; http://anthology.aclweb.org/W/W04/W04-1013.pdf
|
14,844
|
Interpreting ROUGE scores
|
You should read the original ROUGE paper by Chin-Yew Lin which goes in depth about the various definitions.
ROUGE is a score of overlapping words. ROUGE-N refers to overlapping n-grams. Specifically:
$$
\frac{\sum_{r}\sum_s\text{match}(\text{gram}_{s,c})}{\sum_{r}\sum_s\text{count}(\text{gram}_s)}
$$
I tried to simplify the notation compared with the original paper. Let's assume we are calculating ROUGE-2, aka bigram matches. The numerator $\sum_s$ loops through all bigrams in a single reference summary and calculates the number of times a matching bigram is found in the candidate summary (proposed by the summarization algorithm). If there is more than one reference summary, $\sum_r$ ensures we repeat the process over all reference summaries.
The denominator simply counts the total number of bigrams in all reference summaries. This is the process for one document-summary pair. You repeat the process for all documents, and average all the scores and that gives you a ROUGE-N score. So a higher score would mean that on average there is a high overlap of n-grams between your summaries and the references.
Example:
S1. police killed the gunman
S2. police kill the gunman
S3. the gunman kill police
S1 is the reference and S2 and S3 are candidates. Note S2 and S3 both have one overlapping bigram with the reference, so they have the same ROUGE-2 score, although S2 should be better. An additional ROUGE-L score deals with this, where L stands for Longest Common Subsequence. In S2, the first word and last two words match the reference, so it scores 3/4, whereas for S3 the longest common subsequence is only "the gunman", so it scores 2/4. See the paper for more details
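To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not the official ROUGE package, and without the count clipping used in the full metric, which does not matter for this example) that reproduces the scores above:

```python
def ngrams(tokens, n):
    # all length-n word windows of a token list
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n(reference, candidate, n=2):
    # fraction of reference n-grams found in the candidate
    ref = ngrams(reference.split(), n)
    cand = ngrams(candidate.split(), n)
    return sum(1 for g in ref if g in cand) / len(ref)

def lcs_len(a, b):
    # classic dynamic-programming longest common subsequence
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

s1 = "police killed the gunman"  # reference
s2 = "police kill the gunman"   # candidate
s3 = "the gunman kill police"   # candidate

print(rouge_n(s1, s2), rouge_n(s1, s3))     # both 1/3: identical ROUGE-2
print(lcs_len(s1.split(), s2.split()) / 4,
      lcs_len(s1.split(), s3.split()) / 4)  # ROUGE-L recall: 0.75 vs 0.5
```

As claimed, ROUGE-2 cannot separate S2 from S3, while ROUGE-L prefers S2.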
|
Interpreting ROUGE scores
|
You should read the original ROUGE paper by Chin-Yew Lin which goes in depth about the various definitions.
ROUGE is a score of overlapping words. ROUGE-N refers to overlapping n-grams. Specifically:
|
Interpreting ROUGE scores
You should read the original ROUGE paper by Chin-Yew Lin which goes in depth about the various definitions.
ROUGE is a score of overlapping words. ROUGE-N refers to overlapping n-grams. Specifically:
$$
\frac{\sum_{r}\sum_s\text{match}(\text{gram}_{s,c})}{\sum_{r}\sum_s\text{count}(\text{gram}_s)}
$$
I tried to simplify the notation when compared with the original paper. Let's assume we are calculating ROUGE-2, aka bigram matches. The numerator $\sum_s$ loops through all bigrams in a single reference summary and calculates the number of times a matching bigram is found in the candidate summary (proposed by the summarization algorithm). If there is more than one reference summary, $\sum_r$ ensures we repeat the process over all reference summaries.
The denominator simply counts the total number of bigrams in all reference summaries. This is the process for one document-summary pair. You repeat the process for all documents, and average all the scores and that gives you a ROUGE-N score. So a higher score would mean that on average there is a high overlap of n-grams between your summaries and the references.
Example:
S1. police killed the gunman
S2. police kill the gunman
S3. the gunman kill police
S1 is the reference and S2 and S3 are candidates. Note S2 and S3 both have one overlapping bigram with the reference, so they have the same ROUGE-2 score, although S2 should be better. An additional ROUGE-L score deals with this, where L stands for Longest Common Subsequence. In S2, the first word and last two words match the reference, so it scores 3/4, whereas for S3 the longest common subsequence is only "the gunman", so it scores 2/4. See the paper for more details
|
Interpreting ROUGE scores
You should read the original ROUGE paper by Chin-Yew Lin which goes in depth about the various definitions.
ROUGE is a score of overlapping words. ROUGE-N refers to overlapping n-grams. Specifically:
|
14,845
|
How to interpret differential entropy?
|
There is no interpretation of differential entropy which would be as meaningful or useful as that of entropy. The problem with continuous random variables is that their values typically have 0 probability, and therefore would require an infinite number of bits to encode.
If you look at the limit of discrete entropy by measuring the probability of intervals $[n\varepsilon, (n + 1)\varepsilon[$, you end up with
$$-\int p(x) \log_2 p(x) \, dx - \log_2 \varepsilon$$
and not the differential entropy. This quantity is in a sense more meaningful, but will diverge to infinity as we take smaller and smaller intervals. It makes sense, since we'll need more and more bits to encode in which of the many intervals the value of our random value falls.
A more useful quantity to look at for continuous distributions is the relative entropy (also Kullback-Leibler divergence). For discrete distributions:
$$D_\text{KL}[P || Q] = \sum_x P(x) \log_2 \frac{P(x)}{Q(x)}.$$
It measures the number of extra bits used when the true distribution is $P$, but we use $-\log_2 Q(x)$ bits to encode $x$. We can take the limit of relative entropy and arrive at
$$D_\text{KL}[p \mid\mid q] = \int p(x) \log_2 \frac{p(x)}{q(x)} \, dx,$$
because $\log_2 \varepsilon$ will cancel. For continuous distributions this corresponds to the number of extra bits used in the limit of infinitesimally small bins. For both continuous and discrete distributions, this is always non-negative.
Now, we could think of differential entropy as the negative relative entropy between $p(x)$ and an unnormalized density $\lambda(x) = 1$,
$$-\int p(x) \log_2 p(x) \, dx = -D_\text{KL}[p \mid\mid \lambda].$$
Its interpretation would be the difference in the number of bits required by using $-\log_2 \int_{n\varepsilon}^{(n + 1)\varepsilon} p(x) \, dx$ bits to encode the $n$-th interval instead of $-\log_2 \varepsilon$ bits. Even though the former would be optimal, this difference can now be negative, because $\lambda$ is cheating (by not integrating to 1) and therefore might assign fewer bits on average than theoretically possible.
See Sergio Verdu's talk for a great introduction to relative entropy.
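The limiting behaviour described above is easy to verify numerically. A minimal standard-library Python sketch (my own illustration, using a standard normal as the example distribution): discretise the density into bins of width $\varepsilon$ and compute the ordinary discrete entropy; it tracks $-\int p \log_2 p \, dx - \log_2 \varepsilon$ and diverges as $\varepsilon \to 0$.

```python
import math

def discrete_entropy(eps):
    # entropy (in bits) of a standard normal discretised into bins of width eps,
    # with each bin's probability approximated by density * width over [-10, 10)
    H, n = 0.0, int(10 / eps)
    for i in range(-n, n):
        p = math.exp(-(i * eps) ** 2 / 2) / math.sqrt(2 * math.pi) * eps
        H -= p * math.log2(p)
    return H

h = 0.5 * math.log2(2 * math.pi * math.e)  # differential entropy in bits, ~2.05
for eps in (0.1, 0.01, 0.001):
    # the discrete entropy tracks h - log2(eps) and diverges as eps -> 0
    print(eps, round(discrete_entropy(eps), 4), round(h - math.log2(eps), 4))
```

Each halving of the bin width adds roughly one bit, exactly as the $-\log_2 \varepsilon$ term predicts.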
|
How to interpret differential entropy?
|
There is no interpretation of differential entropy which would be as meaningful or useful as that of entropy. The problem with continuous random variables is that their values typically have 0 probabi
|
How to interpret differential entropy?
There is no interpretation of differential entropy which would be as meaningful or useful as that of entropy. The problem with continuous random variables is that their values typically have 0 probability, and therefore would require an infinite number of bits to encode.
If you look at the limit of discrete entropy by measuring the probability of intervals $[n\varepsilon, (n + 1)\varepsilon[$, you end up with
$$-\int p(x) \log_2 p(x) \, dx - \log_2 \varepsilon$$
and not the differential entropy. This quantity is in a sense more meaningful, but will diverge to infinity as we take smaller and smaller intervals. It makes sense, since we'll need more and more bits to encode in which of the many intervals the value of our random value falls.
A more useful quantity to look at for continuous distributions is the relative entropy (also Kullback-Leibler divergence). For discrete distributions:
$$D_\text{KL}[P || Q] = \sum_x P(x) \log_2 \frac{P(x)}{Q(x)}.$$
It measures the number of extra bits used when the true distribution is $P$, but we use $-\log_2 Q(x)$ bits to encode $x$. We can take the limit of relative entropy and arrive at
$$D_\text{KL}[p \mid\mid q] = \int p(x) \log_2 \frac{p(x)}{q(x)} \, dx,$$
because $\log_2 \varepsilon$ will cancel. For continuous distributions this corresponds to the number of extra bits used in the limit of infinitesimally small bins. For both continuous and discrete distributions, this is always non-negative.
Now, we could think of differential entropy as the negative relative entropy between $p(x)$ and an unnormalized density $\lambda(x) = 1$,
$$-\int p(x) \log_2 p(x) \, dx = -D_\text{KL}[p \mid\mid \lambda].$$
Its interpretation would be the difference in the number of bits required by using $-\log_2 \int_{n\varepsilon}^{(n + 1)\varepsilon} p(x) \, dx$ bits to encode the $n$-th interval instead of $-\log_2 \varepsilon$ bits. Even though the former would be optimal, this difference can now be negative, because $\lambda$ is cheating (by not integrating to 1) and therefore might assign fewer bits on average than theoretically possible.
See Sergio Verdu's talk for a great introduction to relative entropy.
|
How to interpret differential entropy?
There is no interpretation of differential entropy which would be as meaningful or useful as that of entropy. The problem with continuous random variables is that their values typically have 0 probabi
|
14,846
|
How to interpret differential entropy?
|
For the differential entropy there also exists another, more mathematical interpretation, which is closely related to the bit-interpretation for the entropy.
The differential entropy describes the equivalent side length (in logs) of the set that contains most of the probability of the distribution.
This is nicely illustrated and explained in Theorem 8.2.3 in Elements of Information Theory by Thomas M. Cover, Joy A. Thomas
Intuitive Explanation
In non-rigorous terms, this statement means the following:
Let's assume we have a multivariate probability distribution with entropy $h$.
Most of the probability mass of this distribution (apart from a negligible amount) is contained in some volume.
If we describe this volume by a hypercube with sides of equal length (= equivalent side lengths), then this side length is equal to $2^h$.
Intuitively this means, that if we have a low entropy, the probability mass of the distribution is confined to a small area.
Vice versa, high entropy tells us that the probability mass is spread widely across a large area.
Mathematical View
In actual notation, the theorem states the following
$(1 - \epsilon)\, 2^{n(h(X) - \epsilon)} \leq \text{Vol}(A_{\epsilon}^{(n)}) \leq 2^{n(h(X) + \epsilon)}$,
where $X$ is a random variable with the distribution of interest, $\epsilon$ is a real number, $A_{\epsilon}^{(n)}$ is the typical set, $h(X)$ is the differential entropy of $X$ and $n$ (required to be large) is the dimension of $X$.
This implies that "$A_{\epsilon}^{(n)}$ is the smallest volume set with probability $1-\epsilon$, to first order in the exponent." (Elements of Information Theory by Thomas M. Cover, Joy A. Thomas, Wiley, Second Edition, 2006)
Relation to entropy of discrete probability distributions
This interpretation of differential entropy is closely related to the entropy for discrete distributions.
Discrete Case: As OP stated, the entropy tells us how many bits are needed to encode a message given a probability distribution over words.
Continuous case: Here we are dealing with continuous support. For example, let's assume the support is on the real line $\mathbb{R}$. The differential entropy tells us, how long the interval on the real line has to be to capture almost all information contained in the probability distribution.
If we have a widely spread distribution -> the entropy will be high
If we have a sharp distribution, most probability mass will be in a small interval -> the entropy will be low.
Example with $N(0,1)$
The entropy of a standard normal distribution with $\sigma^2 = 1$ is
$\frac{1}{2}\text{ln}(2\pi \sigma^2) + \frac{1}{2} = \frac{1}{2}\text{ln}(2 \pi e)$ nats, which is $\frac{1}{2}\log_2(2 \pi e) \approx 2.05$ bits
We can visualize this with a small code example in Python:
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
ys = np.random.normal(size = 10000)
h = 0.5*np.log2(2*np.pi*np.exp(1))  # differential entropy of N(0,1) in bits
side_length = 2**h                  # equivalent side length, about 4.13
sns.kdeplot(ys, fill = True)
plt.vlines(x = side_length/2, ymin = 0, ymax = 0.4, color = 'red', linestyles = 'dashed')
plt.vlines(x = -side_length/2, ymin = 0, ymax = 0.4, color = 'red', linestyles = 'dashed')
This side length captures a large portion of the probability mass in this distribution:
The interval between the red lines is $2^h$. As in this case $n$ is only 1 (and the Theorem above requires $n$ to be large), we can clearly see that the entropy is not exactly the equivalent side length of the volume that captures almost all probability mass.
This graph also explains why for the Gaussian, the mean does not affect the differential entropy: No matter where I shift the distribution to - the equivalent side length will stay the same and is only influenced by the variance.
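To quantify "a large portion", one can integrate the standard normal density over an interval of the equivalent side length $2^h$ (with $h$ in bits). A standard-library Python sketch of my own, using math.erf for the normal CDF:

```python
import math

h = 0.5 * math.log2(2 * math.pi * math.e)  # differential entropy of N(0,1) in bits
L = 2 ** h                                 # equivalent side length = sqrt(2*pi*e), ~4.13

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

mass = norm_cdf(L / 2) - norm_cdf(-L / 2)  # probability inside [-L/2, L/2]
print(round(L, 3), mass)                   # about 4.13 and about 0.96
```

So for $n = 1$ the interval of length $2^h$ captures roughly 96% of the mass: most of it, but not all, consistent with the remark that the correspondence is only exact for large $n$.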
|
How to interpret differential entropy?
|
For the differential entropy there also exists another, more mathematical interpretation, which is closely related to the bit-interpretation for the entropy.
The differential entropy describes the equ
|
How to interpret differential entropy?
For the differential entropy there also exists another, more mathematical interpretation, which is closely related to the bit-interpretation for the entropy.
The differential entropy describes the equivalent side length (in logs) of the set that contains most of the probability of the distribution.
This is nicely illustrated and explained in Theorem 8.2.3 in Elements of Information Theory by Thomas M. Cover, Joy A. Thomas
Intuitive Explanation
In non-rigorous terms, this statement means the following:
Let's assume we have a multivariate probability distribution with entropy $h$.
Most of the probability mass of this distribution (apart from a negligible amount) is contained in some volume.
If we describe this volume by a hypercube with sides of equal length (= equivalent side lengths), then this side length is equal to $2^h$.
Intuitively this means, that if we have a low entropy, the probability mass of the distribution is confined to a small area.
Vice versa, high entropy tells us that the probability mass is spread widely across a large area.
Mathematical View
In actual notation, the theorem states the following
$(1 - \epsilon)\, 2^{n(h(X) - \epsilon)} \leq \text{Vol}(A_{\epsilon}^{(n)}) \leq 2^{n(h(X) + \epsilon)}$,
where $X$ is a random variable with the distribution of interest, $\epsilon$ is a real number, $A_{\epsilon}^{(n)}$ is the typical set, $h(X)$ is the differential entropy of $X$ and $n$ (required to be large) is the dimension of $X$.
This implies that "$A_{\epsilon}^{(n)}$ is the smallest volume set with probability $1-\epsilon$, to first order in the exponent." (Elements of Information Theory by Thomas M. Cover, Joy A. Thomas, Wiley, Second Edition, 2006)
Relation to entropy of discrete probability distributions
This interpretation of differential entropy is closely related to the entropy for discrete distributions.
Discrete Case: As OP stated, the entropy tells us how many bits are needed to encode a message given a probability distribution over words.
Continuous case: Here we are dealing with continuous support. For example, let's assume the support is on the real line $\mathbb{R}$. The differential entropy tells us, how long the interval on the real line has to be to capture almost all information contained in the probability distribution.
If we have a widely spread distribution -> the entropy will be high
If we have a sharp distribution, most probability mass will be in a small interval -> the entropy will be low.
Example with $N(0,1)$
The entropy of a standard normal distribution with $\sigma^2 = 1$ is
$\frac{1}{2}\text{ln}(2\pi \sigma^2) + \frac{1}{2} = \frac{1}{2}\text{ln}(2 \pi e)$ nats, which is $\frac{1}{2}\log_2(2 \pi e) \approx 2.05$ bits
We can visualize this with a small code example in Python:
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
ys = np.random.normal(size = 10000)
h = 0.5*np.log2(2*np.pi*np.exp(1))  # differential entropy of N(0,1) in bits
side_length = 2**h                  # equivalent side length, about 4.13
sns.kdeplot(ys, fill = True)
plt.vlines(x = side_length/2, ymin = 0, ymax = 0.4, color = 'red', linestyles = 'dashed')
plt.vlines(x = -side_length/2, ymin = 0, ymax = 0.4, color = 'red', linestyles = 'dashed')
This side length captures a large portion of the probability mass in this distribution:
The interval between the red lines is $2^h$. As in this case $n$ is only 1 (and the Theorem above requires $n$ to be large), we can clearly see that the entropy is not exactly the equivalent side length of the volume that captures almost all probability mass.
This graph also explains why for the Gaussian, the mean does not affect the differential entropy: No matter where I shift the distribution to - the equivalent side length will stay the same and is only influenced by the variance.
|
How to interpret differential entropy?
For the differential entropy there also exists another, more mathematical interpretation, which is closely related to the bit-interpretation for the entropy.
The differential entropy describes the equ
|
14,847
|
"Ties should not be present" in one-sample Kolmgorov-Smirnov test in R
|
You have two problems here:
The K-S test is for a continuous distribution and so MYDATA should not contain any ties (repeated values).
The theory underlying the K-S test does not let you estimate the parameters of the distribution from the data as you have done. The help for ks.test explains this.
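The second point can be demonstrated by simulation. Below is a Python analogue of the R situation (standard library only; my own sketch, not part of the original answer): when the mean and standard deviation are estimated from the same sample, the K-S statistic is systematically smaller than when the true parameters are used, so p-values taken from the standard K-S null distribution are too large.

```python
import math, random, statistics

def norm_cdf(x, mu=0.0, sd=1.0):
    # normal CDF via the error function
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

def ks_stat(sample, mu, sd):
    # one-sample Kolmogorov-Smirnov statistic against N(mu, sd)
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        F = norm_cdf(x, mu, sd)
        d = max(d, abs(F - i / n), abs((i + 1) / n - F))
    return d

random.seed(1)
d_true, d_est = [], []
for _ in range(200):
    s = [random.gauss(0, 1) for _ in range(50)]
    d_true.append(ks_stat(s, 0, 1))                 # true parameters
    d_est.append(ks_stat(s, statistics.mean(s),
                         statistics.stdev(s)))      # parameters fitted to the data
print(statistics.mean(d_true), statistics.mean(d_est))
```

The fitted-parameter statistic is clearly smaller on average, which is why a test calibrated for known parameters (or the Lilliefors correction) is needed.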
|
"Ties should not be present" in one-sample Kolmgorov-Smirnov test in R
|
You have two problems here:
The K-S test is for a continuous distribution and so MYDATA should not contain any ties (repeated values).
The theory underlying the K-S test does not let you estimate the
|
"Ties should not be present" in one-sample Kolmgorov-Smirnov test in R
You have two problems here:
The K-S test is for a continuous distribution and so MYDATA should not contain any ties (repeated values).
The theory underlying the K-S test does not let you estimate the parameters of the distribution from the data as you have done. The help for ks.test explains this.
|
"Ties should not be present" in one-sample Kolmgorov-Smirnov test in R
You have two problems here:
The K-S test is for a continuous distribution and so MYDATA should not contain any ties (repeated values).
The theory underlying the K-S test does not let you estimate the
|
14,848
|
"Ties should not be present" in one-sample Kolmgorov-Smirnov test in R
|
As explained by @mdewey, The K-S test is not suitable when estimating the parameters from the data.
You can use the following code, which relies on the Anderson-Darling test for normality, and does not require you to supply the mean and the standard deviation. This test is generally more powerful than the Lilliefors test.
install.packages("nortest")
library(nortest)
ad.test(MYDATA)
|
"Ties should not be present" in one-sample Kolmgorov-Smirnov test in R
|
As explained by @mdewey, The K-S test is not suitable when estimating the parameters from the data.
You can use the following code, which relies on the Anderson-Darling test for normality, and does no
|
"Ties should not be present" in one-sample Kolmgorov-Smirnov test in R
As explained by @mdewey, The K-S test is not suitable when estimating the parameters from the data.
You can use the following code, which relies on the Anderson-Darling test for normality, and does not require you to supply the mean and the stddev. This test is stronger in accuracy than the Lilliefors test.
install.packages("nortest")
library(nortest)
ad.test(MYDATA)
|
"Ties should not be present" in one-sample Kolmgorov-Smirnov test in R
As explained by @mdewey, The K-S test is not suitable when estimating the parameters from the data.
You can use the following code, which relies on the Anderson-Darling test for normality, and does no
|
14,849
|
Kullback-Leibler Divergence for two samples
|
The Kullback-Leibler divergence is defined as
$$
\DeclareMathOperator{\KL}{KL}
\KL(P || Q) = \int_{-\infty}^\infty p(x) \log \frac{p(x)}{q(x)} \; dx
$$
so to calculate (estimate) this from empirical data we would need, maybe, some estimates of the density functions $p(x), q(x)$. So a natural starting point could be via density estimation (and after that, just numerical integration). How good or stable such a method would be I don't know.
But first your second question, then I will return to the first one. Let's say $p$ and $q$ are uniform densities on $[0,1]$ and $[0,10]$ respectively. Then $\KL(p || q)=\log 10$ while $\KL(q || p)$ is more difficult to define, but the only reasonable value to give it is $\infty$, as far as I can see, since it involves integrating $\log(1/0)$ which we can choose to interpret as $\log \infty$. These results are reasonable from the interpretation I give in Intuition on the Kullback-Leibler (KL) Divergence
Returning to the main question. It is asked in a very nonparametric way, and no assumptions are stated on the densities. Probably some assumptions are needed. But assuming the two densities are proposed as competing models for the same phenomenon, we can probably assume they have the same dominating measure: KL divergence between a continuous and a discrete probability distribution would always be infinity, for example. One paper addressing this question is the following: https://pdfs.semanticscholar.org/1fbd/31b690e078ce938f73f14462fceadc2748bf.pdf They propose a method which does not need preliminary density estimation, and analyse its properties.
(There are many other papers). I will come back and post some details from that paper, the ideas.
EDIT
Some ideas from that paper, which is about estimation of KL divergence with iid samples from absolutely continuous distributions. I show their proposal for one-dimensional distributions, but they give a solution for vectors also (using nearest neighbor density estimation). For proofs read the paper!
They propose to use a version of the empirical distribution function, but interpolated linearly between sample points to get a continuous version. They define
$$
P_e(x) = \frac1{n}\sum_{i=1}^n U(x-x_i)
$$
where $U$ is the Heaviside step function, but defined so that $U(0)=0.5$. Then that function interpolated linearly (and extended horizontally beyond the range) is $P_c$ ($c$ for continuous). Then they propose to estimate the Kullback-Leibler divergence by
$$
\hat{D}(P \| Q) = \frac1{n}\sum_{i=1}^n \log\left(\frac{\delta P_c(x_i)}{\delta Q_c(x_i)}\right)
$$
where $\delta P_c = P_c(x_i)-P_c(x_i-\epsilon)$ and $\epsilon$ is a number smaller than the smallest spacing of the samples.
R code for the version of the empirical distribution function that we need is
my.ecdf <- function(x) {
x <- sort(x)
x.u <- unique(x)
n <- length(x)
x.rle <- rle(x)$lengths
y <- (cumsum(x.rle)-0.5) / n
FUN <- approxfun(x.u, y, method="linear", yleft=0, yright=1,
rule=2)
FUN
}
note that rle is used to take care of the case with duplicates in x.
Then the estimation of the KL divergence is given by
KL_est <- function(x, y) {
dx <- diff(sort(unique(x)))
dy <- diff(sort(unique(y)))
ex <- min(dx) ; ey <- min(dy)
e <- min(ex, ey)/2
n <- length(x)
P <- my.ecdf(x) ; Q <- my.ecdf(y)
KL <- sum( log( (P(x)-P(x-e))/(Q(x)-Q(x-e)))) / n
KL
}
Then I show a small simulation:
KL <- replicate(1000, {x <- rnorm(100)
y <- rt(100, df=5)
KL_est(x, y)})
hist(KL, prob=TRUE)
which gives the following histogram, showing (an estimation) of the sampling distribution of this estimator:
For comparison, we calculate the KL divergence in this example by numerical integration:
LR <- function(x) dnorm(x,log=TRUE)-dt(x,5,log=TRUE)
100*integrate(function(x) dnorm(x)*LR(x),lower=-Inf,upper=Inf)$value
[1] 3.337668
hmm ... the difference being large enough that there is much here to investigate!
|
Kullback-Leibler Divergence for two samples
|
The Kullback-Leibler divergence is defined as
$$
\DeclareMathOperator{\KL}{KL}
\KL(P || Q) = \int_{-\infty}^\infty p(x) \log \frac{p(x)}{q(x)} \; dx
$$
so to calculate (estimate) this from empir
|
Kullback-Leibler Divergence for two samples
The Kullback-Leibler divergence is defined as
$$
\DeclareMathOperator{\KL}{KL}
\KL(P || Q) = \int_{-\infty}^\infty p(x) \log \frac{p(x)}{q(x)} \; dx
$$
so to calculate (estimate) this from empirical data we would need, maybe, some estimates of the density functions $p(x), q(x)$. So a natural starting point could be via density estimation (and after that, just numerical integration). How good or stable such a method would be I don't know.
But first your second question, then I will return to the first one. Let's say $p$ and $q$ are uniform densities on $[0,1]$ and $[0,10]$ respectively. Then $\KL(p || q)=\log 10$ while $\KL(q || p)$ is more difficult to define, but the only reasonable value to give it is $\infty$, as far as I can see, since it involves integrating $\log(1/0)$ which we can choose to interpret as $\log \infty$. These results are reasonable from the interpretation I give in Intuition on the Kullback-Leibler (KL) Divergence
Returning to the main question. It is asked in a very nonparametric way, and no assumptions are stated on the densities. Probably some assumptions are needed. But assuming the two densities are proposed as competing models for the same phenomenon, we can probably assume they have the same dominating measure: KL divergence between a continuous and a discrete probability distribution would always be infinity, for example. One paper addressing this question is the following: https://pdfs.semanticscholar.org/1fbd/31b690e078ce938f73f14462fceadc2748bf.pdf They propose a method which does not need preliminary density estimation, and analyse its properties.
(There are many other papers). I will come back and post some details from that paper, the ideas.
EDIT
Some ideas from that paper, which is about estimation of KL divergence with iid samples from absolutely continuous distributions. I show their proposal for one-dimensional distributions, but they give a solution for vectors also (using nearest neighbor density estimation). For proofs read the paper!
They propose to use a version of the empirical distribution function, but interpolated linearly between sample points to get a continuous version. They define
$$
P_e(x) = \frac1{n}\sum_{i=1}^n U(x-x_i)
$$
where $U$ is the Heaviside step function, but defined so that $U(0)=0.5$. Then that function interpolated linearly (and extended horizontally beyond the range) is $P_c$ ($c$ for continuous). Then they propose to estimate the Kullback-Leibler divergence by
$$
\hat{D}(P \| Q) = \frac1{n}\sum_{i=1}^n \log\left(\frac{\delta P_c(x_i)}{\delta Q_c(x_i)}\right)
$$
where $\delta P_c = P_c(x_i)-P_c(x_i-\epsilon)$ and $\epsilon$ is a number smaller than the smallest spacing of the samples.
R code for the version of the empirical distribution function that we need is
my.ecdf <- function(x) {
x <- sort(x)
x.u <- unique(x)
n <- length(x)
x.rle <- rle(x)$lengths
y <- (cumsum(x.rle)-0.5) / n
FUN <- approxfun(x.u, y, method="linear", yleft=0, yright=1,
rule=2)
FUN
}
note that rle is used to take care of the case with duplicates in x.
Then the estimation of the KL divergence is given by
KL_est <- function(x, y) {
dx <- diff(sort(unique(x)))
dy <- diff(sort(unique(y)))
ex <- min(dx) ; ey <- min(dy)
e <- min(ex, ey)/2
n <- length(x)
P <- my.ecdf(x) ; Q <- my.ecdf(y)
KL <- sum( log( (P(x)-P(x-e))/(Q(x)-Q(x-e)))) / n
KL
}
Then I show a small simulation:
KL <- replicate(1000, {x <- rnorm(100)
y <- rt(100, df=5)
KL_est(x, y)})
hist(KL, prob=TRUE)
which gives the following histogram, showing (an estimation) of the sampling distribution of this estimator:
For comparison, we calculate the KL divergence in this example by numerical integration:
LR <- function(x) dnorm(x,log=TRUE)-dt(x,5,log=TRUE)
100*integrate(function(x) dnorm(x)*LR(x),lower=-Inf,upper=Inf)$value
[1] 3.337668
hmm ... the difference being large enough that there is much here to investigate!
|
Kullback-Leibler Divergence for two samples
The Kullback-Leibler divergence is defined as
$$
\DeclareMathOperator{\KL}{KL}
\KL(P || Q) = \int_{-\infty}^\infty p(x) \log \frac{p(x)}{q(x)} \; dx
$$
so to calculate (estimate) this from empir
|
14,850
|
Kullback-Leibler Divergence for two samples
|
Expanding a little bit on kjetil-b-halvorsen's answer, and sorry for not commenting, I don't have the reputation:
I have the feeling that the analytical computation should be (without multiplication by 100):
LR <- function(x) dnorm(x,log=TRUE)-dt(x,5,log=TRUE)
integrate(function(x) dnorm(x)*LR(x),lower=-Inf,upper=Inf)$value
If I'm right, the estimator $\hat D(P||Q)$ does not converge to the KL divergence, but the convergence is stated as: $\hat D(P||Q)-1 \to D(P||Q)$. The arrow represents almost-sure convergence.
Once those two corrections are made, the results seem more realistic.
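As an independent check, the analytical value (without the factor of 100) can be reproduced with a standard-library Python sketch (my own addition) that evaluates the same integral, $\KL(N(0,1) \,||\, t_5)$, by the trapezoidal rule:

```python
import math

def log_norm(x):
    # log density of N(0, 1)
    return -0.5 * math.log(2 * math.pi) - 0.5 * x * x

def log_t(x, df=5):
    # log density of Student's t with df degrees of freedom
    c = math.lgamma((df + 1) / 2) - math.lgamma(df / 2) - 0.5 * math.log(df * math.pi)
    return c - (df + 1) / 2 * math.log(1 + x * x / df)

# trapezoidal rule for KL(N(0,1) || t_5) = E_N[log N - log t_5]
a, b, m = -10.0, 10.0, 20000
step = (b - a) / m
kl = 0.0
for i in range(m + 1):
    x = a + i * step
    w = 0.5 if i in (0, m) else 1.0
    kl += w * math.exp(log_norm(x)) * (log_norm(x) - log_t(x)) * step
print(kl)
```

This agrees with the R integrate result of 3.337668 once it is divided by 100, i.e. about 0.0334.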
|
Kullback-Leibler Divergence for two samples
|
Expanding a little bit on kjetil-b-halvorsen's answer, and sorry for not commenting, I don't have the reputation:
I have the feeling that the analytical computation should be (without multiplication
|
Kullback-Leibler Divergence for two samples
Expanding a little bit on kjetil-b-halvorsen's answer, and sorry for not commenting, I don't have the reputation:
I have the feeling that the analytical computation should be (without multiplication by 100):
LR <- function(x) dnorm(x,log=TRUE)-dt(x,5,log=TRUE)
integrate(function(x) dnorm(x)*LR(x),lower=-Inf,upper=Inf)$value
If I'm right, the estimator $\hat D(P||Q)$ does not converge to the KL divergence, but the convergence is stated as: $\hat D(P||Q)-1 \to D(P||Q)$. The arrow represents almost-sure convergence.
Once those two corrections are made, the results seem more realistic.
|
Kullback-Leibler Divergence for two samples
Expanding a little bit on kjetil-b-halvorsen's answer, and sorry for not commenting, I don't have the reputation:
I have the feeling that the analytical computation should be (without multiplication
|
14,851
|
What is the difference between GINI and AUC curve interpretation?
|
The Gini Coefficient is the summary statistic of the Cumulative Accuracy Profile (CAP) chart. It is calculated as the ratio of the area enclosed between the CAP curve and the diagonal to the corresponding area for an ideal rating procedure.
Area Under Receiver Operating Characteristic curve (or AUROC for short) is the summary statistic of the ROC curve chart.
The direct conversion between Gini and AUROC is given by:
$$ Gini = 2\times AUROC - 1$$
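The identity can be verified on a toy example. Below is a minimal Python sketch (my own illustration) that computes AUROC via the Mann-Whitney pairwise-comparison identity and then the Gini coefficient from it:

```python
def auroc(scores, labels):
    # probability that a random positive outscores a random negative
    # (ties count one half) -- the Mann-Whitney U statistic scaled to [0, 1]
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]
labels = [1, 1, 0, 1, 0, 1, 0, 0]

auc = auroc(scores, labels)
gini = 2 * auc - 1
print(auc, gini)  # 0.8125 and 0.625
```

A perfect ranking gives AUROC = 1 and Gini = 1; a random ranking gives AUROC = 0.5 and Gini = 0.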
|
What is the difference between GINI and AUC curve interpretation?
|
The Gini Coefficient is the summary statistic of the Cumulative Accuracy Profile (CAP) chart. It is calculated as the quotient of the area which the CAP curve and diagonal enclose and the correspondin
|
What is the difference between GINI and AUC curve interpretation?
The Gini Coefficient is the summary statistic of the Cumulative Accuracy Profile (CAP) chart. It is calculated as the ratio of the area enclosed between the CAP curve and the diagonal to the corresponding area for an ideal rating procedure.
Area Under Receiver Operating Characteristic curve (or AUROC for short) is the summary statistic of the ROC curve chart.
The direct conversion between Gini and AUROC is given by:
$$ Gini = 2\times AUROC - 1$$
|
What is the difference between GINI and AUC curve interpretation?
The Gini Coefficient is the summary statistic of the Cumulative Accuracy Profile (CAP) chart. It is calculated as the quotient of the area which the CAP curve and diagonal enclose and the correspondin
|
14,852
|
Is using correlation matrix to select predictors for regression correct?
|
If, for some reason, you are going to include only one variable in your model, then selecting the predictor which has the highest correlation (in absolute value) with $y$ has several advantages. Out of the possible regression models with only one predictor, this model is the one with the largest standardized regression coefficient and also (since $R^2$ is the square of $r$ in a simple linear regression) the highest coefficient of determination.
But it's not clear why you would want to restrict your regression model to one predictor if you have data available for several. As mentioned in the comments, just looking at the correlations doesn't work if your model might include several variables. For example, from this scatter matrix, you might think that the predictors for $y$ you should include in your model are $x_1$ (correlation 0.824) and $x_2$ (correlation 0.782) but that $x_3$ (correlation 0.134) is not a useful predictor.
But you'd be wrong - in fact in this example, $y$ depends on two independent variables, $x_1$ and $x_3$, but not directly on $x_2$. However $x_2$ is highly correlated with $x_1$, which leads to a correlation with $y$ also. Looking at the correlation between $y$ and $x_2$ in isolation, this might suggest $x_2$ is a good predictor of $y$. But once the effects of $x_1$ are partialled out by including $x_1$ in the model, no such relationship remains.
require(MASS) #for mvrnorm
set.seed(42) #so reproduces same result
Sigma <- matrix(c(1,0.95,0,0.95,1,0,0,0,1),3,3)
N <- 1e4
x <- mvrnorm(n=N, c(0,0,0), Sigma, empirical=TRUE)
data.df <- data.frame(x1=x[,1], x2=x[,2], x3=x[,3])
# y depends on x1 strongly and x3 weakly, but not directly on x2
data.df$y <- with(data.df, 5 + 3*x1 + 0.5*x3) + rnorm(N, sd=2)
round(cor(data.df), 3)
# x1 x2 x3 y
# x1 1.000 0.950 0.000 0.824
# x2 0.950 1.000 0.000 0.782
# x3 0.000 0.000 1.000 0.134
# y 0.824 0.782 0.134 1.000
# Note: x1 and x2 are highly correlated
# Since y is highly correlated with x1, it is with x2 too
# y depended only weakly on x3, their correlation is much lower
pairs(~y+x1+x2+x3,data=data.df, main="Scatterplot matrix")
# produces scatter plot above
model.lm <- lm(data=data.df, y ~ x1 + x2 + x3)
summary(model.lm)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 4.99599 0.02018 247.631 <2e-16 ***
# x1 3.03724 0.06462 47.005 <2e-16 ***
# x2 -0.02436 0.06462 -0.377 0.706
# x3 0.49185 0.02018 24.378 <2e-16 ***
This sample size is sufficiently large to overcome multicollinearity issues in the estimation of coefficients for $x_1$ and $x_2$. The coefficient of $x_2$ is estimated near zero, and with non-significant p-value. The true coefficient is zero. The intercept and the slopes for $x_1$ and $x_3$ are estimated near their true values of 5, 3 and 0.5 respectively. Note that $x_3$ is correctly found to be a significant predictor, even though this is less than obvious from the scatter matrix.
And here is an example which is even worse:
Sigma <- matrix(c(1,0,0,0.5,0,1,0,0.5,0,0,1,0.5,0.5,0.5,0.5,1),4,4)
N <- 1e4
x <- mvrnorm(n=N, c(0,0,0,0), Sigma, empirical=TRUE)
data.df <- data.frame(x1=x[,1], x2=x[,2], x3=x[,3], x4=x[,4])
# y depends on x1, x2 and x3 but not directly on x4
data.df$y <- with(data.df, 5 + x1 + x2 + x3) + rnorm(N, sd=2)
round(cor(data.df), 3)
# x1 x2 x3 x4 y
# x1 1.000 0.000 0.000 0.500 0.387
# x2 0.000 1.000 0.000 0.500 0.391
# x3 0.000 0.000 1.000 0.500 0.378
# x4 0.500 0.500 0.500 1.000 0.583
# y 0.387 0.391 0.378 0.583 1.000
pairs(~y+x1+x2+x3+x4,data=data.df, main="Scatterplot matrix")
model.lm <- lm(data=data.df, y ~ x1 + x2 + x3 +x4)
summary(model.lm)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 4.98117 0.01979 251.682 <2e-16 ***
# x1 0.99874 0.02799 35.681 <2e-16 ***
# x2 1.00812 0.02799 36.016 <2e-16 ***
# x3 0.97302 0.02799 34.762 <2e-16 ***
# x4 0.06002 0.03958 1.516 0.129
Here $y$ depends on the (uncorrelated) predictors $x_1$, $x_2$ and $x_3$ - in fact the true regression slope is one for each. It does not depend on a fourth variable, $x_4$, but because of the way that variable is correlated with each of $x_1$, $x_2$ and $x_3$, it would be $x_4$ that stands out in the scatterplot and correlation matrices (its correlation with $y$ is 0.583, while the others are below 0.4). So selecting the variable with the highest correlation with $y$ can actually find the variable that does not belong in the model at all.
|
Is using correlation matrix to select predictors for regression correct?
|
If, for some reason, you are going to include only one variable in your model, then selecting the predictor which has the highest correlation with $y$ has several advantages. Out of the possible regre
|
Is using correlation matrix to select predictors for regression correct?
If, for some reason, you are going to include only one variable in your model, then selecting the predictor which has the highest correlation with $y$ has several advantages. Out of the possible regression models with only one predictor, this model is the one with the highest standardized regression coefficient and also (since $R^2$ is the square of $r$ in a simple linear regression) the highest coefficient of determination.
But it's not clear why you would want to restrict your regression model to one predictor if you have data available for several. As mentioned in the comments, just looking at the correlations doesn't work if your model might include several variables. For example, from this scatter matrix, you might think that the predictors for $y$ you should include in your model are $x_1$ (correlation 0.824) and $x_2$ (correlation 0.782) but that $x_3$ (correlation 0.134) is not a useful predictor.
But you'd be wrong - in fact in this example, $y$ depends on two independent variables, $x_1$ and $x_3$, but not directly on $x_2$. However $x_2$ is highly correlated with $x_1$, which leads to a correlation with $y$ also. Looking at the correlation between $y$ and $x_2$ in isolation, this might suggest $x_2$ is a good predictor of $y$. But once the effects of $x_1$ are partialled out by including $x_1$ in the model, no such relationship remains.
require(MASS) #for mvrnorm
set.seed(42) #so reproduces same result
Sigma <- matrix(c(1,0.95,0,0.95,1,0,0,0,1),3,3)
N <- 1e4
x <- mvrnorm(n=N, c(0,0,0), Sigma, empirical=TRUE)
data.df <- data.frame(x1=x[,1], x2=x[,2], x3=x[,3])
# y depends on x1 strongly and x3 weakly, but not directly on x2
data.df$y <- with(data.df, 5 + 3*x1 + 0.5*x3) + rnorm(N, sd=2)
round(cor(data.df), 3)
# x1 x2 x3 y
# x1 1.000 0.950 0.000 0.824
# x2 0.950 1.000 0.000 0.782
# x3 0.000 0.000 1.000 0.134
# y 0.824 0.782 0.134 1.000
# Note: x1 and x2 are highly correlated
# Since y is highly correlated with x1, it is with x2 too
# y depended only weakly on x3, their correlation is much lower
pairs(~y+x1+x2+x3,data=data.df, main="Scatterplot matrix")
# produces scatter plot above
model.lm <- lm(data=data.df, y ~ x1 + x2 + x3)
summary(model.lm)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 4.99599 0.02018 247.631 <2e-16 ***
# x1 3.03724 0.06462 47.005 <2e-16 ***
# x2 -0.02436 0.06462 -0.377 0.706
# x3 0.49185 0.02018 24.378 <2e-16 ***
This sample size is sufficiently large to overcome multicollinearity issues in the estimation of coefficients for $x_1$ and $x_2$. The coefficient of $x_2$ is estimated near zero, and with non-significant p-value. The true coefficient is zero. The intercept and the slopes for $x_1$ and $x_3$ are estimated near their true values of 5, 3 and 0.5 respectively. Note that $x_3$ is correctly found to be a significant predictor, even though this is less than obvious from the scatter matrix.
And here is an example which is even worse:
Sigma <- matrix(c(1,0,0,0.5,0,1,0,0.5,0,0,1,0.5,0.5,0.5,0.5,1),4,4)
N <- 1e4
x <- mvrnorm(n=N, c(0,0,0,0), Sigma, empirical=TRUE)
data.df <- data.frame(x1=x[,1], x2=x[,2], x3=x[,3], x4=x[,4])
# y depends on x1, x2 and x3 but not directly on x4
data.df$y <- with(data.df, 5 + x1 + x2 + x3) + rnorm(N, sd=2)
round(cor(data.df), 3)
# x1 x2 x3 x4 y
# x1 1.000 0.000 0.000 0.500 0.387
# x2 0.000 1.000 0.000 0.500 0.391
# x3 0.000 0.000 1.000 0.500 0.378
# x4 0.500 0.500 0.500 1.000 0.583
# y 0.387 0.391 0.378 0.583 1.000
pairs(~y+x1+x2+x3+x4,data=data.df, main="Scatterplot matrix")
model.lm <- lm(data=data.df, y ~ x1 + x2 + x3 +x4)
summary(model.lm)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 4.98117 0.01979 251.682 <2e-16 ***
# x1 0.99874 0.02799 35.681 <2e-16 ***
# x2 1.00812 0.02799 36.016 <2e-16 ***
# x3 0.97302 0.02799 34.762 <2e-16 ***
# x4 0.06002 0.03958 1.516 0.129
Here $y$ depends on the (uncorrelated) predictors $x_1$, $x_2$ and $x_3$ - in fact the true regression slope is one for each. It does not depend on a fourth variable, $x_4$, but because of the way that variable is correlated with each of $x_1$, $x_2$ and $x_3$, it would be $x_4$ that stands out in the scatterplot and correlation matrices (its correlation with $y$ is 0.583, while the others are below 0.4). So selecting the variable with the highest correlation with $y$ can actually find the variable that does not belong in the model at all.
|
Is using correlation matrix to select predictors for regression correct?
If, for some reason, you are going to include only one variable in your model, then selecting the predictor which has the highest correlation with $y$ has several advantages. Out of the possible regre
|
14,853
|
Is using correlation matrix to select predictors for regression correct?
|
You could run a step-wise regression analysis and let the software choose the variables based on F values. You could also look at the adjusted R^2 value each time you run the regression, to see whether any new variable is contributing to your model. Your model may have the problem of multicollinearity if you just go by the correlation matrix and choose variables with strong correlation. Hope this helps!
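Adjusted R^2 penalizes plain R^2 for the number of predictors, so it only increases when a new variable pulls its weight. A small sketch of the formula (the numbers below are invented for illustration):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# a fourth predictor nudges R^2 from 0.800 to 0.805 on 30 observations,
# but the adjusted value drops, so the extra variable is not worth keeping
before = adjusted_r2(0.800, n=30, p=3)
after = adjusted_r2(0.805, n=30, p=4)
assert after < before
```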
|
Is using correlation matrix to select predictors for regression correct?
|
You could run a step-wise regression analysis and let the software choose the variables based on F values. You could also look at Adjusted R^2 value when you run the regression each time, to see if ad
|
Is using correlation matrix to select predictors for regression correct?
You could run a step-wise regression analysis and let the software choose the variables based on F values. You could also look at the adjusted R^2 value each time you run the regression, to see whether any new variable is contributing to your model. Your model may have the problem of multicollinearity if you just go by the correlation matrix and choose variables with strong correlation. Hope this helps!
|
Is using correlation matrix to select predictors for regression correct?
You could run a step-wise regression analysis and let the software choose the variables based on F values. You could also look at Adjusted R^2 value when you run the regression each time, to see if ad
|
14,854
|
Is using correlation matrix to select predictors for regression correct?
|
There's nothing wrong with this method, particularly if you know about multicollinearity. Avoiding multicollinearity is very easy.
Simply steer clear of adding independent variables that correlate with one another, since only one of said variables is necessary. If x1 and x2 both correlate with y and correlate with each other, use reasonable judgement to assess which is higher in the causal chain, and omit the other. A strong theoretical framework can help with such a selection process.
That is, a correlation matrix, or even better a scatterplot matrix, can work if you know what to look for.
|
Is using correlation matrix to select predictors for regression correct?
|
There's nothing wrong with this method, particularly if you know about multicollinearity. Avoiding multicollinearity is very easy.
Simply steer clear of adding independent variables that correlate with
|
Is using correlation matrix to select predictors for regression correct?
There's nothing wrong with this method, particularly if you know about multicollinearity. Avoiding multicollinearity is very easy.
Simply steer clear of adding independent variables that correlate with one another, since only one of said variables is necessary. If x1 and x2 both correlate with y and correlate with each other, use reasonable judgement to assess which is higher in the causal chain, and omit the other. A strong theoretical framework can help with such a selection process.
That is, a correlation matrix, or even better a scatterplot matrix, can work if you know what to look for.
|
Is using correlation matrix to select predictors for regression correct?
There's nothing wrong with this method, particularly if you know about multicollinearity. Avoiding multicollinearity is very easy.
Simply steer clear of adding independent variables that correlate with
|
14,855
|
A routine to choose eps and minPts for DBSCAN
|
There are plenty of publications that propose methods to choose these parameters.
The most notable is OPTICS, a DBSCAN variation that does away with the epsilon parameter; it produces a hierarchical result that can roughly be seen as "running DBSCAN with every possible epsilon".
For minPts, I suggest relying not on an automatic method but on your domain knowledge.
A good clustering algorithm has parameters that allow you to customize it to your needs.
A parameter that you overlooked is the distance function. The first thing to do for DBSCAN is to find a good distance function for your application. Do not rely on Euclidean distance being the best for every application!
|
A routine to choose eps and minPts for DBSCAN
|
There are plenty of publications that propose methods to choose these parameters.
The most notable is OPTICS, a DBSCAN variation that does away with the epsilon parameter; it produces a hierarchical r
|
A routine to choose eps and minPts for DBSCAN
There are plenty of publications that propose methods to choose these parameters.
The most notable is OPTICS, a DBSCAN variation that does away with the epsilon parameter; it produces a hierarchical result that can roughly be seen as "running DBSCAN with every possible epsilon".
For minPts, I suggest relying not on an automatic method but on your domain knowledge.
A good clustering algorithm has parameters that allow you to customize it to your needs.
A parameter that you overlooked is the distance function. The first thing to do for DBSCAN is to find a good distance function for your application. Do not rely on Euclidean distance being the best for every application!
|
A routine to choose eps and minPts for DBSCAN
There are plenty of publications that propose methods to choose these parameters.
The most notable is OPTICS, a DBSCAN variation that does away with the epsilon parameter; it produces a hierarchical r
|
14,856
|
A routine to choose eps and minPts for DBSCAN
|
minPts is selected based on domain knowledge. If you do not have domain understanding, a rule of thumb is to derive minPts from the number of dimensions D in the data set: minPts >= D + 1. For 2D data, take minPts = 4. For larger data sets with much noise, it is suggested to go with minPts = 2 * D.
Once you have the appropriate minPts, in order to determine the optimal eps, follow these steps -
Let's say minPts = 24
For every point in the dataset, compute the distance to its 24th nearest neighbor (generally Euclidean distance is used, but you can experiment with different distance metrics).
Sort the distances in increasing order.
Plot the distances on the Y-axis against the index of the data points on the X-axis.
Observe the sudden increase, popularly called an 'elbow' or 'knee', in the plot.
Select the distance value that corresponds to the 'elbow' as the optimal eps.
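Steps 1-3 can be sketched in plain Python (brute force, purely for illustration; in practice you would use a spatial index such as scikit-learn's NearestNeighbors):

```python
import math

def k_distances(points, k):
    """Distance from each point to its k-th nearest neighbour,
    sorted in increasing order (brute force, O(n^2))."""
    out = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        out.append(d[k - 1])
    return sorted(out)

# two tight clusters plus one outlier: the outlier produces the
# jump (the "elbow") at the end of the sorted k-distance curve
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
dists = k_distances(pts, k=2)
```

Plotting dists (index on the X-axis, distance on the Y-axis) gives the curve described above, and the eps candidate is the value just before the jump.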
|
A routine to choose eps and minPts for DBSCAN
|
minPts is selected based on the domain knowledge. If you do not have domain understanding, a rule of thumb is to derive minPts from the number of dimensions D in the data set. minPts >= D + 1. For 2D
|
A routine to choose eps and minPts for DBSCAN
minPts is selected based on domain knowledge. If you do not have domain understanding, a rule of thumb is to derive minPts from the number of dimensions D in the data set: minPts >= D + 1. For 2D data, take minPts = 4. For larger data sets with much noise, it is suggested to go with minPts = 2 * D.
Once you have the appropriate minPts, in order to determine the optimal eps, follow these steps -
Let's say minPts = 24
For every point in the dataset, compute the distance to its 24th nearest neighbor (generally Euclidean distance is used, but you can experiment with different distance metrics).
Sort the distances in increasing order.
Plot the distances on the Y-axis against the index of the data points on the X-axis.
Observe the sudden increase, popularly called an 'elbow' or 'knee', in the plot.
Select the distance value that corresponds to the 'elbow' as the optimal eps.
|
A routine to choose eps and minPts for DBSCAN
minPts is selected based on the domain knowledge. If you do not have domain understanding, a rule of thumb is to derive minPts from the number of dimensions D in the data set. minPts >= D + 1. For 2D
|
14,857
|
A routine to choose eps and minPts for DBSCAN
|
Maybe a bit late, but I would like to add an answer here for future knowledge.
One way to find the best $\epsilon$ for DBSCAN is to compute the knn, then sort the distances and see where the "knee" is located.
An example in Python, since that is the language I work in:
from sklearn.neighbors import NearestNeighbors
import plotly.express as px
neighbors = 6
# X_embedded is your data
nbrs = NearestNeighbors(n_neighbors=neighbors).fit(X_embedded)
distances, indices = nbrs.kneighbors(X_embedded)
# k-th nearest neighbour distance for each point, sorted in descending order
distance_desc = sorted(distances[:, neighbors - 1], reverse=True)
px.line(x=list(range(1, len(distance_desc) + 1)), y=distance_desc)
Then, to find the "knee", you can use another package:
(pip install kneed)
from kneed import KneeLocator
kneedle = KneeLocator(range(1, len(distance_desc) + 1),  # x values
                      distance_desc,           # y values
                      S=1.0,                   # parameter suggested from paper
                      curve="convex",          # parameter from figure
                      direction="decreasing")  # parameter from figure
To see where the "knee" is, you can run
kneedle.plot_knee_normalized()
The attributes kneedle.elbow and kneedle.knee return the knee's position on the x axis, and kneedle.knee_y returns the optimum value for $\epsilon$.
|
A routine to choose eps and minPts for DBSCAN
|
Maybe a bit late, but I would like to add an answer here for future knowledge.
One way to find the best $\epsilon$ for DBSCAN is to compute the knn, then sort the distances and see where the "knee" is
|
A routine to choose eps and minPts for DBSCAN
Maybe a bit late, but I would like to add an answer here for future knowledge.
One way to find the best $\epsilon$ for DBSCAN is to compute the knn, then sort the distances and see where the "knee" is located.
An example in Python, since that is the language I work in:
from sklearn.neighbors import NearestNeighbors
import plotly.express as px
neighbors = 6
# X_embedded is your data
nbrs = NearestNeighbors(n_neighbors=neighbors).fit(X_embedded)
distances, indices = nbrs.kneighbors(X_embedded)
# k-th nearest neighbour distance for each point, sorted in descending order
distance_desc = sorted(distances[:, neighbors - 1], reverse=True)
px.line(x=list(range(1, len(distance_desc) + 1)), y=distance_desc)
Then, to find the "knee", you can use another package:
(pip install kneed)
from kneed import KneeLocator
kneedle = KneeLocator(range(1, len(distance_desc) + 1),  # x values
                      distance_desc,           # y values
                      S=1.0,                   # parameter suggested from paper
                      curve="convex",          # parameter from figure
                      direction="decreasing")  # parameter from figure
To see where the "knee" is, you can run
kneedle.plot_knee_normalized()
The attributes kneedle.elbow and kneedle.knee return the knee's position on the x axis, and kneedle.knee_y returns the optimum value for $\epsilon$.
|
A routine to choose eps and minPts for DBSCAN
Maybe a bit late, but I would like to add an answer here for future knowledge.
One way to find the best $\epsilon$ for DBSCAN is to compute the knn, then sort the distances and see where the "knee" is
|
14,858
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
|
From the help page for fisher.test():
Note that the conditional Maximum Likelihood Estimate (MLE) rather
than the unconditional MLE (the sample odds ratio) is used.
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
|
From the help page for fisher.test():
Note that the conditional Maximum Likelihood Estimate (MLE) rather
than the unconditional MLE (the sample odds ratio) is used.
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
From the help page for fisher.test():
Note that the conditional Maximum Likelihood Estimate (MLE) rather
than the unconditional MLE (the sample odds ratio) is used.
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
From the help page for fisher.test():
Note that the conditional Maximum Likelihood Estimate (MLE) rather
than the unconditional MLE (the sample odds ratio) is used.
|
14,859
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
|
To add to the discussion here, it is useful to ask what exactly is conditioned on in this "conditional" likelihood. The Fisher test differs from other categorical analyses in that it considers all margins of the table to be fixed whereas the logistic regression model (and corresponding Pearson chi-square test which is the score test of the logistic model) only consider one margin to be fixed.
The Fisher test then considers the hypergeometric distribution as a probability model for the counts observed in each of the 4 cells. Maximizing this conditional (noncentral hypergeometric) likelihood generally yields a different OR estimate than maximizing the unconditional likelihood, which gives the sample odds ratio.
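To make this concrete, here is a hedged pure-Python sketch (the function name and bisection bracket are my own choices): it finds the $\psi$ at which the mean of the noncentral hypergeometric distribution equals the observed count, which (by setting the derivative of the conditional log-likelihood to zero) is the conditional MLE that fisher.test() reports.

```python
from math import comb

def cond_mle_or(a, b, c, d):
    """Conditional MLE of the odds ratio for the 2x2 table [[a, b], [c, d]],
    i.e. the psi at which the mean of Fisher's noncentral hypergeometric
    distribution (all margins fixed) equals the observed count a."""
    m, n, k = a + c, b + d, a + b          # column sums and first row sum
    lo, hi = max(0, k - n), min(k, m)      # support of the distribution
    support = range(lo, hi + 1)
    w = [comb(m, i) * comb(n, k - i) for i in support]

    def mean(psi):                         # E[count] under odds ratio psi
        probs = [wi * psi ** i for wi, i in zip(w, support)]
        return sum(p * i for p, i in zip(probs, support)) / sum(probs)

    if a == lo:
        return 0.0
    if a == hi:
        return float("inf")
    p_lo, p_hi = 1e-8, 1e8                 # bracket; bisect on a log scale
    for _ in range(200):
        mid = (p_lo * p_hi) ** 0.5
        if mean(mid) < a:                  # mean is increasing in psi
            p_lo = mid
        else:
            p_hi = mid
    return (p_lo * p_hi) ** 0.5

# table matrix(c(3, 6, 5, 6), nrow=2), i.e. rows (3, 5) and (6, 6):
# the sample OR is 3*6/(5*6) = 0.6; the conditional MLE differs
print(cond_mle_or(3, 5, 6, 6))
```

For that table this returns approximately 0.6156, matching the 0.6155891 reported by fisher.test() rather than the sample odds ratio 0.6.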
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
|
To add to the discussion here, it is useful to ask what exactly is conditioned on in this "conditional" likelihood. The Fisher test differs from other categorical analyses in that it considers all mar
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
To add to the discussion here, it is useful to ask what exactly is conditioned on in this "conditional" likelihood. The Fisher test differs from other categorical analyses in that it considers all margins of the table to be fixed whereas the logistic regression model (and corresponding Pearson chi-square test which is the score test of the logistic model) only consider one margin to be fixed.
The Fisher test then considers the hypergeometric distribution as a probability model for the counts observed in each of the 4 cells. Maximizing this conditional (noncentral hypergeometric) likelihood generally yields a different OR estimate than maximizing the unconditional likelihood, which gives the sample odds ratio.
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
To add to the discussion here, it is useful to ask what exactly is conditioned on in this "conditional" likelihood. The Fisher test differs from other categorical analyses in that it considers all mar
|
14,860
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
|
To answer your second question, biostats isn't my forte but I believe the reason for multiple odds ratio statistics is to account for sampling design and design of experiments.
I've found three references here that will give you a bit of understanding as to why there is a difference between the conditional MLE and the unconditional one for the odds ratio, as well as other types.
Point and interval estimation of the common odds ratio in the combination of 2 × 2 tables with fixed marginals
The Effect of Bias on Estimators of Relative Risk for Pair-Matched and Stratified Samples
A Comparative Study of Conditional Maximum Likelihood Estimation of a Common Odds Ratio
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
|
To answer your second question, biostats isn't my forte but I believe the reason for multiple odds ratio statistics is to account for sampling design and design of experiments.
I've found three refer
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
To answer your second question, biostats isn't my forte but I believe the reason for multiple odds ratio statistics is to account for sampling design and design of experiments.
I've found three references here that will give you a bit of understanding as to why there is a difference between the conditional MLE and the unconditional one for the odds ratio, as well as other types.
Point and interval estimation of the common odds ratio in the combination of 2 × 2 tables with fixed marginals
The Effect of Bias on Estimators of Relative Risk for Pair-Matched and Stratified Samples
A Comparative Study of Conditional Maximum Likelihood Estimation of a Common Odds Ratio
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
To answer your second question, biostats isn't my forte but I believe the reason for multiple odds ratio statistics is to account for sampling design and design of experiments.
I've found three refer
|
14,861
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
|
I am stuck with the same problem. I searched and searched on Stack Exchange and Google, and I did not find anything explicitly explaining how the odds ratio in fisher.test() is calculated. This is not a complete answer, but it provides something useful to the discussion.
One thing is clear, fisher.test() calculates a completely different odds ratio with a different (and more complicated) method.
If you write fisher.test in R Console and press Enter, the output will be the complete code of the function fisher.test(). The interesting part concerning our doubt is:
m <- sum(x[, 1L])
n <- sum(x[, 2L])
k <- sum(x[1L, ])
x <- x[1L, 1L]
lo <- max(0L, k - n)
hi <- min(k, m)
support <- lo:hi #Interval of definition of Hypergeometric Distribution
logdc <- dhyper(support, m, n, k, log = TRUE) #log of Hypergeometric Probability Function
dnhyper <- function(ncp) {
d <- logdc + log(ncp) * support
d <- exp(d - max(d))
d/sum(d)
}
mnhyper <- function(ncp) {
if (ncp == 0)
return(lo)
if (ncp == Inf)
return(hi)
sum(support * dnhyper(ncp))
}
mle <- function(x) {
if (x == lo)
return(0)
if (x == hi)
return(Inf)
mu <- mnhyper(1)
if (mu > x)
uniroot(function(t) mnhyper(t) - x, c(0, 1))$root
else if (mu < x)
1/uniroot(function(t) mnhyper(1/t) - x, c(.Machine$double.eps,
1))$root
else 1
}
Well, if you
Define x = matrix(c(3, 6, 5, 6), nrow=2)
Run every line of the code extracted from fisher.test in order
Run mle(x)
You get 0.6155891. So this is how the odds ratio in fisher.test is calculated. However, I do not understand the algorithm. I have not found any article explaining it, and it does not seem possible to translate it into a "simple" mathematical formula.
EDIT:
The odds ratio estimate obtained by fisher.test is the conditional maximum-likelihood estimate (CMLE) of the odds ratio.
Given a table like this (where $C_1$ is the sum of the first column, and so on):
\begin{array}{ll|l}
a_{11} & a_{12} & R_1 \\
a_{21} & a_{22} & R_2 \\ \hline
C_1 & C_2 & n
\end{array}
The CMLE odds ratio is defined as
$$ \text{arg } \max_{\psi >0} \frac{\binom{C_1}{a_{11}}\binom{C_2}{a_{12}}\psi^{a_{11}}}{\sum_{k = \max(0,\, R_1 - C_2)}^{\min(R_1,\, C_1)} \binom{C_1}{k}\binom{C_2}{R_1-k}\psi^k }$$
There is no explicit formula for the CMLE odds ratio, so one has to use iterative methods to find the argument of the maximum of this function. I suppose that is what fisher.test() does, but I don't recognize the algorithm. So I programmed one myself:
OR <- function(x) {
C1 <- sum(x[, 1])
C2 <- sum(x[, 2])
R1 <- sum(x[1, ])
a11 <- x[1, 1]
a12 <- x[1, 2]
fobjetivo <- function(psi) {
lo <- max(0, R1 - C2)
hi <- min(R1, C1)
sumandosDenom <- numeric(hi - lo + 1)
for (k in lo:hi) {
# k can be 0, so shift the index by lo + 1: R vectors are 1-based
sumandosDenom[k - lo + 1] <- choose(C1, k) * choose(C2, R1 - k) * psi^k
}
salida <- ( choose(C1, a11) * choose(C2, a12) * psi^a11 ) / sum( sumandosDenom )
return(salida)
}
as.numeric(optimize(fobjetivo, interval = c(0.01, 100), maximum = T, tol = 0.00001)$maximum)
}
x = matrix(c(3, 6, 5, 6), nrow=2)
OR(x)
My first attempt allocated numeric(n) and wrote to sumandosDenom[k], which silently drops the k = 0 term (R ignores assignment to index 0) and returned 0.6052843. With the index shifted as above, the maximizer agrees with the 0.6155891 from fisher.test() up to the optimizer's tolerance. That is expected: setting the derivative of the conditional log-likelihood to zero gives exactly the equation that fisher.test's mle() solves with uniroot, namely that the mean of the noncentral hypergeometric distribution equals the observed count $a_{11}$.
Reference: Kenneth J. Rothman, Sander Greenland and Timothy L. Lash (2008): Modern Epidemiology, 3rd Edition, Lippincott-Raven Publishers, p. 257
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
|
I am stuck with the same problem. I searched and searched in stackExchange and Google, and I did not find anything explicitly explaining how the odds ratio in fisher.test() is calculated. This is not
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
I am stuck with the same problem. I searched and searched on Stack Exchange and Google, and I did not find anything explicitly explaining how the odds ratio in fisher.test() is calculated. This is not a complete answer, but it provides something useful to the discussion.
One thing is clear, fisher.test() calculates a completely different odds ratio with a different (and more complicated) method.
If you write fisher.test in R Console and press Enter, the output will be the complete code of the function fisher.test(). The interesting part concerning our doubt is:
m <- sum(x[, 1L])
n <- sum(x[, 2L])
k <- sum(x[1L, ])
x <- x[1L, 1L]
lo <- max(0L, k - n)
hi <- min(k, m)
support <- lo:hi #Interval of definition of Hypergeometric Distribution
logdc <- dhyper(support, m, n, k, log = TRUE) #log of Hypergeometric Probability Function
dnhyper <- function(ncp) {
d <- logdc + log(ncp) * support
d <- exp(d - max(d))
d/sum(d)
}
mnhyper <- function(ncp) {
if (ncp == 0)
return(lo)
if (ncp == Inf)
return(hi)
sum(support * dnhyper(ncp))
}
mle <- function(x) {
if (x == lo)
return(0)
if (x == hi)
return(Inf)
mu <- mnhyper(1)
if (mu > x)
uniroot(function(t) mnhyper(t) - x, c(0, 1))$root
else if (mu < x)
1/uniroot(function(t) mnhyper(1/t) - x, c(.Machine$double.eps,
1))$root
else 1
}
Well, if you
Define x = matrix(c(3, 6, 5, 6), nrow=2)
Run every line of the code extracted from fisher.test in order
Run mle(x)
You get 0.6155891. So this is how the odds ratio in fisher.test is calculated. However, I do not understand the algorithm. I have not found any article explaining it, and it does not seem possible to translate it into a "simple" mathematical formula.
EDIT:
The odds ratio estimate obtained by fisher.test is the conditional maximum-likelihood estimate (CMLE) of the odds ratio.
Given a table like this (where $C_1$ is the sum of the first column, and so on):
\begin{array}{ll|l}
a_{11} & a_{12} & R_1 \\
a_{21} & a_{22} & R_2 \\ \hline
C_1 & C_2 & n
\end{array}
The CMLE odds ratio is defined as
$$ \text{arg } \max_{\psi >0} \frac{\binom{C_1}{a_{11}}\binom{C_2}{a_{12}}\psi^{a_{11}}}{\sum_{k = \max(0,\, R_1 - C_2)}^{\min(R_1,\, C_1)} \binom{C_1}{k}\binom{C_2}{R_1-k}\psi^k }$$
There is no explicit formula for the CMLE odds ratio, so one has to use iterative methods to find the argument of the maximum of this function. I suppose that is what fisher.test() does, but I don't recognize the algorithm. So I programmed one myself:
OR <- function(x) {
C1 <- sum(x[, 1])
C2 <- sum(x[, 2])
R1 <- sum(x[1, ])
a11 <- x[1, 1]
a12 <- x[1, 2]
fobjetivo <- function(psi) {
lo <- max(0, R1 - C2)
hi <- min(R1, C1)
sumandosDenom <- numeric(hi - lo + 1)
for (k in lo:hi) {
# k can be 0, so shift the index by lo + 1: R vectors are 1-based
sumandosDenom[k - lo + 1] <- choose(C1, k) * choose(C2, R1 - k) * psi^k
}
salida <- ( choose(C1, a11) * choose(C2, a12) * psi^a11 ) / sum( sumandosDenom )
return(salida)
}
as.numeric(optimize(fobjetivo, interval = c(0.01, 100), maximum = T, tol = 0.00001)$maximum)
}
x = matrix(c(3, 6, 5, 6), nrow=2)
OR(x)
My first attempt allocated numeric(n) and wrote to sumandosDenom[k], which silently drops the k = 0 term (R ignores assignment to index 0) and returned 0.6052843. With the index shifted as above, the maximizer agrees with the 0.6155891 from fisher.test() up to the optimizer's tolerance. That is expected: setting the derivative of the conditional log-likelihood to zero gives exactly the equation that fisher.test's mle() solves with uniroot, namely that the mean of the noncentral hypergeometric distribution equals the observed count $a_{11}$.
Reference: Kenneth J. Rothman, Sander Greenland and Timothy L. Lash (2008): Modern Epidemiology, 3rd Edition, Lippincott-Raven Publishers, p. 257
|
Why do odds ratios from formula and R's fisher.test differ? Which one should one choose?
I am stuck with the same problem. I searched and searched in stackExchange and Google, and I did not find anything explicitly explaining how the odds ratio in fisher.test() is calculated. This is not
|
14,862
|
Programmer looking to break into machine learning field
|
Every time I have talked to someone about learning more machine learning, they always point me to The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman. This book has the good fortune of being available online for free (a hard copy does have a certain appeal, but is not required) and it is a really great introduction to the subject. I have not read everything in it yet, but I have read much of it and it has really helped me understand things better.
Another resource that I have been working my way through is the Stanford Machine Learning class, which is also online and free. Andrew Ng does a great job of walking you through things. I find it particularly helpful, because my background in implementing algorithms is weak (I am a self-taught programmer) and it shows you how to implement things in Octave (granted, R has much of it implemented in packages already). I also found these notes on reddit statistics a few months ago, so I kind of skim through those and then watch the video and reflect on it with my own notes.
My background is in statistics and I got some exposure to machine learning concepts (a good buddy of mine is really into it), but I have always felt like I am lacking on the machine learning front, so I have been trying to learn it all a bit more on my own. Thankfully there are a ton of great resources out there.
As far as getting a job in the industry or graduate school requirements I am not in a great position to advise (turns out I have never hired anyone), but I have noticed that the business world seems to really like people that can do things and are a bit less concerned with pieces of paper that say you can do something.
If I were you, I would spend some of my free time getting confident in my machine learning knowledge and then implement things as you see opportunities. Granted your position may not give you that opportunity, but if you can get something implemented that adds value to your company (while maintaining your other obligations), I can't imagine anyone being upset with you. The nice thing here is if you do find yourself doing a bit of machine learning at this job, when you go out looking for a new job you can talk about the experience you already have, which would help folks look past a lack of a specific degree.
There are a lot of resources and its incredibly interesting, I wish you luck!
Another idea: You could start a blog about your Machine Learning learning process and maybe document a few projects you work on in your free time. I have done this with a programming project and it allows you to talk about a project you are working on in your free time (looks good to the employer) and you can also direct them to the blog (obviously keep it professional) about your work. So far I have sent quite a few people to my dorky little programming blog (I have been a bit lazy on posting lately, but I kept it up to date when I was applying to jobs) and everyone I have talked to has been impressed with it.
|
Programmer looking to break into machine learning field
|
Everytime I have talked to someone about learning more machine learning they always point me to the Elements of Statistical Learning by Hastie and Tibshirani. This book has the good fortune of being a
|
Programmer looking to break into machine learning field
Every time I have talked to someone about learning more machine learning they always point me to the Elements of Statistical Learning by Hastie and Tibshirani. This book has the good fortune of being available online for free (a hard copy does have a certain appeal, but is not required) and it is a really great introduction to the subject. I have not read everything in it yet, but I have read much of it and it has really helped me understand things better.
Another resource that I have been working my way through is the Stanford Machine Learning class, which is also online and free. Andrew Ng does a great job of walking you through things. I find it particularly helpful, because my background in implementing algorithms is weak (I am a self taught programmer) and it shows you how to implement things in Octave (granted R has much of it implemented in packages already). I also found these notes on reddit statistics a few months ago, so I kind of skim through those and then watch the video and reflect on it with my own notes.
My background is in statistics and I got some exposure to machine learning concepts (a good buddy of mine is really into it), but I have always felt like I am lacking on the machine learning front, so I have been trying to learn it all a bit more on my own. Thankfully there are a ton of great resources out there.
As far as getting a job in the industry or graduate school requirements I am not in a great position to advise (turns out I have never hired anyone), but I have noticed that the business world seems to really like people that can do things and are a bit less concerned with pieces of paper that say you can do something.
If I were you, I would spend some of my free time getting confident in my machine learning knowledge and then implement things as you see opportunities. Granted your position may not give you that opportunity, but if you can get something implemented that adds value to your company (while maintaining your other obligations), I can't imagine anyone being upset with you. The nice thing here is if you do find yourself doing a bit of machine learning at this job, when you go out looking for a new job you can talk about the experience you already have, which would help folks look past a lack of a specific degree.
There are a lot of resources and its incredibly interesting, I wish you luck!
Another idea: You could start a blog about your Machine Learning learning process and maybe document a few projects you work on in your free time. I have done this with a programming project and it allows you to talk about a project you are working on in your free time (looks good to the employer) and you can also direct them to the blog (obviously keep it professional) about your work. So far I have sent quite a few people to my dorky little programming blog (I have been a bit lazy on posting lately, but I kept it up to date when I was applying to jobs) and everyone I have talked to has been impressed with it.
|
Programmer looking to break into machine learning field
Everytime I have talked to someone about learning more machine learning they always point me to the Elements of Statistical Learning by Hastie and Tibshirani. This book has the good fortune of being a
|
14,863
|
Programmer looking to break into machine learning field
|
In addition to all the other great advice I suggest to get your hands dirty by participating in online competitions, see Sites for predictive modeling competitions
Regarding books etc. you should take a look at:
Machine learning self-learning book?
Can you recommend a book to read before Elements of Statistical Learning?
Regarding degrees I agree with @asjohnson that a certificate does matter less, at least I can confirm this for the area I am working in (Data Mining / ML on the web). It might be different for more "academic" areas like bioinformatics though. Being able to demonstrate that one is a) enthusiastic and b) has done actual work ("smart and getting things done") by showing off a small portfolio (e.g. online competitions ... ) should be more effective IMHO.
|
Programmer looking to break into machine learning field
|
In addition to all the other great advices I suggest to get your hands dirty by participating in online competitions, see Sites for predictive modeling competitions
Regarding books etc. you should tak
|
Programmer looking to break into machine learning field
In addition to all the other great advice I suggest to get your hands dirty by participating in online competitions, see Sites for predictive modeling competitions
Regarding books etc. you should take a look at:
Machine learning self-learning book?
Can you recommend a book to read before Elements of Statistical Learning?
Regarding degrees I agree with @asjohnson that a certificate does matter less, at least I can confirm this for the area I am working in (Data Mining / ML on the web). It might be different for more "academic" areas like bioinformatics though. Being able to demonstrate that one is a) enthusiastic and b) has done actual work ("smart and getting things done") by showing off a small portfolio (e.g. online competitions ... ) should be more effective IMHO.
|
Programmer looking to break into machine learning field
In addition to all the other great advices I suggest to get your hands dirty by participating in online competitions, see Sites for predictive modeling competitions
Regarding books etc. you should tak
|
14,864
|
Programmer looking to break into machine learning field
|
Read Tom Mitchell's Machine Learning. That is a good book that should get you started in the field of Machine Learning.
One thing to be aware of: please note that the same algorithm may sometimes perform better or worse depending on the scenario, the parameters supplied and random chance. Do not get drawn into optimising parameters for your training data - this is a poor application of machine learning.
There are plenty of techniques suitable for particular applications (but not all applications) and there is lots of theory that you can read to understand machine learning better. In order to be good at machine learning you need to make sure to know what you are doing as otherwise you cannot be sure whether your results will generalise well.
Good luck.
|
Programmer looking to break into machine learning field
|
Read Tom Mitchell's Machine Learning. That is a good book that should get you started in the field of Machine Learning.
One thing to be aware of: please note that the same algorithm may sometimes per
|
Programmer looking to break into machine learning field
Read Tom Mitchell's Machine Learning. That is a good book that should get you started in the field of Machine Learning.
One thing to be aware of: please note that the same algorithm may sometimes perform better or worse depending on the scenario, the parameters supplied and random chance. Do not get drawn into optimising parameters for your training data - this is a poor application of machine learning.
There are plenty of techniques suitable for particular applications (but not all applications) and there is lots of theory that you can read to understand machine learning better. In order to be good at machine learning you need to make sure to know what you are doing as otherwise you cannot be sure whether your results will generalise well.
Good luck.
|
Programmer looking to break into machine learning field
Read Tom Mitchell's Machine Learning. That is a good book that should get you started in the field of Machine Learning.
One thing to be aware of: please note that the same algorithm may sometimes per
|
14,865
|
Programmer looking to break into machine learning field
|
There are a large number of good books about machine learning, including several in the O'Reilly series that make use of Python. Working through one, or several, of these might be a good starting point.
I'd also suggest getting some knowledge of statistics - through a course or two, or self study, doesn't really matter. The reason is that there are some machine learning books that focus on the algorithms and the mechanics, but ignore the fundamental question of how likely it is that what your algorithm tells you is just due to chance. And, this is essential to know.
Good luck & have fun, it is a great field.
|
Programmer looking to break into machine learning field
|
There are a large number of good books about machine learning, including several in the O'Reilly series that make use of Python. Working through one, or several of these might might be a good starting
|
Programmer looking to break into machine learning field
There are a large number of good books about machine learning, including several in the O'Reilly series that make use of Python. Working through one, or several, of these might be a good starting point.
I'd also suggest getting some knowledge of statistics - through a course or two, or self study, doesn't really matter. The reason is that there are some machine learning books that focus on the algorithms and the mechanics, but ignore the fundamental question of how likely it is that what your algorithm tells you is just due to chance. And, this is essential to know.
Good luck & have fun, it is a great field.
|
Programmer looking to break into machine learning field
There are a large number of good books about machine learning, including several in the O'Reilly series that make use of Python. Working through one, or several of these might might be a good starting
|
14,866
|
Programmer looking to break into machine learning field
|
Very nice question. A thing to realize upfront is that machine learning is both an art and a science, and involves meticulously cleaning data, visualizing it and eventually building models that suit the business in question, while simultaneously keeping everything scalable and tractable.
Skills-wise, more important than anything else is to focus on probability, and to use simple methods first before jumping onto complex ones. I prefer the R & Perl combination; since you know Python, that should be good enough. When working on a real job, you will invariably have to pull your own data, so knowledge of SQL (or whatever other no-sql store your company supports) is a must.
Nothing beats experience in the ML area, so engaging in sites like Stack Exchange and Kaggle is also a great way to get exposed to this field. Good luck.
|
Programmer looking to break into machine learning field
|
Very nice question. A thing to realize upfront is that machine learning is both an art and science and involves meticulously cleaning out data, visualizing it and eventually build models that suite th
|
Programmer looking to break into machine learning field
Very nice question. A thing to realize upfront is that machine learning is both an art and a science, and involves meticulously cleaning data, visualizing it and eventually building models that suit the business in question, while simultaneously keeping everything scalable and tractable.
Skills-wise, more important than anything else is to focus on probability, and to use simple methods first before jumping onto complex ones. I prefer the R & Perl combination; since you know Python, that should be good enough. When working on a real job, you will invariably have to pull your own data, so knowledge of SQL (or whatever other no-sql store your company supports) is a must.
Nothing beats experience in the ML area, so engaging in sites like Stack Exchange and Kaggle is also a great way to get exposed to this field. Good luck.
|
Programmer looking to break into machine learning field
Very nice question. A thing to realize upfront is that machine learning is both an art and science and involves meticulously cleaning out data, visualizing it and eventually build models that suite th
|
14,867
|
Programmer looking to break into machine learning field
|
I know it's a bit of an old question, but I have seen that a lot of programmers still don't know how to get started.
Thus, I created "A complete daily plan for studying to become a machine learning engineer" repository.
This is my multi-month study plan for going from mobile developer (self-taught, no CS degree) to machine learning engineer.
My main goal was to find an approach to studying Machine Learning that is mainly hands-on and abstracts most of the Math for the beginner. This approach is unconventional because it’s the top-down and results-first approach designed for software engineers.
Please, feel free to make any contributions you feel will make it better.
|
Programmer looking to break into machine learning field
|
I know its a bit of an old question but given the fact that I saw a lot of programmers still don’t know how to get started.
Thus, I created "A complete daily plan for studying to become a machine lear
|
Programmer looking to break into machine learning field
I know it's a bit of an old question, but I have seen that a lot of programmers still don't know how to get started.
Thus, I created "A complete daily plan for studying to become a machine learning engineer" repository.
This is my multi-month study plan for going from mobile developer (self-taught, no CS degree) to machine learning engineer.
My main goal was to find an approach to studying Machine Learning that is mainly hands-on and abstracts most of the Math for the beginner. This approach is unconventional because it’s the top-down and results-first approach designed for software engineers.
Please, feel free to make any contributions you feel will make it better.
|
Programmer looking to break into machine learning field
I know its a bit of an old question but given the fact that I saw a lot of programmers still don’t know how to get started.
Thus, I created "A complete daily plan for studying to become a machine lear
|
14,868
|
Can I do a PCA on repeated measures for data reduction?
|
You could look into Multiple Factor Analysis. This can be implemented in R with FactoMineR.
UPDATE:
To elaborate, Leann was proposing – however long ago – to conduct a PCA on a dataset with repeated measures. If I understand the structure of her dataset correctly, for a given 'context' she had an animal x 'specific measure' (time to enter, number of times returning to shelter, etc) matrix. Each of the 64 animals (those without missing obs.) was followed three times. Let's say she had 10 'specific measures', so she would then have three 64×10 matrices on the animals' behavior (we can call the matrices X1, X2, X3). To run a PCA on the three matrices simultaneously, she would have to 'row bind' the three matrices (e.g. PCA(rbind(X1, X2, X3))). But this ignores the fact that, e.g., the 1st and 65th rows are observations on the same animal. To circumvent this problem, she can 'column bind' the three matrices and run them through a Multiple Factor Analysis.
MFA is a useful way of analyzing multiple sets of variables measured on the same individuals or objects at different points in time. She'll be able to extract the principal components from the MFA in the same way as in a PCA but will have a single coordinate for each animal. The animal objects will now have been placed in a multivariate space of compromise delimited by her three observations.
She would be able to execute the analysis using the FactoMineR package in R.
Example code would look something like:
df=data.frame(X1, X2, X3)
mfa1=MFA(df, group=c(10, 10, 10), type=c("s", "s", "s"),
name.group=c("Observation 1", "Observation 2", "Observation 3"))
# presuming the data is quantitative and needs to be
# scaled to unit variance
Also, instead of extracting the first three components from the MFA and putting them through multiple regression, she might think about projecting her explanatory variables directly onto the MFA as 'supplemental tables' (see ?FactoMineR). Another approach would be to calculate a Euclidean distance matrix of the object coordinates from the MFA (e.g. dist1=vegdist(mfa1$ind$coord, "euc")) and put it through an RDA with dist1 as a function of the animal specific variables (e.g. rda(dist1 ~ age + sex + pedigree) using the vegan package).
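To make the rbind/cbind distinction above concrete, here is a tiny pure-Python illustration; the numbers are invented purely to show the two layouts.

```python
# Three occasions of (animal x measure) data: 4 animals, 2 measures each.
occasions = [
    [[1.0, 2.0], [1.5, 2.5], [0.5, 1.0], [2.0, 3.0]],  # observation 1
    [[1.1, 2.1], [1.4, 2.4], [0.6, 1.2], [1.9, 2.8]],  # observation 2
    [[0.9, 1.8], [1.6, 2.6], [0.4, 0.9], [2.1, 3.1]],  # observation 3
]

# 'Row binding' stacks the occasions: 12 rows, and the fact that rows
# 1, 5 and 9 belong to the same animal is lost.
row_bound = [row for occ in occasions for row in occ]

# 'Column binding' keeps one row per animal: 4 rows and 6 columns, the
# layout that MFA (e.g. FactoMineR's group= argument) operates on.
col_bound = [sum((occ[i] for occ in occasions), []) for i in range(4)]

print(len(row_bound), len(row_bound[0]))  # 12 2
print(len(col_bound), len(col_bound[0]))  # 4 6
```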
|
Can I do a PCA on repeated measures for data reduction?
|
You could look into Multiple Factor Analysis. This can be implemented in R with FactoMineR.
UPDATE:
To elaborate, Leann was proposing – however long ago – to conduct a PCA on a dataset with repeated m
|
Can I do a PCA on repeated measures for data reduction?
You could look into Multiple Factor Analysis. This can be implemented in R with FactoMineR.
UPDATE:
To elaborate, Leann was proposing – however long ago – to conduct a PCA on a dataset with repeated measures. If I understand the structure of her dataset correctly, for a given 'context' she had an animal x 'specific measure' (time to enter, number of times returning to shelter, etc) matrix. Each of the 64 animals (those without missing obs.) was followed three times. Let's say she had 10 'specific measures', so she would then have three 64×10 matrices on the animals' behavior (we can call the matrices X1, X2, X3). To run a PCA on the three matrices simultaneously, she would have to 'row bind' the three matrices (e.g. PCA(rbind(X1, X2, X3))). But this ignores the fact that, e.g., the 1st and 65th rows are observations on the same animal. To circumvent this problem, she can 'column bind' the three matrices and run them through a Multiple Factor Analysis.
MFA is a useful way of analyzing multiple sets of variables measured on the same individuals or objects at different points in time. She'll be able to extract the principal components from the MFA in the same way as in a PCA but will have a single coordinate for each animal. The animal objects will now have been placed in a multivariate space of compromise delimited by her three observations.
She would be able to execute the analysis using the FactoMineR package in R.
Example code would look something like:
df=data.frame(X1, X2, X3)
mfa1=MFA(df, group=c(10, 10, 10), type=c("s", "s", "s"),
name.group=c("Observation 1", "Observation 2", "Observation 3"))
# presuming the data is quantitative and needs to be
# scaled to unit variance
Also, instead of extracting the first three components from the MFA and putting them through multiple regression, she might think about projecting her explanatory variables directly onto the MFA as 'supplemental tables' (see ?FactoMineR). Another approach would be to calculate a Euclidean distance matrix of the object coordinates from the MFA (e.g. dist1=vegdist(mfa1$ind$coord, "euc")) and put it through an RDA with dist1 as a function of the animal specific variables (e.g. rda(dist1 ~ age + sex + pedigree) using the vegan package).
|
Can I do a PCA on repeated measures for data reduction?
You could look into Multiple Factor Analysis. This can be implemented in R with FactoMineR.
UPDATE:
To elaborate, Leann was proposing – however long ago – to conduct a PCA on a dataset with repeated m
|
14,869
|
Can I do a PCA on repeated measures for data reduction?
|
It is commonplace to use PCA when analyzing repeated measures (e.g., it is used for analyzing sales data, stock prices and exchange rates). The logic is as you articulate (i.e., the justification is that PCA is a data reduction tool, not an inferential tool).
One publication by a pretty good statistician is:
Bradlow, E. T. (2002). "Exploring repeated measures data sets for key features using Principal Components Analysis." Journal of Research in Marketing 19: 167-179.
|
Can I do a PCA on repeated measures for data reduction?
|
It is commonplace to use PCA when analyzing repeated measures (e.g., it is used for analyzing sales data, stock prices and exchange rates) The logic is as you articulate (i.e., the justification is t
|
Can I do a PCA on repeated measures for data reduction?
It is commonplace to use PCA when analyzing repeated measures (e.g., it is used for analyzing sales data, stock prices and exchange rates). The logic is as you articulate (i.e., the justification is that PCA is a data reduction tool, not an inferential tool).
One publication by a pretty good statistician is:
Bradlow, E. T. (2002). "Exploring repeated measures data sets for key features using Principal Components Analysis." Journal of Research in Marketing 19: 167-179.
|
Can I do a PCA on repeated measures for data reduction?
It is commonplace to use PCA when analyzing repeated measures (e.g., it is used for analyzing sales data, stock prices and exchange rates) The logic is as you articulate (i.e., the justification is t
|
14,870
|
Why is a projection matrix of an orthogonal projection symmetric?
|
This is a fundamental result from linear algebra on orthogonal projections. A relatively simple approach is as follows. If $u_1, \ldots, u_m$ are orthonormal vectors spanning an $m$-dimensional subspace $A$, and $\mathbf{U}$ is the $n \times m$ matrix with the $u_i$'s as the columns, then
$$\mathbf{P} = \mathbf{U}\mathbf{U}^T.$$
This follows directly from the fact that the orthogonal projection of $x$ onto $A$ can be computed in terms of the orthonormal basis of $A$ as
$$\sum_{i=1}^m u_i u_i^T x.$$
It follows directly from the formula above that $\mathbf{P}^2 = \mathbf{P}$ and that $\mathbf{P}^T = \mathbf{P}.$
It is also possible to give a different argument. If $\mathbf{P}$ is a projection matrix for an orthogonal projection, then, by definition, for all $x,y \in \mathbb{R}^n$
$$\mathbf{P}x \perp y-\mathbf{P}y.$$
Consequently,
$$0 = (\mathbf{P} x)^T (y - \mathbf{P}y) = x^T \mathbf{P}^T (I - \mathbf{P}) y = x^T (\mathbf{P}^T - \mathbf{P}^T \mathbf{P}) y $$
for all $x, y \in \mathbb{R}^n$. This shows that $\mathbf{P}^T = \mathbf{P}^T \mathbf{P}$, whence
$$\mathbf{P} = (\mathbf{P}^T)^T = (\mathbf{P}^T \mathbf{P})^T = \mathbf{P}^T \mathbf{P} = \mathbf{P}^T.$$
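Both identities are easy to check numerically. A small pure-Python sanity check (the particular orthonormal basis is arbitrary, chosen just for the example):

```python
from math import isclose, sqrt

# Orthonormal basis u1, u2 of a 2-D subspace of R^3, stored as columns of U
U = [[1.0, 0.0],
     [0.0, 1 / sqrt(2)],
     [0.0, 1 / sqrt(2)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

P = matmul(U, transpose(U))   # P = U U^T
P2 = matmul(P, P)

symmetric = all(P[i][j] == P[j][i] for i in range(3) for j in range(3))
idempotent = all(isclose(P[i][j], P2[i][j], abs_tol=1e-12)
                 for i in range(3) for j in range(3))
print(symmetric, idempotent)  # True True
```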
|
Why is a projection matrix of an orthogonal projection symmetric?
|
This is a fundamental results from linear algebra on orthogonal projections. A relatively simple approach is as follows. If $u_1, \ldots, u_m$ are orthonormal vectors spanning an $m$-dimensional subsp
|
Why is a projection matrix of an orthogonal projection symmetric?
This is a fundamental result from linear algebra on orthogonal projections. A relatively simple approach is as follows. If $u_1, \ldots, u_m$ are orthonormal vectors spanning an $m$-dimensional subspace $A$, and $\mathbf{U}$ is the $n \times m$ matrix with the $u_i$'s as the columns, then
$$\mathbf{P} = \mathbf{U}\mathbf{U}^T.$$
This follows directly from the fact that the orthogonal projection of $x$ onto $A$ can be computed in terms of the orthonormal basis of $A$ as
$$\sum_{i=1}^m u_i u_i^T x.$$
It follows directly from the formula above that $\mathbf{P}^2 = \mathbf{P}$ and that $\mathbf{P}^T = \mathbf{P}.$
It is also possible to give a different argument. If $\mathbf{P}$ is a projection matrix for an orthogonal projection, then, by definition, for all $x,y \in \mathbb{R}^n$
$$\mathbf{P}x \perp y-\mathbf{P}y.$$
Consequently,
$$0 = (\mathbf{P} x)^T (y - \mathbf{P}y) = x^T \mathbf{P}^T (I - \mathbf{P}) y = x^T (\mathbf{P}^T - \mathbf{P}^T \mathbf{P}) y $$
for all $x, y \in \mathbb{R}^n$. This shows that $\mathbf{P}^T = \mathbf{P}^T \mathbf{P}$, whence
$$\mathbf{P} = (\mathbf{P}^T)^T = (\mathbf{P}^T \mathbf{P})^T = \mathbf{P}^T \mathbf{P} = \mathbf{P}^T.$$
|
Why is a projection matrix of an orthogonal projection symmetric?
This is a fundamental results from linear algebra on orthogonal projections. A relatively simple approach is as follows. If $u_1, \ldots, u_m$ are orthonormal vectors spanning an $m$-dimensional subsp
|
14,871
|
Why is a projection matrix of an orthogonal projection symmetric?
|
An attempt at geometrical intuition...
Recall that:
A symmetric matrix is self-adjoint.
A scalar product is determined only by the components in the mutual linear space (and independent of the orthogonal components of any of the vectors).
What you want to "see" is that a projection is self-adjoint and thus symmetric, following (1). Why is this so?
Consider the scalar product of a vector $x$ with the projection $A$ of a second vector $y$: $ \langle x,Ay \rangle$. Following (2), the product will depend only on the components of $x$ in the span of the projection of $y$. So the product should be the same as $\langle Ax,Ay \rangle$, and also $\langle Ax,y\rangle $ following the same argument.
Since $A$ is self-adjoint, it is symmetric.
|
Why is a projection matrix of an orthogonal projection symmetric?
|
An attempt at geometrical intuition...
Recall that:
A symmetric matrix is self adjoint.
A scalar product is determined only by the components in the mutual linear space (and independent of the ortho
|
Why is a projection matrix of an orthogonal projection symmetric?
An attempt at geometrical intuition...
Recall that:
A symmetric matrix is self-adjoint.
A scalar product is determined only by the components in the mutual linear space (and independent of the orthogonal components of any of the vectors).
What you want to "see" is that a projection is self-adjoint and thus symmetric, following (1). Why is this so?
Consider the scalar product of a vector $x$ with the projection $A$ of a second vector $y$: $ \langle x,Ay \rangle$. Following (2), the product will depend only on the components of $x$ in the span of the projection of $y$. So the product should be the same as $\langle Ax,Ay \rangle$, and also $\langle Ax,y\rangle $ following the same argument.
Since $A$ is self-adjoint, it is symmetric.
|
Why is a projection matrix of an orthogonal projection symmetric?
An attempt at geometrical intuition...
Recall that:
A symmetric matrix is self adjoint.
A scalar product is determined only by the components in the mutual linear space (and independent of the ortho
|
14,872
|
Is it ok to use Manhattan distance with Ward's inter-cluster linkage in hierarchical clustering?
|
The Ward clustering algorithm is a hierarchical clustering method that minimizes an 'inertia' criterion at each step. This inertia quantifies the sum of squared residuals between the reduced signal and the initial signal: it is a measure of the variance of the error in an l2 (Euclidean) sense. Actually, you even mention it in your question. This is why, I believe, it makes no sense to apply it to a distance matrix that is not an l2 Euclidean distance.
On the other hand, an average linkage or a single linkage hierarchical clustering would be perfectly suitable for other distances.
|
Is it ok to use Manhattan distance with Ward's inter-cluster linkage in hierarchical clustering?
|
The Ward clustering algorithm is a hierarchical clustering method that minimizes an 'inertia' criteria at each step. This inertia quantifies the sum of squared residuals between the reduced signal and
|
Is it ok to use Manhattan distance with Ward's inter-cluster linkage in hierarchical clustering?
The Ward clustering algorithm is a hierarchical clustering method that minimizes an 'inertia' criterion at each step. This inertia quantifies the sum of squared residuals between the reduced signal and the initial signal: it is a measure of the variance of the error in an l2 (Euclidean) sense. Actually, you even mention it in your question. This is why, I believe, it makes no sense to apply it to a distance matrix that is not an l2 Euclidean distance.
On the other hand, an average linkage or a single linkage hierarchical clustering would be perfectly suitable for other distances.
|
Is it ok to use Manhattan distance with Ward's inter-cluster linkage in hierarchical clustering?
The Ward clustering algorithm is a hierarchical clustering method that minimizes an 'inertia' criteria at each step. This inertia quantifies the sum of squared residuals between the reduced signal and
|
14,873
|
Is it ok to use Manhattan distance with Ward's inter-cluster linkage in hierarchical clustering?
|
I can't think of any reason why Ward should favor any metric. Ward's method is just another option for deciding which clusters to fuse next during agglomeration. This is achieved by finding the two clusters whose fusion will minimize a certain error (see an exemplary source for the formula).
Hence it relies on two concepts:
The mean of vectors which (for numerical vectors) is generally calculated by averaging over every dimension separately.
The distance metric itself, i.e. the concept of similarity expressed by this metric.
So: as long as the properties of the chosen metric (like e.g. rotation, translation or scale invariance) satisfy your needs (and the metric fits the way the cluster mean is calculated), I don't see any reason not to use it.
I suspect that most people suggest the euclidean metric because they
want to increase the weight of the differences between a cluster mean and a single observation vector (which is done by squaring)
or because it came out as best metric in the validation based on their data
or because it is used in general.
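To illustrate "minimize a certain error": the quantity Ward's linkage minimizes at each merge is the increase in within-cluster error sum of squares (ESS). A brute-force pure-Python sketch, with toy points of my own choosing:

```python
# ess(cluster): within-cluster error sum of squares around the cluster mean.
def ess(cluster):
    m = [sum(col) / len(cluster) for col in zip(*cluster)]
    return sum(sum((x - mi) ** 2 for x, mi in zip(p, m)) for p in cluster)

# Ward's merge cost: the increase in total ESS caused by fusing A and B
# (equivalently |A||B|/(|A|+|B|) * squared Euclidean distance of the means).
def ward_increase(A, B):
    return ess(A + B) - ess(A) - ess(B)

A = [[0.0, 0.0], [1.0, 0.0]]
B = [[5.0, 5.0]]
C = [[0.0, 1.0]]
print(ward_increase(A, C) < ward_increase(A, B))  # True: fuse the nearby pair
```

At each agglomeration step Ward picks the pair with the smallest such increase, which is why the criterion is tied to squared Euclidean geometry.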
|
Is it ok to use Manhattan distance with Ward's inter-cluster linkage in hierarchical clustering?
|
I can't think of any reason why Ward should favor any metric. Ward's method is just another option to decide which clusters to fusion next during agglomeration. This is achieved by finding the two clu
|
Is it ok to use Manhattan distance with Ward's inter-cluster linkage in hierarchical clustering?
I can't think of any reason why Ward should favor any metric. Ward's method is just another option for deciding which clusters to fuse next during agglomeration. This is achieved by finding the two clusters whose fusion will minimize a certain error (see an exemplary source for the formula).
Hence it relies on two concepts:
The mean of vectors which (for numerical vectors) is generally calculated by averaging over every dimension separately.
The distance metric itself, i.e. the concept of similarity expressed by this metric.
So: as long as the properties of the chosen metric (like e.g. rotation, translation or scale invariance) satisfy your needs (and the metric fits the way the cluster mean is calculated), I don't see any reason not to use it.
I suspect that most people suggest the euclidean metric because they
want to increase the weight of the differences between a cluster mean and a single observation vector (which is done by quadration)
or because it came out as best metric in the validation based on their data
or because it is used in general.
|
Is it ok to use Manhattan distance with Ward's inter-cluster linkage in hierarchical clustering?
I can't think of any reason why Ward should favor any metric. Ward's method is just another option to decide which clusters to fusion next during agglomeration. This is achieved by finding the two clu
|
14,874
|
Is it ok to use Manhattan distance with Ward's inter-cluster linkage in hierarchical clustering?
|
Another way of thinking about this, which might lend itself to an adaptation for $\ell_1$, is that the choice of the mean comes from the fact that the mean is the point that minimizes the sum of squared Euclidean distances. If you're using $\ell_1$ to measure the distance between time series, then you should be using a center that minimizes the sum of squared $\ell_1$ distances.
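A quick numerical check of the first half of this claim (my sketch, not the answer's): in one dimension the mean minimizes the sum of squared distances, while the median minimizes the sum of absolute ($\ell_1$) distances.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(size=201)  # a skewed 1-D sample

# evaluate both loss functions on a fine grid of candidate centers
grid = np.linspace(x.min(), x.max(), 10001)
sq_loss = ((x[None, :] - grid[:, None]) ** 2).sum(axis=1)
l1_loss = np.abs(x[None, :] - grid[:, None]).sum(axis=1)

print(grid[sq_loss.argmin()], x.mean())      # squared loss -> mean
print(grid[l1_loss.argmin()], np.median(x))  # absolute loss -> median
```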
|
14,875
|
Is it ok to use Manhattan distance with Ward's inter-cluster linkage in hierarchical clustering?
|
Although Ward is meant to be used with Euclidean distances, this paper suggests that clustering results obtained by combining Ward with non-Euclidean distances are essentially the same as those from the Euclidean setting the method was designed for.
It is shown that the result from the Ward method to a non positive-definite and normalized similarity is almost the same as another result from the Ward method to a positive-definite matrix obtained from the original similarity by adding a positive constant to the diagonal elements.
S. Miyamoto, R. Abe, Y. Endo and J. Takeshita, "Ward method of hierarchical clustering for non-Euclidean similarity measures," 2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR), Fukuoka, 2015, pp. 60-63, doi: 10.1109/SOCPAR.2015.7492784.
|
14,876
|
Why is the squared difference so commonly used?
|
A decision-theoretic approach to statistics provides a deep explanation. It says that squaring differences is a proxy for a wide range of loss functions which (whenever they might be justifiably adopted) lead to considerable simplification in the possible statistical procedures one has to consider.
Unfortunately, explaining what this means and indicating why it is true takes a lot of setting up. The notation can quickly become incomprehensible. What I aim to do here, then, is just to sketch the main ideas, with little elaboration. For fuller accounts see the references.
A standard, rich model of data $\mathbf x$ posits that they are a realization of a (real, vector-valued) random variable $\mathbf X$ whose distribution $F$ is known only to be an element of some set $\Omega$ of distributions, the states of nature. A statistical procedure is a function $t$ of $\mathbf x$ taking values in some set of decisions $D$, the decision space.
For instance, in a prediction or classification problem $\mathbf x$ would consist of a union of a "training set" and a "test set of data" and $t$ will map $\mathbf x$ into a set of predicted values for the test set. The set of all possible predicted values would be $D$.
A full theoretical discussion of procedures has to accommodate randomized procedures. A randomized procedure chooses among two or more possible decisions according to some probability distribution (which depends on the data $\mathbf x$). It generalizes the intuitive idea that when the data do not seem to distinguish between two alternatives, you subsequently "flip a coin" to decide on a definite alternative. Many people dislike randomized procedures, objecting to making decisions in such an unpredictable manner.
The distinguishing feature of decision theory is its use of a loss function $W$. For any state of nature $F \in \Omega$ and decision $d \in D$, the loss
$$W(F,d)$$
is a numeric value representing how "bad" it would be to make decision $d$ when the true state of nature is $F$: small losses are good, large losses are bad. In a hypothesis testing situation, for instance, $D$ has the two elements "accept" and "reject" (the null hypothesis). The loss function emphasizes making the right decision: it is set to zero when the decision is correct and otherwise is some constant $w$. (This is called a "$0-1$ loss function:" all bad decisions are equally bad and all good decisions are equally good.) Specifically, $W(F,\text{ accept})=0$ when $F$ is in the null hypothesis and $W(F,\text{ reject})=0$ when $F$ is in the alternative hypothesis.
When using procedure $t$, the loss for the data $x$ when the true state of nature is $F$ can be written $W(F, t(x))$. This makes the loss $W(F, t(X))$ a random variable whose distribution is determined by (the unknown) $F$.
The expected loss of a procedure $t$ is called its risk, $r_t$. The expectation uses the true state of nature $F$, which therefore will appear explicitly as a subscript of the expectation operator. We will view the risk as a function of $F$ and emphasize that with the notation:
$$r_t(F) = \mathbb{E}_F(W(F, t(X))).$$
Better procedures have lower risk. Thus, comparing risk functions is the basis for selecting good statistical procedures. Since rescaling all risk functions by a common (positive) constant would not change any comparisons, the scale of $W$ makes no difference: we are free to multiply it by any positive value we like. In particular, upon multiplying $W$ by $1/w$ we may always take $w=1$ for a $0-1$ loss function (justifying its name).
To continue the hypothesis testing example, which illustrates a $0-1$ loss function, these definitions imply the risk of any $F$ in the null hypothesis is the chance that the decision is "reject," while the risk of any $F$ in the alternative is the chance that the decision is "accept." The maximum value (over all $F$ in the null hypothesis) is the test size, while the part of the risk function defined on the alternative hypothesis is the complement of the test power ($\text{power}_t(F) = 1 - r_t(F)$). In this we see how the entirety of classical (frequentist) hypothesis testing theory amounts to a particular way to compare risk functions for a special kind of loss.
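This correspondence can be simulated directly under the $0-1$ loss (a sketch with assumed settings: a one-sided z-test of $\mu \le 0$ at level $0.05$, $n = 25$):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, reps, alpha = 25, 40_000, 0.05
crit = norm.ppf(1 - alpha)  # critical value of the one-sided z-test

def risk(mu_true, null_is_true):
    X = rng.normal(mu_true, 1.0, size=(reps, n))
    reject = np.sqrt(n) * X.mean(axis=1) > crit
    # 0-1 loss: a wrong decision costs 1, a correct one costs 0
    wrong = reject if null_is_true else ~reject
    return wrong.mean()

print(risk(0.0, True))    # risk under the null = test size, about 0.05
print(risk(0.5, False))   # risk under the alternative = 1 - power
```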
By the way, everything presented so far is perfectly compatible with all mainstream statistics, including the Bayesian paradigm. In addition, Bayesian analysis introduces a "prior" probability distribution over $\Omega$ and uses this to simplify the comparison of risk functions: the potentially complicated function $r_t$ can be replaced by its expected value with respect to the prior distribution. Thus all procedures $t$ are characterized by a single number $r_t$; a Bayes procedure (which usually is unique) minimizes $r_t$. The loss function still plays an essential role in computing $r_t$.
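For instance (a sketch with an assumed Normal prior): with $X_i \sim N(\mu, 1)$ and prior $\mu \sim N(0, \tau^2)$, the posterior mean shrinks $\bar x$ toward $0$ and attains a smaller Bayes risk under quadratic loss than the unshrunk maximum-likelihood estimator.

```python
import numpy as np

rng = np.random.default_rng(7)
reps, n, tau2 = 50_000, 10, 1.0                 # tau2: prior variance of mu

mu = rng.normal(0.0, np.sqrt(tau2), size=reps)  # states of nature ~ prior
X = rng.normal(mu[:, None], 1.0, size=(reps, n))
xbar = X.mean(axis=1)

t_mle = xbar                                     # maximum likelihood
t_bayes = (n * tau2) / (n * tau2 + 1.0) * xbar   # posterior mean

bayes_risk = lambda t: ((t - mu) ** 2).mean()    # expected quadratic loss
print(bayes_risk(t_mle), bayes_risk(t_bayes))    # the Bayes procedure wins
```

The Monte Carlo estimates should land near the theoretical Bayes risks $1/n = 0.1$ and $\tau^2/(n\tau^2+1) \approx 0.091$.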
There is some (unavoidable) controversy surrounding the use of loss functions. How does one pick $W$? It is essentially unique for hypothesis testing, but in most other statistical settings many choices are possible. They reflect the values of the decision-maker. For example, if the data are physiological measurements of a medical patient and the decisions are "treat" or "do not treat," the physician must consider--and weigh in the balance--the consequences of either action. How the consequences are weighed may depend on the patient's own wishes, their age, their quality of life, and many other things. Choice of a loss function can be fraught and deeply personal. Normally it should not be left to the statistician!
One thing we would like to know, then, is how would the choice of best procedure change when the loss is changed? It turns out that in many common, practical situations a certain amount of variation can be tolerated without changing which procedure is best. These situations are characterized by the following conditions:
The decision space is a convex set (often an interval of numbers). This means that any value lying between any two decisions is also a valid decision.
The loss is zero when the best possible decision is made and otherwise increases (to reflect discrepancies between the decision that is made and the best one that could be made for the true--but unknown--state of nature).
The loss is a differentiable function of the decision (at least locally near the best decision). This implies it is continuous--it does not jump the way a $0-1$ loss does--but it also implies that it changes relatively little when the decision is close to the best one.
When these conditions hold, some complications involved in comparing risk functions go away. The differentiability and convexity of $W$ allow us to apply Jensen's Inequality to show that
(1) We don't have to consider randomized procedures [Lehmann, corollary 6.2].
(2) If one procedure $t$ is considered to have the best risk for one such $W$, it can be improved into a procedure $t^{*}$ which depends only on a sufficient statistic and has at least as good a risk function for all such $W$ [Kiefer, p. 151].
As an example, suppose $\Omega$ is the set of Normal distributions with mean $\mu$ (and unit variance). This identifies $\Omega$ with the set of all real numbers, so (abusing notation) I will also use "$\mu$" to identify the distribution in $\Omega$ with mean $\mu$. Let $X$ be an iid sample of size $n$ from one of these distributions. Suppose the objective is to estimate $\mu$. This identifies the decision space $D$ with all possible values of $\mu$ (any real number). Letting $\hat\mu$ designate an arbitrary decision, the loss is a function
$$W(\mu, \hat\mu) \ge 0$$
with $W(\mu, \hat\mu)=0$ if and only if $\mu=\hat\mu$. The preceding assumptions imply (via Taylor's Theorem) that
$$W(\mu, \hat\mu) = w_2 (\hat\mu - \mu)^2 + o\left((\hat\mu - \mu)^2\right)$$
for some positive constant $w_2$. (The little-o notation "$o(y^p)$" means any function $f$ for which $f(y)/y^p \to 0$ as $y\to 0$.) As previously noted, we are free to rescale $W$ to make $w_2=1$. For this family $\Omega$, the mean of $X$, written $\bar X$, is a sufficient statistic. The previous result (quoted from Kiefer) says any estimator of $\mu$, which could be some arbitrary function of the $n$ variables $(x_1, \ldots, x_n)$ that is good for one such $W$, can be converted into an estimator depending only on $\bar x$ which is at least as good for all such $W$.
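The expansion is easy to check numerically for a concrete member of this class, say $W(z) = \exp(|z|) - 1 - |z|$, whose quadratic coefficient is $w_2 = 1/2$:

```python
import numpy as np

# a convex, differentiable loss with W(0) = 0
W = lambda z: np.exp(np.abs(z)) - 1.0 - np.abs(z)

for z in (0.5, 0.1, 0.01, 0.001):
    print(z, W(z) / z**2)   # the ratio tends to w_2 = 1/2 as z -> 0
```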
What has been accomplished in this example is typical: the hugely complicated set of possible procedures, which originally consisted of possibly randomized functions of $n$ variables, has been reduced to a much simpler set of procedures consisting of non-randomized functions of a single variable (or at least a fewer number of variables in cases where sufficient statistics are multivariate). And this can be done without worrying about precisely what the decision-maker's loss function is, provided only that it is convex and differentiable.
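A small simulation (my sketch) illustrates consequence (2) for this Normal-mean example: the estimator $\bar x$, which depends only on the sufficient statistic, beats an estimator that uses the raw data arbitrarily -- here, just the first observation -- simultaneously for several convex losses.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, n, reps = 1.0, 25, 20_000

X = rng.normal(mu, 1.0, size=(reps, n))
t_raw = X[:, 0]          # an estimator that ignores sufficiency
t_suff = X.mean(axis=1)  # depends only on the sufficient statistic

losses = {
    "quadratic": lambda z: z**2,
    "exp":       lambda z: np.exp(np.abs(z)) - 1.0 - np.abs(z),
}
for name, W in losses.items():
    # Monte Carlo estimates of the risk of each procedure
    print(name, W(t_raw - mu).mean(), W(t_suff - mu).mean())
```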
What is the simplest such loss function? The one that ignores the remainder term, of course, making it purely a quadratic function. Other loss functions in this same class include powers of $z = |\hat\mu-\mu|$ that are greater than $2$ (such as the $2.1, e,$ and $\pi$ mentioned in the question), $\exp(z)-1-z$, and many more.
[Figure: two convex loss functions compared.] The blue (upper) curve plots $2(\exp(|z|)-1-|z|)$ while the red (lower) curve plots $z^2$. Because the blue curve also has a minimum at $0$, is differentiable, and convex, many of the nice properties of statistical procedures enjoyed by quadratic loss (the red curve) will apply to the blue loss function, too (even though globally the exponential function behaves differently than the quadratic function).
These results (although obviously limited by the conditions that were imposed) help explain why quadratic loss is ubiquitous in statistical theory and practice: to a limited extent, it is an analytically convenient proxy for any convex differentiable loss function.
Quadratic loss is by no means the only or even the best loss to consider. Indeed, Lehmann writes that
Convex loss functions have been seen to lead to a number of simplifications of estimation problems. One may wonder, however, whether such loss functions are likely to be realistic. If $W(F, d)$ represents not just a measure of inaccuracy but a real (for example, financial) loss, one may argue that all such losses are bounded: once you have lost all, you cannot lose any more. ...
... [F]ast-growing loss functions lead to estimators that tend to be sensitive to the assumptions made about [the] tail behavior [of the assumed distribution], and these assumptions typically are based on little information and thus are not very reliable.
It turns out that the estimators produced by squared error loss often are uncomfortably sensitive in this respect.
[Lehmann, Section 1.6; with some changes of notation.]
Considering alternative losses opens up a rich set of possibilities: quantile regression, M-estimators, robust statistics, and much more can all be framed in this decision-theoretic way and justified using alternative loss functions. For a simple example, see Percentile Loss Functions.
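Lehmann's warning about tail sensitivity can be seen in a small simulation (a sketch with assumed settings): under heavy-tailed Student-$t_2$ data, the squared-error-motivated sample mean is far less stable than the $\ell_1$-motivated sample median.

```python
import numpy as np

rng = np.random.default_rng(3)
reps, n = 20_000, 51

# Student-t with 2 degrees of freedom: symmetric about 0, very heavy tails
X = rng.standard_t(df=2, size=(reps, n))

# Monte Carlo mean squared error of each location estimator
mse_mean = (X.mean(axis=1) ** 2).mean()
mse_median = (np.median(X, axis=1) ** 2).mean()
print(mse_mean, mse_median)  # the median is markedly more accurate here
```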
References
Jack Carl Kiefer, Introduction to Statistical Inference. Springer-Verlag 1987.
E. L. Lehmann, Theory of Point Estimation. Wiley 1983.
|
14,877
|
Meaning of latent features?
|
Latent means not directly observable. The common use of the term in PCA and factor analysis is to reduce the dimensionality of a large number of directly observable features into a smaller set of indirectly observable (latent) features.
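A tiny sketch of the idea (my example, not the answer's): five observed features are generated from two hidden factors plus noise, and PCA (via the SVD) recovers that two components carry essentially all the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 2))        # two latent factors (never observed)
W = rng.normal(size=(2, 5))          # loadings onto 5 observable features
X = Z @ W + 0.05 * rng.normal(size=(100, 5))

# PCA on the centered data via singular value decomposition
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = s**2 / (s**2).sum()
print(var_explained.round(3))        # two components dominate
```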
|
14,878
|
Meaning of latent features?
|
In the context of matrix factorization methods, latent features are usually meant to characterize items along each latent dimension. Let me explain by example.
Suppose we have a matrix of user-item interactions $R$. The model assumption in matrix factorization methods is that each cell $R_{ui}$ of this matrix is generated by, for example, $p_u^T q_i$ — a dot product between a latent vector $p_u$ describing user $u$ and a latent vector $q_i$ describing item $i$. Intuitively, this product measures how similar these vectors are. During training you want to find "good" vectors such that the approximation error is minimized.
One may think that these latent features are meaningful, that is, there's a feature in the user's vector $p_u$ like "likes items with property X" and a corresponding feature in the item's vector $q_i$ like "has property X". Unfortunately, unless it's somehow enforced, it's hard to find interpretable latent features. So you can think of latent features that way, but you should not use them to reason about the data.
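A minimal sketch of this setup (synthetic, fully observed ratings; real recommenders add regularization and handle missing entries), fitting the latent vectors by alternating least squares rather than the answer's generic "minimize the approximation error":

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 30, 20, 3

# "true" latent vectors generate the ratings matrix R
P_true = rng.normal(size=(n_users, k))
Q_true = rng.normal(size=(n_items, k))
R = P_true @ Q_true.T

# recover latent vectors p_u, q_i by alternating least squares
P = rng.normal(scale=0.1, size=(n_users, k))
Q = rng.normal(scale=0.1, size=(n_items, k))
for _ in range(20):
    P = R @ Q @ np.linalg.inv(Q.T @ Q)    # best P for fixed Q
    Q = R.T @ P @ np.linalg.inv(P.T @ P)  # best Q for fixed P

print(np.abs(R - P @ Q.T).max())  # near-zero reconstruction error
```

Note that the recovered $P, Q$ reproduce $R$ but are not the "true" vectors: any invertible $k \times k$ transformation of the latent space fits equally well, which is one reason interpretability is hard.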
|
14,879
|
Meaning of latent features?
|
Here your data is ratings given by various users to various movies. As others have pointed out, latent means not directly observable.
For a movie, its latent features might determine the amount of action or romance, the strength of the story-line, the presence of a famous actor, etc. Similarly, for a dataset of handwritten digits, the latent variables may be the angle of edges, the skew, etc.
|
14,880
|
Meaning of latent features?
|
I would say that factors are more representative than principal components for getting a sense of the 'latency'/hiddenness of a variable. Latency is one of the reasons why behavioral scientists measure perceptual constructs like feelings or sadness in terms of multiple items/measures, deriving a number for such hidden variables, which cannot be measured directly.
|
14,881
|
Under which conditions do gradient boosting machines outperform random forests?
|
The following provides an explanation of why Boosting generally outperforms Random Forest in practice, but I would be very interested to know which other factors may explain Boosting's edge over RF in specific settings.
Basically, within the $error = bias + variance$ framework, RF can only reduce error by reducing the variance (Hastie et al. 2009, p. 588). The bias is fixed and equal to the bias of a single tree in the forest (hence the need to grow very large trees, which have very low bias).
On the other hand, Boosting reduces bias (by adding each new tree in the sequence so that what was missed by the preceding trees is captured), but also variance (by combining many models).
So Boosting reduces error on both fronts, whereas RF can only reduce error by reducing variance. Of course, as I said, there might be other explanations for the better performance of Boosting observed in practice. For instance, on page 591 of the aforementioned book, it is said that Boosting outperforms RF on the nested spheres problem because in that particular case the true decision boundary is additive. (?) They also report that Boosting does better than RF on the spam and the California housing data.
Another reference that found Boosting to outperform RF is Caruana and Niculescu-Mizil (2006). Unfortunately, they report the results but do not try to explain what causes them. They compared the two classifiers (and many more) on 11 binary classification problems using 8 different performance metrics.
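The bias argument can be seen in a toy numpy sketch (my own 1-D regression example, not from the references above): bagging depth-1 stumps keeps the bias of a single stump, while boosting stumps on the residuals drives the training error far lower.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(scale=0.1, size=200)

def fit_stump(xs, ys):
    """Depth-1 regression tree: best threshold plus a mean in each leaf."""
    best = None
    for t in np.unique(xs)[1:]:
        left, right = ys[xs < t], ys[xs >= t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lo, hi = best
    return lambda z: np.where(z < t, lo, hi)

def train_mse(predict):
    return np.mean((y - predict(x)) ** 2)

# Bagging (RF-like): average stumps fit on bootstrap samples.
# Variance drops, but the bias of a single stump remains.
stumps = [fit_stump(x[i], y[i])
          for i in (rng.integers(0, 200, 200) for _ in range(50))]
bagged = lambda z: np.mean([s(z) for s in stumps], axis=0)

# Boosting: each stump is fit to the current residuals, so bias drops too.
boost, resid = [], y.copy()
for _ in range(50):
    s = fit_stump(x, resid)
    resid = resid - 0.5 * s(x)                 # learning rate 0.5
    boost.append(s)
boosted = lambda z: 0.5 * np.sum([s(z) for s in boost], axis=0)

print(train_mse(bagged), train_mse(boosted))
```

The bagged ensemble cannot do better than a step function at fitting a sine (its bias), while the boosted one approximates the curve with many small steps.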
|
14,882
|
Under which conditions do gradient boosting machines outperform random forests?
|
As bayerj said, there is no way to know a priori!
Random Forests are relatively easy to calibrate: the default parameters of most implementations (in R or Python, for example) achieve great results.
On the other hand, GBMs are hard to tune (too large a number of trees leads to overfitting, the maximum depth is critical, the learning rate and the number of trees act together...) and longer to train (multithreaded implementations are scarce). Loosely performed tuning may lead to low performance.
However, in my experience, if you spend enough time on GBMs, you are likely to achieve better performance than with a random forest.
Edit: why do GBMs outperform Random Forests? Antoine's answer is much more rigorous; this is just an intuitive explanation. GBMs have more critical parameters. Just like with random forests, you can calibrate the number of trees and $m$, the number of variables on which trees are grown. But you can also calibrate the learning rate and the maximum depth. As you explore more different models than you would with a random forest, you are more likely to find something better.
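To see why "the learning rate and the number of trees act together", here is a deliberately minimal sketch (my own toy, where the weak learner is as weak as possible: it just predicts a constant). After $m$ rounds with learning rate $\nu$ the mean residual shrinks like $(1-\nu)^m$, so a smaller learning rate needs proportionally more rounds (trees) to reach the same fit.

```python
import numpy as np

y = np.full(100, 3.0)   # target: just learn the constant 3

def rounds_needed(lr, tol=1e-3, max_rounds=100_000):
    """Boost a constant weak learner until the mean residual drops below tol."""
    pred = np.zeros_like(y)
    for m in range(max_rounds):
        resid = y - pred
        if abs(resid.mean()) < tol:
            return m
        pred = pred + lr * resid.mean()  # weak learner: mean of current residuals
    return max_rounds

rounds = {lr: rounds_needed(lr) for lr in (1.0, 0.5, 0.1, 0.01)}
print(rounds)   # smaller learning rate -> many more rounds needed
```

This is only the caricature of the interaction; in a real GBM the tree depth and subsampling interact with both parameters as well.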
|
14,883
|
What is ridge regression? [duplicate]
|
Ridge regression is a remedial measure taken to alleviate multicollinearity amongst regression predictor variables in a model. Often predictor variables used in a regression are highly correlated. When they are, the regression coefficient of any one variable depends on which other predictor variables are included in the model, and which ones are left out. (So a coefficient does not reflect any inherent effect of that particular predictor on the response variable, but only a marginal or partial effect, given whatever other correlated predictor variables are included in the model.) Ridge regression adds a small bias to the coefficient estimates in order to alleviate this problem. Hope that helps.
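A small numpy illustration of that bias-for-variance trade (my own sketch, using the closed-form solution $\hat\beta = (X^\top X + \lambda I)^{-1} X^\top y$, where $\lambda = 0$ gives ordinary least squares): with two nearly collinear predictors, the OLS coefficients swing wildly from sample to sample, while the slightly biased ridge estimates stay stable.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_fit(lam):
    """Simulate y = x1 + noise, with x2 almost identical to x1, then fit."""
    x1 = rng.normal(size=100)
    x2 = x1 + rng.normal(scale=0.01, size=100)   # nearly collinear predictor
    X = np.column_stack([x1, x2])
    y = x1 + rng.normal(scale=0.5, size=100)
    return np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

ols   = np.array([one_fit(0.0) for _ in range(300)])
ridge = np.array([one_fit(1.0) for _ in range(300)])

# Sampling variability of the first coefficient under each estimator.
print(ols[:, 0].std(), ridge[:, 0].std())
```

The OLS coefficients are individually meaningless here (huge opposite-signed pairs that nearly cancel), which is exactly the "depends on which other predictors are included" problem described above.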
|
14,884
|
What is ridge regression? [duplicate]
|
The posts above nicely describe ridge regression and its mathematical underpinning. However, they don't address the issue of when ridge regression should be used, compared to other shrinkage methods. That might be because there is no specific situation where one shrinkage method has been shown to perform better than another. There are many different ways of addressing the issue of multicollinearity among the predictor variables, depending on its source. Ridge regression happens to be one of those methods: it addresses multicollinearity by shrinking the coefficient estimates of the highly correlated variables (for large values of the tuning parameter, shrinking them close to, though unlike the lasso never exactly equal to, zero).
Unlike the least squares method, ridge regression produces a different set of coefficient estimates for each value of the tuning parameter. So it's advisable to combine the results of ridge regression (the sets of coefficient estimates) with a model selection technique (such as cross-validation) to determine the most appropriate model for the given data.
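Numerically, the set of coefficient estimates over the tuning parameter looks like this (a numpy sketch with simulated data, using the closed form $\hat\beta_\lambda = (X^\top X + \lambda I)^{-1} X^\top y$; in practice each $\lambda$ on the grid would be scored by cross-validation as described above):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
beta_true = np.array([2.0, -1.0, 0.5, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=50)

lambdas = np.logspace(-2, 3, 20)
path = np.array([
    np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)
    for lam in lambdas
])
norms = np.linalg.norm(path, axis=1)
print(norms[0], norms[-1])  # coefficients shrink toward zero as lambda grows
```

The norm of the coefficient vector decreases monotonically along the path, but no coefficient ever hits exactly zero, which is the characteristic difference from the lasso.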
|
14,885
|
How does extreme random forest differ from random forest?
|
This is pretty simple: RF optimizes the splits in its trees (i.e., it selects those which give the best information gain with respect to the decision), while ERF makes them at random. Now,
optimisation costs (not much, but still), so ERF is usually faster.
optimisation may contribute to correlation of the trees in the ensemble, or to overall overfitting, so ERFs are probably more robust, especially if the signal is weak.
Going even further in this direction, you can gain extra speed by equalising the splits on each tree level, this way converting the trees into ferns, which are also pretty interesting; there is my R implementation of such an individuum.
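The two split rules can be compared at a single node in a few lines of numpy (a toy sketch of one node, not a full forest): by construction, the optimized split can never have a lower impurity decrease on the training data than a random one, which is where both the extra cost and the extra tree decorrelation come from.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=200)
labels = (x > 0.6).astype(int)          # a clean decision threshold at 0.6

def gini(y):
    if len(y) == 0:
        return 0.0
    p = y.mean()
    return 2 * p * (1 - p)

def gain(threshold):
    """Impurity decrease of splitting x at `threshold` (CART-style)."""
    left, right = labels[x < threshold], labels[x >= threshold]
    w = len(left) / len(labels)
    return gini(labels) - (w * gini(left) + (1 - w) * gini(right))

best_gain = max(gain(t) for t in x)     # RF-style: optimized split
random_gain = gain(rng.uniform())       # ERF-style: random split
print(best_gain, random_gain)
```

On this clean example the optimized split finds the 0.6 boundary exactly (children are pure), while the random split usually leaves the node mixed.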
|
14,886
|
What summary statistics to use with categorical or qualitative variables?
|
In general, the answer is no. However, one could argue that you can take the median of ordinal data, but you will, of course, get a category as the median, not a number. The median divides the data equally: half above, half below. Ordinal data depend only on order.
Further, in some cases the ordinality can be made into rough interval-level data. This is true when the ordinal data are grouped (e.g. questions about income are often asked this way). In this case, you can find a precise median, and you may be able to approximate the other values, especially if the lower and upper bounds are specified: you can assume some distribution (e.g. uniform) within each category. Another case of ordinal data that can be made interval is when the levels are given numeric equivalents. For example: never (0%), sometimes (10-30%), about half the time (50%), and so on.
To (once again) quote David Cox:
There are no routine statistical questions, only questionable
statistical routines
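A quick plain-Python sketch of the median of ordinal data (the Likert levels and the counts are made up):

```python
LEVELS = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

responses = (["disagree"] * 4 + ["neutral"] * 3 +
             ["agree"] * 7 + ["strongly agree"] * 3)

# Sort by the ordinal position of each level, then take the middle element.
codes = sorted(LEVELS.index(r) for r in responses)
median_level = LEVELS[codes[len(codes) // 2]]
print(median_level)
```

Note that for an even number of observations the median can fall between two categories, in which case there is no single median category to report, only the pair.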
|
14,887
|
What summary statistics to use with categorical or qualitative variables?
|
As has been mentioned, means, SDs and hinge points are not meaningful for categorical data. Hinge points (e.g., median and quartiles) may be meaningful for ordinal data. Your title also asks what summary statistics should be used to describe categorical data. It is standard to characterize categorical data by counts and percentages. (You may also want to include a 95% confidence interval around the percentages.) For example, if your data were:
"Hispanic" "Hispanic" "White" "White"
"White" "White" "African American" "Hispanic"
"White" "White" "White" "other"
"White" "White" "White" "African American"
"Asian"
You could summarize them like so:
White 10 (59%)
African American 2 (12%)
Hispanic 3 (18%)
Asian 1 ( 6%)
other 1 ( 6%)
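The counts and percentages above, plus the suggested 95% confidence intervals, can be computed in a few lines of plain Python (a Wald interval is used here to keep the sketch short; a Wilson interval would behave better for the small counts in this example):

```python
from collections import Counter
from math import sqrt

data = (["Hispanic"] * 3 + ["White"] * 10 +
        ["African American"] * 2 + ["Asian"] + ["other"])

n = len(data)
for category, count in Counter(data).most_common():
    p = count / n
    half = 1.96 * sqrt(p * (1 - p) / n)       # Wald 95% CI half-width
    print(f"{category:<20}{count:>3} ({p:.0%})  CI: {p - half:.0%} to {p + half:.0%}")
```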
|
14,888
|
What summary statistics to use with categorical or qualitative variables?
|
If you have nominal variables, there is no ordering or distance function, so how could you define any of the summary statistics that you mention? I don't think you can. Quartiles and range at least require an ordering, and means and variances require numerical data. I think bar graphs and pie charts are typical examples of proper ways to summarize qualitative variables that are not ordinal.
|
14,889
|
What summary statistics to use with categorical or qualitative variables?
|
Mode still works! Is that not an important summary statistic? (What's the most common category?) I think the median suggestion has little to no value as a statistic, but the mode does.
Also, a count of distinct values would be valuable. (How many categories do you have?)
You might create ratios, like (most common category) / (least common category), or (#1 most common category) / (#2 most common category). Also (most common category) / (all other categories), as in the 80/20 rule.
You can also assign numbers to your categories and go nuts with all the usual statistics: AA = 1, Hisp = 2, etc. Now you can compute mean, median, mode, SD, etc.
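A plain-Python sketch of the first three suggestions (mode, distinct count, ratios), reusing the ethnicity data shown in another answer:

```python
from collections import Counter

data = (["White"] * 10 + ["Hispanic"] * 3 +
        ["African American"] * 2 + ["Asian"] + ["other"])

counts = Counter(data)
ranked = counts.most_common()

mode, mode_n = ranked[0]                      # most common category
n_distinct = len(counts)                      # how many categories
top_to_second = ranked[0][1] / ranked[1][1]   # #1 / #2 most common
top_to_rest = mode_n / (len(data) - mode_n)   # most common / all others

print(mode, n_distinct, top_to_second, top_to_rest)
```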
|
14,890
|
What summary statistics to use with categorical or qualitative variables?
|
I do appreciate the other answers, but it seems to me that some topological background would give a much-needed structure to the responses.
Definitions
Let's start with establishing the definitions of the domains:
categorical variable is one whose domain contains elements with no known relationship between them (thus we have only categories). Examples depend on the context, but I'd say that in the general case it is difficult to compare days of the week: is Monday before Sunday, and if so, what about next Monday? Maybe an easier, though less used, example is pieces of clothing: without some context that would make sense of an order, it is difficult to say whether trousers come before jumpers or vice versa.
ordinal variable is one that has a total order defined over the domain, i.e. for every two elements of the domain, we can tell that either they are identical or one is bigger than the other. A Likert scale is a good example of a definition of an ordinal variable: "somewhat agree" is definitely closer to "strongly agree" than "disagree" is.
interval variable is one whose domain defines distances between elements (a metric), thus allowing us to define intervals.
Domain examples
As the most common sets that we use, the natural and real numbers have standard total orders and metrics. This is why we need to be careful when we assign numbers to our categories: if we are not careful to disregard order and distance, we practically convert our categorical data into interval data. When one uses a machine learning algorithm without knowing how it works, one risks making such assumptions unwillingly, thus potentially invalidating one's own results. For example, most popular deep learning algorithms work with real numbers, taking advantage of their interval and continuous properties. As another example, think of 5-point Likert scales, and how the analysis we apply to them assumes that the distance between strongly agree and agree is the same as between disagree and neither agree nor disagree. It is hard to make a case for such a relationship.
Another set that we often work with is strings. There are a number of string similarity metrics that come in handy when working with strings. However, these are not always useful. For example, for addresses, John Smith Street and John Smith Road are quite close in terms of string similarity, but obviously represent two different entities that could be miles apart.
Summary statistics
Ok, now let's see how some summary statistics fit in this. Since statistics works with numbers, its functions are well defined over intervals. But let's see examples on whether/how we could generalise them to categorical or ordinal data:
mode - both when working with categorical and ordinal data, we can tell which element is most frequently used. So we have this. Then we can also derive all the other measures that @Maddenker lists in their answer. @gung's confidence interval could also be useful.
median - as @peter-flom says, as long as you have an order, you can derive your median.
mean, but also standard deviation, percentiles, etc. - you get these only with interval data, due to the need for a distance metric.
Example of data contextuality
Finally, I want to stress again that the order and metrics you define on your data are very contextual. This should be obvious by now, but let me give you a last example: when working with geographical locations, we have lots of different ways to approach them:
if we are interested in the distance between them, we can work with their geolocation, which basically gives us a two-dimensional numerical space, thus interval.
if we are interested in their part-of relationships, we can define an ordering (e.g. a street is part of a city, two cities are at the same level, a continent contains a country)
if we are interested in whether two strings represent the same address, we could work with some string distance that would tolerate spelling mistakes and swapped word positions, but make sure to distinguish different terms and names. This is not an easy thing to do, but it makes the point.
There are plenty of other use cases, which all of us encounter daily, where none of this makes sense. In some of them there is nothing more to do than treat the addresses as just different categories; in others it comes down to very smart data modelling and preprocessing.
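The hierarchy above (mode for everything, median once there is an order, mean once there is a metric) can be written down directly (a plain-Python sketch; ordinal values are assumed to be passed as their rank codes, since Python would otherwise sort strings lexicographically rather than by the intended order):

```python
from collections import Counter
from statistics import mean, median

def summarize(values, scale):
    """Return only the summary statistics that the measurement scale permits."""
    stats = {"mode": Counter(values).most_common(1)[0][0]}
    if scale in ("ordinal", "interval"):          # needs a total order
        stats["median"] = median(values)
    if scale == "interval":                       # needs a distance metric
        stats["mean"] = mean(values)
    return stats

print(summarize(["red", "blue", "blue"], "nominal"))  # mode only
print(summarize([1, 2, 2, 5], "interval"))            # mode, median, mean
```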
|
What summary statistics to use with categorical or qualitative variables?
|
I do appreciate the other answers, but it seems to me that some topological background would give a much-needed structure to the responses.
Definitions
Let's start with establishing the definitions of
|
What summary statistics to use with categorical or qualitative variables?
I do appreciate the other answers, but it seems to me that some topological background would give a much-needed structure to the responses.
Definitions
Let's start with establishing the definitions of the domains:
categorical variable is one whose domain contains elements, but there's no known relationship between them (thus we have only categories). Examples, depend on the context, but I'd say in the general case, it is difficult to compare days of the week: is Monday before Sunday, if so, what about next Monday? Maybe an easier, but less used example are pieces of clothes: without providing some context that would make sense of an order, it is difficult to say whether trousers come before jumpers or vice versa.
ordinal variable is one that has a total order defined over the domain, i.e. for every two elements of the domain, we can tell that either they are identical, or one is bigger than the other. A Likert-scale is a good example of a definition of an ordinal variable. "somewhat agree" is definitely closer to "strongly agree" than "disagree".
interval variable is one, whose domain defines distances between elements (a metric), thus allowing us to define intervals.
Domain examples
As the most common set that we use, natural and real numbers have standard total order and metrics. This is why we need to be careful when we assign numbers to our categories. If we are not careful to disregard order and distance, we practically convert our categorical data in interval data. When one uses a machine learning algorithm without knowing how it works, one risks making such assumptions unwillingly, thus potentially invalidating one's own results. For example, most popular deep learning algorithms work with real numbers taking advantage of their interval and continuous properties. Another example, think of 5-point Likert scales, and how the analysis we apply on them assumes that the distance between strongly agree and agree is the same as disagree and neither agree nor disagree. Hard to make a case for such a relationship.
Another set that we often work with is strings. There are a number of string similarity metrics that come in handy when working with strings. However, these are not always useful. For example, for addresses, John Smith Street and John Smith Road are quite close in terms of string similarity, but obviously represent two different entities that could be miles apart.
Summary statistics
Ok, now let's see how some summary statistics fit in this. Since statistics works with numbers, its functions are well defined over intervals. But let's see examples on whether/how we could generalise them to categorical or ordinal data:
mode - both when working with categorical and ordinal data, we can tell which element is most frequently used. So we have this. Then we can also derive all the other measures that @Maddenker lists in their answer. @gung's confidence interval could also be useful.
median - as @peter-flom says, as long as you have an order, you can derive your median.
mean, but also standard deviation, percentiles, etc. - you get these only with interval data, due to the need for a distance metric.
Example of data contextuality
At the end, I want to stress again that the order and metrics you define on your data are very contextual. This should be obvious by now, but let me give you one last example: when working with geographical locations, we have lots of different ways to approach them:
if we are interested in the distance between them, we can work with their geolocations, which basically gives us a two-dimensional numerical space, thus interval data.
if we are interested in their part-of relationships, we can define an order (e.g. a street is part of a city, two cities are equal, a continent contains a country)
if we are interested in whether two strings represent the same address, we could work with some string distance that tolerates spelling mistakes and swapped word positions, but makes sure to distinguish different terms and names. This is not easy, but it makes the case.
There are plenty of other use cases, which all of us encounter daily, where none of this makes sense. In some of them, there's nothing more to do than treat the addresses as just different categories; in others, it comes down to very smart data modelling and preprocessing.
|
What summary statistics to use with categorical or qualitative variables?
|
14,891
|
Distribution that describes the difference between negative binomial distributed variables?
|
I don't know the name of this distribution but you can just derive it from the law of total probability. Suppose $X, Y$ each have negative binomial distributions with parameters $(r_{1}, p_{1})$ and $(r_{2}, p_{2})$, respectively. I'm using the parameterization where $X,Y$ represent the number of successes before the $r_{1}$'th, and $r_{2}$'th failures, respectively. Then,
$$ P(X - Y = k) = E_{Y} \Big( P(X-Y = k) \Big) = E_{Y} \Big( P(X = k+Y) \Big) =
\sum_{y=0}^{\infty} P(Y=y)P(X = k+y) $$
We know
$$ P(X = k + y) = {k+y+r_{1}-1 \choose k+y} (1-p_{1})^{r_{1}} p_{1}^{k+y} $$
and
$$ P(Y = y) = {y+r_{2}-1 \choose y} (1-p_{2})^{r_{2}} p_{2}^{y} $$
so
$$ P(X-Y=k) = \sum_{y=0}^{\infty} {y+r_{2}-1 \choose y} (1-p_{2})^{r_{2}} p_{2}^{y} \cdot
{k+y+r_{1}-1 \choose k+y} (1-p_{1})^{r_{1}} p_{1}^{k+y} $$
That's not pretty (yikes!). The only simplification I see right off is
$$ p_{1}^{k} (1-p_{1})^{r_{1}} (1-p_{2})^{r_{2}}
\sum_{y=0}^{\infty} (p_{1}p_{2})^{y} {y+r_{2}-1 \choose y}
{k+y+r_{1}-1 \choose k+y} $$
which is still pretty ugly. I'm not sure if this is helpful but this can also be re-written as
$$ \frac{ p_{1}^{k} (1-p_{1})^{r_{1}} (1-p_{2})^{r_{2}} }{ (r_{1}-1)! (r_{2}-1)! }
\sum_{y=0}^{\infty}
(p_{1}p_{2})^{y}
\frac{ (y+r_{2}-1)! (k+y+r_{1}-1)! }{y! (k+y)! } $$
I'm not sure if there is a simplified expression for this sum, but it can be approximated numerically if you only need it to calculate $p$-values.
I verified with simulation that the above calculation is correct. Here is a crude R function to calculate this mass function and carry out a few simulations
f = function(k, r1, r2, p1, p2, UB)
{
  S = 0
  const = (p1^k) * ((1-p1)^r1) * ((1-p2)^r2)
  const = const / (factorial(r1-1) * factorial(r2-1))
  for (y in 0:UB)
  {
    iy = ((p1*p2)^y) * factorial(y+r2-1) * factorial(k+y+r1-1)
    iy = iy / (factorial(y) * factorial(y+k))
    S = S + iy
  }
  return(S * const)
}
### Sims
r1 = 6; r2 = 4;
p1 = .7; p2 = .53;
X = rnbinom(1e5,r1,p1)
Y = rnbinom(1e5,r2,p2)
mean( (X-Y) == 2 )
[1] 0.08508
f(2,r1,r2,1-p1,1-p2,20)
[1] 0.08509068
mean( (X-Y) == 1 )
[1] 0.11581
f(1,r1,r2,1-p1,1-p2,20)
[1] 0.1162279
mean( (X-Y) == 0 )
[1] 0.13888
f(0,r1,r2,1-p1,1-p2,20)
[1] 0.1363209
I've found the sum converges very quickly for all of the values I tried, so setting UB higher than 10 or so
is not necessary. Note that R's built-in rnbinom function parameterizes the negative binomial in terms of
the number of failures before the $r$'th success, in which case you'd need to replace all of the $p_{1}, p_{2}$'s
in the above formulas with $1-p_{1}, 1-p_{2}$ for compatibility.
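For an independent sanity check, here is a sketch of the same truncated sum in Python (my addition, not part of the original answer). scipy's nbinom uses the same failures-before-the-n-th-success convention as R's rnbinom, so the same $1-p$ substitution applies.

```python
# Cross-check of P(X - Y = k) via scipy. In the answer's parameterization
# (successes before the r-th failure, success probability p), X corresponds
# to scipy.stats.nbinom(r1, 1 - p1).
from scipy.stats import nbinom

def diff_pmf(k, r1, r2, p1, p2, ub=60):
    """P(X - Y = k), truncating the infinite sum over y at ub."""
    X = nbinom(r1, 1 - p1)
    Y = nbinom(r2, 1 - p2)
    return sum(Y.pmf(y) * X.pmf(k + y) for y in range(max(0, -k), ub + 1))

# Same setup as the simulation above (there p1 = .7 etc. are rnbinom success
# probabilities, i.e. 1 - p1 in the derivation's notation):
print(diff_pmf(2, 6, 4, 0.3, 0.47))  # ≈ 0.0851, matching f(2, ...) above
```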
|
14,892
|
Distribution that describes the difference between negative binomial distributed variables?
|
Yes: the skewed generalized discrete Laplace distribution describes the difference of two negative-binomial-distributed random variables.
For details, see the article "Skewed generalized discrete Laplace distribution" by Seetha Lekshmi V. and Simi Sebastian, which is available online.
|
14,893
|
Why do the ANOVA assumptions (equality of variance, normality of residuals) matter?
|
The assumptions matter insofar as they affect the properties of the hypothesis tests (and intervals) you might use whose distributional properties under the null are calculated relying on those assumptions.
In particular, for hypothesis tests, the things we might care about are how far the true significance level might be from what we want it to be, and whether power against alternatives of interest is good.
In relation to the assumptions you ask about:
1. Equality of variance
The variance of your dependent variable (residuals) should be equal in each cell of the design
This can certainly impact the significance level, at least when sample sizes are unequal.
(Edit:) An ANOVA F-statistic is the ratio of two estimates of variance (the partitioning and comparison of variances is why it's called analysis of variance). The denominator is an estimate of the supposedly-common-to-all-cells error variance (calculated from residuals), while the numerator, based on variation in the group means, will have two components, one from variation in the population means and one due to the error variance. If the null is true, the two variances that are being estimated will be the same (two estimates of the common error variance); this common but unknown value cancels out (because we took a ratio), leaving an F-statistic that only depends on the distributions of the errors (which under the assumptions we can show has an F distribution). (Similar comments apply to the t-test I used for illustration.)
[There's a little bit more detail on some of that information in my answer here]
However, here the two population variances differ across the two differently-sized samples. Consider the denominator (of the F-statistic in ANOVA and of the t-statistic in a t-test) - it is composed of two different variance estimates, not one, so it will not have the "right" distribution (a scaled chi-square for the F and its square root in the case of a t - both the shape and the scale are issues).
As a result, the F-statistic or the t-statistic will no longer have the F- or t-distribution, but the manner in which it is affected is different depending on whether the large or the smaller sample was drawn from the population with the larger variance. This in turn affects the distribution of p-values.
Under the null (i.e. when the population means are equal), the distribution of p-values should be uniformly distributed. However, if the variances and the sample sizes are unequal but the means are equal (so we don't want to reject the null), the p-values are not uniformly distributed. I did a small simulation
to show you what happens. In this case, I used only 2 groups so ANOVA is equivalent to a two-sample t-test with the equal variance assumption. So I simulated samples from two normal distributions one with standard deviation ten
times as large as the other, but equal means.
For the left side plot, the larger (population) standard deviation was for n=5 and the smaller standard deviation was for n=30. For the right side plot the larger standard deviation went with n=30 and the smaller with n=5. I simulated each one 10000 times and found the p-value each time. In each case you
want the histogram to be completely flat (rectangular), since this means all tests conducted at some significance level $\alpha$ will actually get that type I error rate. In particular, it's most important that the leftmost parts of the histogram stay close to the grey line:
As we see in the left side plot (larger variance in the smaller sample), the
p-values tend to be very small -- we would reject the null hypothesis very often (nearly half the time in this example) even though the null is true. That is, our significance levels are much larger than we asked for. In the right hand side plot we see the p-values are mostly large (and so our significance level is much smaller than we asked for) -- in fact not once in ten thousand simulations did we reject at the 5% level (the smallest p-value here was 0.055). [This may not sound like such a bad thing, until we remember that we will also have very low power to go with our very low significance level.]
That's quite a consequence. This is why it's a good idea to use a Welch-Satterthwaite type t-test or ANOVA when we don't have a good reason to assume that the variances will be close to equal -- by comparison it's barely affected
in these situations (I simulated this case as well; the two distributions of simulated p-values - which I have not shown here - came out quite close to flat).
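The two-group simulation described above can be sketched in Python as well (my stand-in, not the original code; scipy's pooled and Welch t-tests play the role of the equal-variance ANOVA and its Welch-Satterthwaite alternative):

```python
# Under the null (equal means), unequal variances plus unequal sample sizes
# inflate the pooled test's type I error when the small sample has the large
# variance; Welch's test stays near the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sim = 5000
reject_pooled = reject_welch = 0
for _ in range(n_sim):
    x = rng.normal(0, 10, size=5)    # small sample from the sd=10 population
    y = rng.normal(0, 1, size=30)    # large sample from the sd=1 population
    reject_pooled += stats.ttest_ind(x, y, equal_var=True).pvalue < 0.05
    reject_welch += stats.ttest_ind(x, y, equal_var=False).pvalue < 0.05

print(reject_pooled / n_sim)  # far above the nominal 0.05
print(reject_welch / n_sim)   # close to the nominal 0.05
```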
2. Conditional distribution of the response (DV)
Your dependent variable (residuals) should be approximately normally distributed for each cell of the design
This is somewhat less directly critical - for moderate deviations from normality, the significance level is not much affected in larger samples (though the power can be!).
Here's one example in which the values are exponentially distributed (with identical distributions and sample sizes), where we can see this significance level issue being substantial at small $n$ but reducing with large $n$.
We see that at n=5 there are substantially too few small p-values (the significance level for a 5% test would be about half what it should be), but
at n=50 the problem is reduced -- for a 5% test in this case the true significance level is about 4.5%.
So we might be tempted to say "well, that's fine, if n is big enough to get the significance level to be pretty close", but we may also be throwing away a good deal of power. In particular, it's known that the asymptotic relative efficiency of the t-test relative to widely used alternatives can go to 0. This means that better test choices can get the same power with a vanishingly small fraction of the sample size required to get it with the t-test. You don't need anything out of the ordinary to be going on to need more than say twice as much data to have the same power with the t as you would need with an alternative test - moderately heavier-than normal tails in the population distribution and moderately large samples can be enough to do it.
(Other choices of distribution may make the significance level higher than it should be, or substantially lower than we saw here.)
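A Python stand-in for that exponential simulation (again my sketch, not the original code; equal-sized samples from the same exponential distribution, so the null is true):

```python
# True significance level of a nominal-5% two-sample t-test on exponential
# data under the null: well below 0.05 at n=5, closer to it by n=50.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
level = {}
for n in (5, 50):
    pvals = [stats.ttest_ind(rng.exponential(size=n),
                             rng.exponential(size=n)).pvalue
             for _ in range(5000)]
    level[n] = np.mean(np.array(pvals) < 0.05)

print(level)  # level[5] well below 0.05, level[50] closer to it
```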
|
14,894
|
Why do the ANOVA assumptions (equality of variance, normality of residuals) matter?
|
In a nutshell, ANOVA is adding, squaring and averaging residuals. Residuals tell you how well your model fits the data. For this example, I used the PlantGrowth dataset in R:
Results from an experiment to compare yields (as measured by dried weight of plants) obtained under a control and two different treatment conditions.
This first plot shows you the grand mean across all three treatment levels:
The red lines are the residuals. Now by squaring and adding the lengths of those individual lines, you will get a value that tells you how well the mean (our model) describes the data. A small number tells you the mean describes your data points well; a bigger number tells you the mean describes your data not so well. This number is called the Total Sums of Squares:
$SS_{total}=\sum(x_i-\bar{x}_{grand})^2$, where $x_{i}$ represents the individual data point and $\bar{x}_{grand}$ the grand mean across the dataset.
Now you do the same thing for the residuals in your treatment (Residual Sums of Squares, which is also known as the noise in the treatment levels):
And the formula:
$SS_{residuals}=\sum(x_{ik}-\bar{x}_{k})^2$, where $x_{ik}$ are the individual data points $i$ in each of the $k$ levels and $\bar{x}_{k}$ the mean within treatment level $k$.
Lastly, we need to determine the signal in the data, which is known as the Model Sums of Squares, which will later be used to calculate whether the treatment means are any different from the grand mean:
And the formula:
$SS_{model}=\sum n_{k}(\bar{x}_k-\bar{x}_{grand})^2$, where $n_{k}$ is the sample size $n$ in your $k$ number of levels, and $\bar{x}_k$ as well as $\bar{x}_{grand}$ the mean within and across the treatment levels, respectively.
Now the disadvantage with the sums of squares is that they get bigger as the sample size increases. To express those sums of squares relative to the number of observations in the data set, you divide them by their degrees of freedom, turning them into variances. So after squaring and adding your data points you are now averaging them using their degrees of freedom:
$df_{total}=(n-1)$
$df_{residual}=(n-k)$
$df_{model}=(k-1)$
where $n$ is the total number of observations and $k$ the number of treatment levels.
This results in the Model Mean Square and the Residual Mean Square (both are variances); their ratio, the signal to noise ratio, is known as the F-value:
$MS_{model}=\frac{SS_{model}}{df_{model}}$
$MS_{residual}=\frac{SS_{residual}}{df_{residual}}$
$F=\frac{MS_{model}}{MS_{residual}}$
The F-value describes the signal to noise ratio, or whether the treatment means are any different from the grand mean. The F-value is now used to calculate p-values and those will decide whether at least one of the treatment means will be significantly different from the grand mean or not.
Now I hope you can see that the assumptions are based on calculations with residuals and why they are important. Since we are adding, squaring and averaging residuals, we should make sure that before we do this, the data in those treatment groups behave similarly, or else the F-value may be biased to some degree and inferences drawn from this F-value may not be valid.
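Putting the formulas above together, a small sketch (with made-up three-group data rather than R's PlantGrowth) shows the partition $SS_{total}=SS_{model}+SS_{residual}$ and checks the hand-computed F against scipy's one-way ANOVA:

```python
# Sums-of-squares bookkeeping for one-way ANOVA, on hypothetical data.
import numpy as np
from scipy import stats

groups = [np.array([4.5, 5.0, 5.5, 4.8]),
          np.array([4.2, 4.6, 4.1, 4.9]),
          np.array([5.6, 5.9, 5.4, 6.1])]
allx = np.concatenate(groups)
grand = allx.mean()

ss_total = ((allx - grand) ** 2).sum()
ss_resid = sum(((g - g.mean()) ** 2).sum() for g in groups)   # noise
ss_model = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)  # signal

n, k = len(allx), len(groups)
F = (ss_model / (k - 1)) / (ss_resid / (n - k))
print(np.isclose(ss_total, ss_model + ss_resid))          # the partition holds
print(np.isclose(F, stats.f_oneway(*groups).statistic))   # matches scipy
```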
Edit: I added two paragraphs to address the OP's question 2 and 1 more specifically.
Normality assumption:
The mean (or expected value) is often used in statistics to describe the center of a distribution, however it is not very robust and is easily influenced by outliers. The mean is the simplest model we can fit to the data. Since in ANOVA we are using the mean to calculate the residuals and the sums of squares (see formulae above), the data should be roughly normally distributed (normality assumption). If this is not the case, the mean may not be the appropriate model for the data since it wouldn't give us a correct location of the center of the sample distribution. Instead one could use the median, for example (see non-parametric testing procedures).
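A tiny illustration of that robustness point (made-up numbers):

```python
# One outlier drags the mean far from the bulk of the data; the median holds.
from statistics import mean, median

x = [4.8, 5.0, 5.1, 5.2, 30.0]   # one outlying observation
print(mean(x))    # pulled up toward the outlier
print(median(x))  # still represents the bulk of the data
```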
Homogeneity of variance assumption:
Later when we calculate the mean squares (model and residual), we are pooling the individual sums of squares from the treatment levels and averaging them (see formulae above). By pooling and averaging we are losing the information of the individual treatment level variances and their contribution to the mean squares. Therefore, we should have roughly the same variance among all treatment levels so that the contribution to the mean squares is similar. If the variances between those treatment levels were different, then the resulting mean squares and F-value would be biased and will influence the calculation of the p-values making inferences drawn from these p-values questionable (see also @whuber 's comment and @Glen_b 's answer).
This is how I see it for myself. It may not be 100% accurate (I am not a statistician) but it helps me understanding why satisfying the assumptions for ANOVA is important.
|
14,895
|
Why do the ANOVA assumptions (equality of variance, normality of residuals) matter?
|
ANOVA is just a method: it calculates the F-statistic from your samples and compares it to the F-distribution.
You need some assumptions to decide what you want to compare and to calculate the p-values.
If you don't meet those assumptions, you could calculate other things, but it won't be an ANOVA.
The most useful distribution is the normal one (because of the CLT); that's why it's the most commonly used.
If your data are not normally distributed, you need at least to know their distribution in order to calculate something.
Homoscedasticity is a common assumption in regression analysis as well; it just makes things easier.
We need some assumptions to start with.
If you don't have homoscedasticity, you can try transforming your data to achieve it.
The ANOVA F-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors.
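As a minimal sketch of what "comparing it to the F-distribution" means: the p-value is the upper-tail probability of the F-distribution beyond the observed statistic. The F-value and degrees of freedom below are made up for illustration:

```python
from scipy import stats

# Hypothetical observed F statistic with 2 and 27 degrees of freedom
f_obs, df1, df2 = 4.85, 2, 27

# "Comparing to the F-distribution" = asking how much upper-tail
# probability lies beyond the observed statistic
p_value = stats.f.sf(f_obs, df1, df2)
print(round(p_value, 4))
```

If the assumptions fail, the observed statistic no longer follows this reference distribution, and the tail probability above stops being a valid p-value.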
|
14,896
|
What loss function should one use to get a high precision or high recall binary classifier?
|
Artificially constructing a balanced training set is debatable, quite controversial actually. If you do it, you should empirically verify that it really works better than leaving the training set unbalanced. Artificially balancing the test-set is almost never a good idea. The test-set should represent new data points as they come in without labels. You expect them to be unbalanced, so you need to know if your model can handle an unbalanced test-set. (If you don't expect new records to be unbalanced, why are all your existing records unbalanced?)
Regarding your performance metric, you will always get what you ask for. If accuracy is not what you need foremost in an unbalanced set (because not only the classes but also the misclassification costs are unbalanced), then don't use it. If you use accuracy as the metric and do all your model selection and hyperparameter tuning by always taking the one with the best accuracy, you are optimizing for accuracy.
I take the minority class as the positive class; this is the conventional way of naming them. Thus precision and recall as discussed below are precision and recall of the minority class.
If the only important thing is to identify all the minority class records, you could take recall. You are thus accepting more false positives.
Optimizing only precision would be a very weird idea. You would be telling your classifier that it's not a problem to underdetect the minority class. The easiest way to have a high precision is to be overcautious in declaring the minority class.
If you need precision and recall, you could take F-measure. It is the harmonic mean between precision and recall and thus penalizes outcomes where both metrics diverge.
If you know the concrete misclassification costs in both directions (and the profits of correct classification if they are different per class), you can put all that in a loss function and optimize it.
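A small sketch of the metrics discussed above on toy data (the labels and predictions are made up; class 1 is the minority/positive class):

```python
import numpy as np

# Toy unbalanced data; the minority class is labeled 1 (the positive class)
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 0, 1, 0, 0, 0, 1, 1, 0, 0])

tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
fp = np.sum((y_true == 0) & (y_pred == 1))   # false positives
fn = np.sum((y_true == 1) & (y_pred == 0))   # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
# F-measure: harmonic mean of precision and recall, penalizing divergence
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, round(f1, 3))
```

Optimizing recall alone would reward predicting 1 everywhere; optimizing precision alone would reward predicting 1 almost never; the F-measure punishes both extremes.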
|
14,897
|
What loss function should one use to get a high precision or high recall binary classifier?
|
You are making several assumptions. It is best to think of the ultimate goal in general terms, then formulate a strategy that meets that goal. For example, do you really need forced-choice classification, and is the signal:noise ratio large enough to support that (good examples: sound and image recognition)? Or is the signal:noise ratio low, or are you interested in tendencies? For the latter, risk estimation is for you. The choice is key and dictates the predictive accuracy metric you choose. For more thoughts on all this see http://www.fharrell.com/2017/01/classification-vs-prediction.html and http://www.fharrell.com/2017/03/damage-caused-by-classification.html.
The majority of problems concern decision making, and optimum decisions come from risk estimation coupled with a loss/cost/utility function.
One of the best aspects of a risk (probability) estimation approach is that it handles gray zones where it would be a mistake to make a classification or decision without acquiring more data. And then there is the fact that probability estimation does not require (even does not allow) one to "balance" the outcomes by artificially manipulating the sample.
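A minimal sketch of coupling a risk estimate with a loss/cost function, as described above. The costs and risk estimates below are purely illustrative, not from any real application:

```python
# Decide from an estimated risk plus a cost function: act (classify positive)
# only when the expected cost of missing a case exceeds the cost of a false alarm.
cost_false_negative = 10.0   # illustrative cost of missing a true positive
cost_false_positive = 1.0    # illustrative cost of a false alarm

# Act when p * cost_fn > (1 - p) * cost_fp, i.e. when
# p > cost_fp / (cost_fp + cost_fn)
threshold = cost_false_positive / (cost_false_positive + cost_false_negative)

risk_estimates = [0.05, 0.08, 0.12, 0.40, 0.91]
decisions = [p > threshold for p in risk_estimates]
print(round(threshold, 4), decisions)
```

Note that the probability model is fit once; only the decision threshold changes when the costs change, which is why no artificial "balancing" of the sample is needed.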
|
14,898
|
What loss function should one use to get a high precision or high recall binary classifier?
|
Not too long after you asked this question, there was an interesting research paper entitled Scalable Learning of Non-Decomposable Objectives that I stumbled across from a StackOverflow question that finds ways to build several interesting loss functions:
Precision at fixed recall
Recall at fixed precision
AUCROC maximization
There was an implementation for TF 1.x over here. Unfortunately it does not appear to have garnered much attention so it is not being actively maintained; however, I think this is a quite valuable approach when trying to build real-world binary classifiers.
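As a rough illustration of what "precision at fixed recall" measures (this is a direct threshold sweep on made-up scores, not the paper's differentiable surrogate loss):

```python
import numpy as np

# Made-up classifier scores and labels, not from the paper's implementation.
# "Precision at fixed recall" here: sweep the decision threshold and keep the
# best precision among operating points whose recall meets the target.
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.75, 0.8, 0.3, 0.6, 0.9, 0.4, 0.7])
target_recall = 0.75

best_precision = 0.0
for t in np.unique(scores):
    pred = scores >= t
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    fn = np.sum(~pred & (y_true == 1))
    recall = tp / (tp + fn)
    if recall >= target_recall and tp + fp > 0:
        best_precision = max(best_precision, tp / (tp + fp))
print(best_precision)
```

The paper's contribution is turning this non-decomposable, threshold-dependent quantity into a trainable objective; the sweep above only shows what is being optimized.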
|
14,899
|
What loss function should one use to get a high precision or high recall binary classifier?
|
Regarding your question about whether reweighting training samples is equivalent to multiplying the loss in one of the two cases by a constant: yes, it is. One way to write the logistic regression loss function is
$$\sum_{j=1}^J\log\left\{1+\exp\left[-f\left(x_j\right)\right]\right\}+\sum_{k=1}^K\log\left\{1+\exp\left[f\left(x_k\right)\right]\right\}$$
where $j$ and $k$ denote respective positive and negative instances, and $f(\cdot)$ is the logistic classifier built from features $x$. If you want to give more weight to your negative instances, for example, you might wish to modify your loss as
$$\sum_{j=1}^J\log\left\{1+\exp\left[-f\left(x_j\right)\right]\right\}+\sum_{k=1}^Kw\log\left\{1+\exp\left[f\left(x_k\right)\right]\right\}$$
for some $w>1$. This loss function is minimized by software implementations of weighted logistic regression, but you could also arrive at the same answer by upweighting your negative instances by a factor of $w$ and fitting a standard logistic regression (for example, if $w=2$, then you create 2 copies of each negative instance and fit). Some further details on this kind of approach here. And there is a general warning about what happens to parameter standard errors here, but this may not be such a concern if you're solely doing prediction.
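A quick numerical check of this equivalence, using the loss function written above on made-up classifier scores:

```python
import numpy as np

def logistic_loss(f_pos, f_neg, w=1.0):
    """Loss from the answer: sum over positives of log(1+exp(-f)) plus
    w times the sum over negatives of log(1+exp(f))."""
    return np.sum(np.log1p(np.exp(-f_pos))) + w * np.sum(np.log1p(np.exp(f_neg)))

rng = np.random.default_rng(1)
f_pos = rng.normal(size=5)   # made-up classifier scores on positive instances
f_neg = rng.normal(size=8)   # made-up classifier scores on negative instances

w = 2
weighted = logistic_loss(f_pos, f_neg, w=w)
# Upweighting by w=2 gives the same loss as fitting with 2 copies of each negative
duplicated = logistic_loss(f_pos, np.concatenate([f_neg, f_neg]))
print(bool(np.isclose(weighted, duplicated)))
```

Since the two losses are identical for any fixed classifier, their minimizers coincide, which is the claim in the answer.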
|
14,900
|
What is meant by proximity in random forests?
|
The term "proximity" means the "closeness" or "nearness" between pairs of cases.
Proximities are calculated for each pair of cases/observations/sample points. If two cases occupy the same terminal node through one tree, their proximity is increased by one. At the end of the run of all trees, the proximities are normalized by dividing by the number of trees. Proximities are used in replacing missing data, locating outliers, and producing illuminating low-dimensional views of the data.
Proximities
The proximities originally formed a NxN matrix. After a tree is grown, put all of the data, both training and oob, down the tree. If cases k and n are in the same terminal node increase their proximity by one. At the end, normalize the proximities by dividing by the number of trees.
Users noted that with large data sets, they could not fit an NxN matrix into fast memory. A modification reduced the required memory size to NxT where T is the number of trees in the forest. To speed up the computation-intensive scaling and iterative missing value replacement, the user is given the option of retaining only the nrnn largest proximities to each case.
When a test set is present, the proximities of each case in the test set with each case in the training set can also be computed. The amount of additional computing is moderate.
quote: https://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm
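A minimal sketch of the bookkeeping described in the quote. The terminal-node assignments below are made up rather than taken from a fitted forest; `leaves[t, i]` stands for the terminal node that case i reaches in tree t:

```python
import numpy as np

# Made-up terminal-node assignments for 5 cases across 3 trees
leaves = np.array([
    [0, 0, 1, 1, 2],   # tree 1
    [3, 3, 3, 4, 4],   # tree 2
    [5, 6, 5, 6, 6],   # tree 3
])
n_trees, n_cases = leaves.shape

# Increase proximity by one whenever two cases share a terminal node,
# then normalize by the number of trees (giving the NxN proximity matrix)
prox = np.zeros((n_cases, n_cases))
for t in range(n_trees):
    prox += (leaves[t][:, None] == leaves[t][None, :]).astype(float)
prox /= n_trees
print(prox)
```

The diagonal is always 1 (a case is always in its own terminal node), and cases that co-occur in terminal nodes more often end up with proximities closer to 1 — this is the matrix used for imputation, outlier detection, and the low-dimensional scaling views.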
|