When can we speak of collinearity
A common way to evaluate collinearity is with variance inflation factors (VIFs). This can be achieved in R using the 'vif' function within the 'car' package. It has an advantage over looking only at the correlations between pairs of variables, as it simultaneously evaluates the correlation between one variable and all the other variables in the model, giving you a single score for each predictor. As stated above, there is no hard and fast cutoff, but VIF scores are often considered problematic once they fall between 5 and 10. I use field-specific rules of thumb for this.

Also, there is nothing necessarily invalid about using correlated predictors (so long as they are not perfectly correlated); you will just need more data to separate their effects. When you don't have enough data, there will be large uncertainties in the parameter estimates of the correlated predictors, and these estimates will be sensitive to re-sampling.

To answer your questions specifically: Don't use correlation coefficients; use VIFs from the model with all predictors and no interactions. VIFs of 5-10 indicate too much correlation; your specific cutoff depends on what you need to do with the model. It depends on the other predictors in the model, which is why it is beneficial to use VIFs. Nope! The statistics will better quantify what you are eyeballing with the scatter plot, unless there is a severe violation of the assumptions of OLS when regressing your predictors against each other.
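The answer refers to `car::vif` in R; as an illustrative sketch only, the same quantity can be computed from its definition, VIF_j = 1 / (1 - R²_j), where R²_j comes from regressing predictor j on the remaining predictors. The data below are simulated purely for illustration.

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R^2_j), where R^2_j is from regressing
    column j of X on the remaining columns (with an intercept)."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)  # strongly correlated with x1
x3 = rng.normal(size=n)                   # roughly independent of both
X = np.column_stack([x1, x2, x3])
print(vif(X))  # large VIFs for x1 and x2, near 1 for x3
```

Note how x3's VIF stays near 1 even though x1 and x2 are nearly collinear: the score isolates each predictor's redundancy with the rest of the model, which is exactly the advantage over pairwise correlations.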
Gaussian RBF vs. Gaussian kernel
The only real difference is in the regularisation that is applied. A regularised RBF network typically uses a penalty based on the squared norm of the weights. For the kernel version, the penalty is typically on the squared norm of the weights of the linear model implicitly constructed in the feature space induced by the kernel. The key practical difference this makes is that the penalty for the RBF network depends on the centres of the RBF network (and hence on the sample of data used), whereas for the RBF kernel the induced feature space is the same regardless of the sample of data, so the penalty is a penalty on the function of the model, rather than on its parameterisation.

In other words, for both models we have $f(\vec{x}') = \sum_{i=1}^\ell \alpha_i \mathcal{K}(\vec{x}_i, \vec{x}')$.

For the RBF network approach, the training criterion is $L = \sum_{i=1}^\ell (y_i - f(\vec{x}_i))^2 + \lambda \|\vec{\alpha}\|^2$.

For the RBF kernel method, we have that $\mathcal{K}(\vec{x},\vec{x}') = \phi(\vec{x})\cdot\phi(\vec{x}')$, and $\vec{w} = \sum_{i=1}^\ell \alpha_i\phi(\vec{x}_i)$. This means that a squared-norm penalty on the weights of the model in the induced feature space, $\vec{w}$, can be written in terms of the dual parameters, $\vec{\alpha}$, as $\|\vec{w}\|^2 = \vec{\alpha}^T\mathbf{K}\vec{\alpha}$, where $\mathbf{K}$ is the matrix of pairwise evaluations of the kernel for all training patterns. The training criterion is then $L = \sum_{i=1}^\ell (y_i - f(\vec{x}_i))^2 + \lambda \vec{\alpha}^T\mathbf{K}\vec{\alpha}$. The only difference between the two models is the $\mathbf{K}$ in the regularisation term.

The key theoretical advantage of the kernel approach is that it allows you to interpret a non-linear model as a linear model following a fixed non-linear transformation that doesn't depend on the sample of data. Thus any statistical learning theory that exists for linear models automatically transfers to the non-linear version. However, this all breaks down as soon as you try to tune the kernel parameters, at which point we are back to pretty much the same position, theoretically speaking, as we were with RBF (and MLP) neural networks, so the theoretical advantage is perhaps not as great as we would like.

Is it likely to make any real difference in terms of performance? Probably not much. The "no free lunch" theorems suggest that no algorithm is a priori superior to all others, and the difference in the regularisation is fairly subtle, so if in doubt try both and choose the better according to, e.g., cross-validation.
When is distance covariance less appropriate than linear covariance?
I have tried to collect a few remarks on distance covariance based on my impressions from reading the references listed below. However, I do not consider myself an expert on this topic. Comments, corrections, suggestions, etc. are welcome. The remarks are (strongly) biased towards potential drawbacks, as requested in the original question. As I see it, the potential drawbacks are as follows:

1. The methodology is new. My guess is that this is the single biggest factor regarding lack of popularity at this time. The papers outlining distance covariance start in the mid 2000s and progress up to the present day. The paper cited above is the one that received the most attention (hype?), and it is less than three years old. In contrast, the theory and results on correlation and correlation-like measures have over a century of work behind them already.

2. The basic concepts are more challenging. Pearson's product-moment correlation, at an operational level, can be explained to college freshmen without a calculus background fairly readily. A simple "algorithmic" viewpoint can be laid out and the geometric intuition is easy to describe. In contrast, in the case of distance covariance, even the notion of sums of products of pairwise Euclidean distances is quite a bit more difficult, and the notion of covariance with respect to a stochastic process goes far beyond what could reasonably be explained to such an audience.

3. It is computationally more demanding. The basic algorithm for computing the test statistic is $O(n^2)$ in the sample size, as opposed to $O(n)$ for standard correlation metrics. For small sample sizes this is not a big deal, but for larger ones it becomes more important.

4. The test statistic is not distribution free, even asymptotically. One might hope that for a test statistic that is consistent against all alternatives, the distribution, at least asymptotically, might be independent of the underlying distributions of $X$ and $Y$ under the null hypothesis. This is not the case for distance covariance, as the distribution under the null depends on the underlying distributions of $X$ and $Y$ even as the sample size tends to infinity. It is true that the distributions are uniformly bounded by a $\chi^2_1$ distribution, which allows for the calculation of a conservative critical value.

5. The distance correlation is a one-to-one transform of $|\rho|$ in the bivariate normal case. This is not really a drawback, and might even be viewed as a strength. But if one accepts a bivariate normal approximation to the data, which can be quite common in practice, then little, if anything, is gained from using distance correlation in place of the standard procedures.

6. Unknown power properties. Being consistent against all alternatives essentially guarantees that distance covariance must have very low power against some alternatives. In many cases, one is willing to give up generality in order to gain additional power against particular alternatives of interest. The original papers show some examples in which they claim high power relative to standard correlation metrics, but I believe that, going back to (1.) above, its behavior against alternatives is not yet well understood.

To reiterate, this answer probably comes across as quite negative. But that is not the intent. There are some very beautiful and interesting ideas related to distance covariance, and its relative novelty also opens up research avenues for understanding it more fully.

References:

G. J. Szekely and M. L. Rizzo (2009), Brownian distance covariance, Ann. Appl. Statist., vol. 3, no. 4, 1236–1265.

G. J. Szekely, M. L. Rizzo and N. K. Bakirov (2007), Measuring and testing independence by correlation of distances, Ann. Statist., vol. 35, 2769–2794.

R. Lyons (2012), Distance covariance in metric spaces, Ann. Probab. (to appear).
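To make point 3 concrete, here is a minimal sketch (my own illustration, not from the answer or the cited papers) of the $O(n^2)$ sample statistic for univariate $X$ and $Y$: form the pairwise distance matrices, double-centre them, and average their elementwise product.

```python
import numpy as np

def dcov(x, y):
    """Sample distance covariance of two univariate samples (O(n^2))."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])  # pairwise distances within x
    b = np.abs(y[:, None] - y[None, :])  # pairwise distances within y
    # Double-centre each distance matrix (subtract row and column
    # means, add back the grand mean).
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return np.sqrt((A * B).mean())

rng = np.random.default_rng(0)
x = rng.normal(size=200)
print(dcov(x, x**2))                    # nonlinear dependence: clearly positive
print(dcov(x, rng.normal(size=200)))    # independent sample: near zero
```

The two nested $n \times n$ matrices are exactly the source of the quadratic cost the answer mentions; Pearson correlation needs only a single pass over the data.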
When is distance covariance less appropriate than linear covariance?
I could well be missing something, but just having a quantification of the nonlinear dependence between two variables doesn't seem to have much of a payoff. It won't tell you the shape of the relationship. It won't give you any means to predict one variable from the other. By analogy, when doing exploratory data analysis one sometimes uses a loess curve (locally weighted scatterplot smoother) as a first step towards seeing whether the data are best modeled with a straight line, a quadratic, a cubic, etc. But the loess in and of itself is not a very useful predictive tool. It's just a first approximation on the way to finding a workable equation to describe a bivariate shape. That equation, unlike the loess (or the distance covariance result), can form the basis of a confirmatory model.
Is Bayesian statistics genuinely an improvement over traditional (frequentist) statistics for behavioral research?
A quick response to the bulleted content:

1) Power / Type 1 error in a Bayesian analysis vs. a frequentist analysis. Asking about Type 1 error and power (i.e. one minus the probability of Type 2 error) implies that you can put your inference problem into a repeated-sampling framework. Can you? If you can't, then there isn't much choice but to move away from frequentist inference tools. If you can, and if the behavior of your estimator over many such samples is of relevance, and if you are not particularly interested in making probability statements about particular events, then there's no strong reason to move. The argument here is not that such situations never arise (certainly they do) but that they typically don't arise in the fields where these methods are applied.

2) The trade-off in complexity of the analysis (Bayesian seems more complicated) vs. the benefits gained. It is important to ask where the complexity goes. In frequentist procedures the implementation may be very simple, e.g. minimize the sum of squares, but the principles may be arbitrarily complex, typically revolving around which estimator(s) to choose, how to find the right test(s), and what to think when they disagree. For an example, see the still-lively discussion, picked up in this forum, of different confidence intervals for a proportion! In Bayesian procedures the implementation can be arbitrarily complex even in models that look like they 'ought' to be simple, usually because of difficult integrals, but the principles are extremely simple. It rather depends where you'd like the messiness to be.

3) Traditional statistical analyses are straightforward, with well-established guidelines for drawing conclusions. Personally I can no longer remember, but certainly my students never found these analyses straightforward, mostly due to the proliferation of principles described above. But the question is not really whether a procedure is straightforward, but whether it is closer to being right given the structure of the problem.

Finally, I strongly disagree that there are "well-established guidelines for drawing conclusions" in either paradigm, and I think that's a good thing. Sure, "find p<.05" is a clear guideline, but for what model, with what corrections, etc.? And what do I do when my tests disagree? Scientific or engineering judgement is needed here, as it is elsewhere.
Is Bayesian statistics genuinely an improvement over traditional (frequentist) statistics for behavioral research?
Bayesian statistics can be derived from a few logical principles. Try searching for "probability as extended logic" and you will find more in-depth analyses of the fundamentals. But basically, Bayesian statistics rests on three basic "desiderata", or normative principles:

1. The plausibility of a proposition is to be represented by a single real number.

2. The plausibility of a proposition is to have qualitative correspondence with "common sense". If, given initial plausibility $p(A|C^{(0)})$, we change from $C^{(0)}\rightarrow C^{(1)}$ such that $p(A|C^{(1)})>p(A|C^{(0)})$ (A becomes more plausible) and also $p(B|AC^{(0)})=p(B|AC^{(1)})$ (given A, B remains just as plausible), then we must have $p(AB|C^{(0)})\leq p(AB|C^{(1)})$ (A and B together must become at least as plausible) and $p(\overline{A}|C^{(1)})<p(\overline{A}|C^{(0)})$ (not-A must become less plausible).

3. The plausibility of a proposition is to be calculated consistently. This means: a) if a plausibility can be reasoned in more than one way, all answers must be equal; b) in two problems where we are presented with the same information, we must assign the same plausibilities; and c) we must take account of all the information that is available; we must not add information that isn't there, and we must not ignore information that we do have.

These three desiderata (along with the rules of logic and set theory) uniquely determine the sum and product rules of probability theory. Thus, if you would like to reason according to the above three desiderata, then you must adopt a Bayesian approach. You do not have to adopt the "Bayesian philosophy", but you must adopt the numerical results. The first three chapters of this book describe these in more detail and provide the proof.

And last but not least, the "Bayesian machinery" is the most powerful data-processing tool you have. This is mainly because of desideratum 3c), using all the information you have (which also explains why Bayes can be more complicated than non-Bayes). It can be quite difficult to decide "what is relevant" using your intuition. Bayes' theorem does this for you (and it does it without adding in arbitrary assumptions, also due to 3c).

EDIT: to address the question more directly (as suggested in the comment), suppose you have two hypotheses $H_0$ and $H_1$. You have a "false negative" loss $L_1$ (reject $H_0$ when it is true: type 1 error) and a "false positive" loss $L_2$ (accept $H_0$ when it is false: type 2 error). Probability theory says you should:

1. Calculate $P(H_0|E_1,E_2,\dots)$, where the $E_i$ are all the pieces of evidence related to the test: data, prior information, whatever you want the calculation to incorporate into the analysis.
2. Calculate $P(H_1|E_1,E_2,\dots)$.
3. Calculate the odds $O=\frac{P(H_0|E_1,E_2,\dots)}{P(H_1|E_1,E_2,\dots)}$.
4. Accept $H_0$ if $O > \frac{L_2}{L_1}$.

Although you don't really need to introduce the losses: if you just look at the odds, you will get one of three results: i) definitely $H_0$, $O \gg 1$; ii) definitely $H_1$, $O \ll 1$; or iii) "inconclusive", $O\approx 1$. Now if the calculation becomes "too hard", then you must either approximate the numbers or ignore some information. For an actual example with worked-out numbers, see my answer to this question.
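The four-step recipe can be sketched numerically. The hypotheses, data, and losses below are entirely hypothetical (a fair coin versus a heads-biased coin, with equal prior odds); they are my own illustration, not numbers from the answer.

```python
from math import comb

def posterior_odds(k, n, p0=0.5, p1=0.7, prior_odds=1.0):
    """Posterior odds O = P(H0 | data) / P(H1 | data) after seeing
    k heads in n flips, where H0: p = p0 and H1: p = p1."""
    like0 = comb(n, k) * p0**k * (1 - p0)**(n - k)
    like1 = comb(n, k) * p1**k * (1 - p1)**(n - k)
    return prior_odds * like0 / like1

O = posterior_odds(k=14, n=20)   # 14/20 heads looks like the biased coin
L1, L2 = 1.0, 1.0                # symmetric losses, for illustration
accept_H0 = O > L2 / L1
print(O, accept_H0)
```

With symmetric losses the rule reduces to "accept whichever hypothesis has higher posterior probability"; raising $L_2$ relative to $L_1$ makes the rule more reluctant to accept $H_0$, exactly as step 4 prescribes.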
Is Bayesian statistics genuinely an improvement over traditional (frequentist) statistics for behavi
Bayesian statistics can be derived from a few Logical principles. Try Searching "probability as extended logic" and you will find more in depth analysis of the fundamentals. But basically, Bayesian
Is Bayesian statistics genuinely an improvement over traditional (frequentist) statistics for behavioral research? Bayesian statistics can be derived from a few Logical principles. Try Searching "probability as extended logic" and you will find more in depth analysis of the fundamentals. But basically, Bayesian statistics rests on three basic "desiderata" or normative principles: The plausability of a proposition is to be represented by a single real number The plausability if a proposition is to have qualitative correspondance with "common sense". If given initial plausibility $p(A|C^{(0)})$, then change from $C^{(0)}\rightarrow C^{(1)}$ such that $p(A|C^{(1)})>p(A|C^{(0)})$ (A becomes more plausible) and also $p(B|A C^{(0)})=p(B|AC^{(1)})$ (given A, B remains just as plausible) then we must have $p(AB| C^{(0)})\leq p(AB|C^{(1)})$ (A and B must be at least as plausible) and $p(\overline{A}|C^{(1)})<p(\overline{A}|C^{(0)})$ (not A must become less plausible). The plausability of a proposition is to be calculated consistently. This means a) if a plausability can be reasoned in more than 1 way, all answers must be equal; b) In two problems where we are presented with the same information, we must assign the same plausabilities; and c) we must take account of all the information that is available. We must not add information that isn't there, and we must not ignore information which we do have. These three desiderata (along with the rules of logic and set theory) uniquely determine the sum and product rules of probability theory. Thus, if you would like to reason according to the above three desiderata, they you must adopt a Bayesian approach. You do not have to adopt the "Bayesian Philosophy" but you must adopt the numerical results. The first three chapters of this book describe these in more detail, and provide the proof. And last but not least, the "Bayesian machinery" is the most powerful data processing tool you have. 
This is mainly because of the desiderata 3c) using all the information you have (this also explains why Bayes can be more complicated than non-Bayes). It can be quite difficult to decide "what is relevant" using your intuition. Bayes theorem does this for you (and it does it without adding in arbitrary assumptions, also due to 3c). EDIT: to address the question more directly (as suggested in the comment), suppose you have two hypothesis $H_0$ and $H_1$. You have a "false negative" loss $L_1$ (Reject $H_0$ when it is true: type 1 error) and "false positive" loss $L_2$ (Accept $H_0$ when it is false: type 2 error). probability theory says you should: Calculate $P(H_0|E_1,E_2,\dots)$, where $E_i$ is all the pieces of evidence related to the test: data, prior information, whatever you want the calculation to incorporate into the analysis Calculate $P(H_1|E_1,E_2,\dots)$ Calculate the odds $O=\frac{P(H_0|E_1,E_2,\dots)}{P(H_1|E_1,E_2,\dots)}$ Accept $H_0$ if $O > \frac{L_2}{L_1}$ Although you don't really need to introduce the losses. If you just look at the odds, you will get one of three results: i) definitely $H_0$, $O>>1$, ii) definitely $H_1$, $O<<1$, or iii) "inconclusive" $O\approx 1$. Now if the calculation becomes "too hard", then you must either approximate the numbers, or ignore some information. For a actual example with worked out numbers see my answer to this question
13,407
Is Bayesian statistics genuinely an improvement over traditional (frequentist) statistics for behavioral research?
I am not familiar with Bayesian statistics myself, but I do know that the Skeptics' Guide to the Universe Episode 294 has an interview with Eric-Jan Wagenmakers where they discuss Bayesian statistics. Here is a link to the podcast: http://www.theskepticsguide.org/archive/podcastinfo.aspx?mid=1&pid=294
13,408
Is my weatherman accurate?
In effect you are thinking of a model in which the true chance of rain, $p$, is a function of the predicted chance $q$: $p = p(q)$. Each time a prediction is made, you observe one realization of a Bernoulli variate having probability $p(q)$ of success. This is a classic logistic regression setup if you are willing to model the true chance as a linear combination of basis functions $f_1, f_2, \dots, f_k$; that is, the model says $$\operatorname{Logit}(p) = b_0 + b_1 f_1(q) + b_2 f_2(q) + \dots + b_k f_k(q) + e$$ with iid errors $e$. If you're agnostic about the form of the relationship (although if the weatherman is any good $p(q) - q$ should be reasonably small), consider using a set of splines for the basis. The output, as usual, consists of estimates of the coefficients and an estimate of the variance of $e$. Given any future prediction $q$, just plug the value into the model with the estimated coefficients to obtain an answer to your question (and use the variance of $e$ to construct a prediction interval around that answer if you like). This framework is flexible enough to include other factors, such as the possibility of changes in the quality of predictions over time. It also lets you test hypotheses, such as whether $p = q$ (which is what the weatherman implicitly claims).
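As a small illustration in Python (the coefficient values and the choice of the single basis function $f_1(q) = \operatorname{Logit}(q)$ are my own assumptions, not from the answer), here is how a fitted model of this form turns a stated forecast $q$ into an estimated true chance of rain:

```python
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

# Hypothetical fitted coefficients for the single basis f1(q) = logit(q):
# Logit(p) = b0 + b1 * logit(q).  A perfectly honest forecaster would give
# b0 = 0 and b1 = 1, so that p = q.
b0, b1 = -0.2, 1.1

def true_chance(q):
    """Estimated true chance of rain given the predicted chance q."""
    return inv_logit(b0 + b1 * logit(q))

p_hat = true_chance(0.70)   # the model's answer when the forecast says 70%
```

Testing the hypothesis $b_0 = 0, b_1 = 1$ in this parameterization is exactly the test of $p = q$ mentioned at the end of the answer.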
13,409
Is my weatherman accurate?
Comparison of probability forecasts for a binary event (or discrete random variable) can be done with the Brier score, but you can also use a ROC curve, since any probability forecast of this type can be transformed into a discrimination procedure with a varying threshold. Indeed, you can say "it will rain" if your probability is greater than $\tau$ and evaluate the missed, false discovery, true discovery and true negative rates for different values of $\tau$. You should take a look at how the European Centre for Medium-Range Weather Forecasts (ECMWF) does it.
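A rough Python sketch of both ideas (the function names and data are mine): the Brier score of a set of forecasts, and the confusion counts you would trace out for the ROC curve as $\tau$ varies.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    n = len(forecasts)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n

def confusion_at(tau, forecasts, outcomes):
    """(true pos, false pos, false neg, true neg) when we call 'rain' if f > tau."""
    tp = sum(1 for f, o in zip(forecasts, outcomes) if f > tau and o == 1)
    fp = sum(1 for f, o in zip(forecasts, outcomes) if f > tau and o == 0)
    fn = sum(1 for f, o in zip(forecasts, outcomes) if f <= tau and o == 1)
    tn = sum(1 for f, o in zip(forecasts, outcomes) if f <= tau and o == 0)
    return tp, fp, fn, tn

forecasts = [0.9, 0.8, 0.3, 0.1, 0.6]
outcomes  = [1,   1,   0,   0,   1]     # 1 = it actually rained
bs = brier_score(forecasts, outcomes)   # 0 is perfect; "always say 50%" gives 0.25
```

Sweeping `confusion_at` over a grid of thresholds and plotting true-positive against false-positive rates gives the ROC curve.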
13,410
Is my weatherman accurate?
When the forecast says "X percent chance of rain in (area)", it means that the numerical weather model has indicated rain in X percent of the area, for the time interval in question. For example, it would normally be accurate to predict "100 percent chance of rain in North America". Bear in mind that the models are good at predicting dynamics and poor at predicting thermodynamics.
13,411
Is my weatherman accurate?
The Brier score approach is very simple and the most directly applicable way to verify the accuracy of a predicted outcome versus a binary event. Don't rely on just formulas... plot the scores for different periods of time, data, errors, [weighted] rolling averages of data, errors... it's tough to say what visual analysis might reveal... once you think you see something, you will better know what kind of hypothesis test to perform, so don't commit to a test until AFTER you look at the data. The Brier score inherently assumes stability of the variation/underlying distributions of the weather and of the technology driving the forecasting models, linearity, no bias, no change in bias... it assumes that the same general level of accuracy/inaccuracy is consistent. As climate changes in ways that are not yet understood, the accuracy of weather predictions would decrease; conversely, the scientists feeding information to the weatherman have more resources, more complete models, and more computing power, so perhaps the accuracy of the predictions would increase. Looking at the errors would tell you something about the stability, linearity and bias of the forecasts... you may not have enough data to see trends; you may learn that stability, linearity and bias are not an issue. You may learn that weather forecasts are getting more accurate... or not.
13,412
Is my weatherman accurate?
How about just binning the given predictions and taking the observed fractions as your estimate for each bin? You can generalise this to a continuous model by weighting all the observations around your value of interest (say, the prediction for tomorrow) by a Gaussian and seeing what the weighted average is. You can guess a width that gets you a given fraction of your data (or, say, never fewer than 100 points for a good estimate). Alternatively, use a method such as cross-validation or maximum likelihood to get the Gaussian width.
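The Gaussian-weighting idea can be sketched in a few lines of Python (the function name, width, and data are all mine, for illustration):

```python
import math

def weighted_rain_rate(q0, predictions, outcomes, width=0.1):
    """Gaussian-kernel-weighted average of past outcomes around forecast q0."""
    weights = [math.exp(-0.5 * ((q - q0) / width) ** 2) for q in predictions]
    return sum(w * o for w, o in zip(weights, outcomes)) / sum(weights)

# Invented history of (forecast, did-it-rain) pairs:
predictions = [0.2, 0.25, 0.3, 0.7, 0.75, 0.8]
outcomes    = [0,   0,    1,   1,   1,    1]

est = weighted_rain_rate(0.75, predictions, outcomes)   # close to 1 here
```

This is a kernel-smoothed version of the binning estimate: shrinking `width` approaches narrow bins, while a large `width` just returns the overall rain rate.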
13,413
Is my weatherman accurate?
Do you want to know if his forecast is more accurate than another forecast? If so, you can look at basic accuracy metrics for probabilistic classification like cross-entropy, precision/recall, ROC curves, and the F1 score. Determining if the forecast is objectively good is a different matter. One option is to look at calibration. Of all the days where he said that there would be a 90% chance of rain, did roughly 90% of those days have rain? Take all of the days where he has a forecast and then bucket them by his estimate of the probability of rain. For each bucket, calculate the percentage of the days where rain actually occurred. Then for each bucket plot the actual probability of rain against his estimate of the probability of rain. The points will fall close to the diagonal line (observed probability equal to forecast probability) if the forecast is well calibrated.
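The bucketing procedure can be sketched in a few lines of Python (names and data are mine):

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes, nbins=10):
    """Bucket forecasts and report the observed rain fraction per bucket."""
    buckets = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        idx = min(int(f * nbins), nbins - 1)   # bucket index 0 .. nbins-1
        buckets[idx].append(o)
    # bucket lower edge -> (observed rain fraction, number of days)
    return {i / nbins: (sum(v) / len(v), len(v))
            for i, v in sorted(buckets.items())}

# Invented forecast history: four "10%" days and five "90%" days.
forecasts = [0.1, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9]
outcomes  = [0,   0,   0,   1,   1,   1,   1,   1,   0]

table = calibration_table(forecasts, outcomes)
```

Plotting the bucket edges against the observed fractions gives the calibration curve; a well-calibrated forecaster lies near the diagonal.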
13,414
RMSE vs Standard deviation in population
TLDR; While the formulas may be similar, RMSE and standard deviation have different usage. You are right that both standard deviation and RMSE are similar because they are square roots of averaged squared differences between some values. Nonetheless, they are not the same. Standard deviation is used to measure the spread of data around the mean, while RMSE is used to measure the distance between some values and the predictions for those values. RMSE is generally used to measure the error of prediction, i.e. how much the predictions you made differ from the observed data. If you use the mean as your prediction for all the cases, then RMSE and SD will be exactly the same. As a sidenote, you may notice that the mean is the value that minimizes the total squared distance to all the values in the sample. This is the reason why we use standard deviation along with it -- they are related species!
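A quick Python check of the claim (data and names are mine): with the mean used as the prediction for every case, RMSE reduces to the population standard deviation, and any other constant prediction gives a larger RMSE.

```python
import math

def rmse(predictions, observed):
    """Root mean squared error between predictions and observed values."""
    n = len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predictions, observed)) / n)

data = [4.0, 7.0, 9.0, 12.0]
mu = sum(data) / len(data)            # mean = 8.0
sd = rmse([mu] * len(data), data)     # equals the population SD, sqrt(8.5)
```

The last sidenote of the answer is visible here too: predicting, say, 9.0 everywhere yields a strictly larger RMSE than predicting the mean.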
13,415
RMSE vs Standard deviation in population
This will make it a bit clearer. RMSE is calculated between two sets, e.g. a set and a predicted set, to calculate the error, e.g. price vs predicted price:

price  predicted
10     12
12     10
13     17

$$ {RMSE}=\sqrt{\frac{\sum_{i=1}^N{(F_i - O_i)^2}}{N}} $$ $F$ = forecasts (expected values or unknown results), $O$ = observed values (known results). We take the difference in each row, square it, sum the squares, divide by $N$ and finally take the root... (or you can use a single fixed predicted value and subtract it from all rows). RMSD (the standard deviation) uses a single set to calculate the spread, not against a predicted set but against its own mean:

price
10
12
13

$$ {RMSD}=\sqrt{\frac{\sum_{i=1}^N{(x_i - \mu)^2}}{N}} $$ where $\mu$ is the average value.
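The worked example above, spelled out in Python:

```python
import math

# price vs predicted price, as in the small table above
observed  = [10, 12, 13]
predicted = [12, 10, 17]

diffs_sq = [(f - o) ** 2 for f, o in zip(predicted, observed)]  # [4, 4, 16]
rmse = math.sqrt(sum(diffs_sq) / len(observed))                 # sqrt(24/3)

# RMSD / standard deviation of the single "price" column:
mu = sum(observed) / len(observed)                              # 35/3
rmsd = math.sqrt(sum((x - mu) ** 2 for x in observed) / len(observed))
```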
13,416
Interpretation of .L & .Q output from a negative binomial GLM with categorical data
Your variables aren't just coded as factors (to make them categorical), they are coded as ordered factors. Then, by default, R fits a series of polynomial functions to the levels of the variable. The first is linear (.L), the second is quadratic (.Q), the third (if you had enough levels) would be cubic, etc. R will fit one fewer polynomial function than the number of levels in your variable. For example, if you have only two levels, only the linear trend would be fit. Moreover, the polynomial bases used are orthogonal. (For what it's worth, none of this is specific to R—or to negative binomial models—all software and types of regression models would do the same.) Focusing specifically on R, if you wanted your variables to be coded as ordered or unordered, you would use ?factor:

my.variable <- factor(my.variable, ordered=TRUE)   # an ordered factor
my.variable <- factor(my.variable, ordered=FALSE)  # an unordered factor
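For the curious, here is a rough Python sketch (not R's actual code) of how orthonormal polynomial contrasts like those produced by R's contr.poly can be built by Gram-Schmidt orthogonalization of the raw polynomials 1, x, x^2, ... over the level indices; for 3 levels it reproduces the familiar .L and .Q columns:

```python
import math

def ortho_poly_contrasts(k):
    """Orthonormal polynomial contrasts for k ordered levels (Gram-Schmidt)."""
    xs = list(range(1, k + 1))
    cols = [[x ** d for x in xs] for d in range(k)]   # columns 1, x, x^2, ...
    basis = []
    for col in cols:
        v = [float(c) for c in col]
        for b in basis:
            dot = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - dot * bi for vi, bi in zip(v, b)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        basis.append([vi / norm for vi in v])
    return basis[1:]   # drop the constant column; what's left is .L, .Q, ...

L, Q = ortho_poly_contrasts(3)
# L is roughly (-0.707, 0, 0.707): a steady increase across the levels.
# Q is roughly (0.408, -0.816, 0.408): a "bend" (dip in the middle).
```

The .L coefficient in the model output thus measures a linear trend across the ordered levels, and .Q a curvature on top of it.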
13,417
Interpretation of LASSO regression coefficients
Are the LASSO coefficients interpreted in the same method as logistic regression? Let me rephrase: Are the LASSO coefficients interpreted in the same way as, for example, ordinary (unpenalized) maximum likelihood coefficients in a logistic regression? LASSO (a penalized estimation method) aims at estimating the same quantities (model coefficients) as unpenalized maximum likelihood. The model is the same, and the interpretation remains the same. The numerical values from LASSO will normally differ from those from unpenalized maximum likelihood: some will be closer to zero, others will be exactly zero. If a sensible amount of penalization has been applied, the LASSO estimates will lie closer to the true values than the unpenalized maximum likelihood estimates, which is a desirable result. Would it be appropriate to use the features selected from LASSO in logistic regression? There is no inherent problem with that, but you could use LASSO not only for feature selection but also for coefficient estimation. As I mention above, LASSO estimates may be more accurate than unpenalized maximum likelihood estimates.
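A minimal Python illustration of why LASSO both shrinks and selects (the numbers are made up, and this is the closed-form special case of an orthonormal design, not a general LASSO solver): the penalized estimate is the soft-thresholded version of the unpenalized one.

```python
def soft_threshold(beta_unpen, lam):
    """LASSO coefficient in the orthonormal-design special case."""
    if beta_unpen > lam:
        return beta_unpen - lam
    if beta_unpen < -lam:
        return beta_unpen + lam
    return 0.0

# Invented unpenalized estimates and penalty:
unpenalized = [2.5, -0.75, 0.3, -0.1]
lasso = [soft_threshold(b, 0.5) for b in unpenalized]
# Large effects survive (shrunk toward zero); small ones become exactly zero,
# which is the feature selection.
```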
13,418
Making sense of independent component analysis
Here's my attempt. Background Consider the following two cases. You are a private eye at a party. Suddenly, you see one of your old clients talking to someone, and you can hear some of the words but not quite, because you also hear someone else who's next to him, participating in an unrelated discussion about sports. You don't want to come closer - he'll spot you. You decide to take your partner's phone (who's busy convincing the bartender non-alcoholic beer is great) and plant it about 10 meters away from you. The phone is recording, and it records the old client's talk as well as the interfering sports guy. You take your own phone and start recording as well, from where you're standing. After about 15 minutes you go home with two recordings: one from your position, and the other from about 10 meters away. Both recordings contain your old client and Mr. Sporty, but on each recording, one of the speakers is at a slightly different volume relative to the other (and this relative volume is kept constant during the entire talk for each recording, because fortunately none of the participants moved around the room). You take a picture of a cute Labrador Retriever dog you see outside the window. You check out the image, and unfortunately you see a reflection from the window that's between you and the dog. You can't open the window (it's one of those, yes) and you can't go outside because you're afraid he'll run away. So you take (for some unclear reason) another image, from a slightly different position. You still see the reflection and the dog, but they are in different positions now, since you're taking the picture from a different place. Also note that the position changed uniformly for each pixel in the image, because the window is flat and not concave/convex. The question is, in both cases, how to restore the conversation (in 1.) 
or the image of the dog (in 2.), given the two recordings (or images) that contain the same two "sources" but with slightly different relative contributions from each. Surely my educated grandchild can make sense of this! Intuitive solution How can we, at least in principle, get back the image of the dog from a mixture? Each pixel contains values that are a sum of two values! Well, if each pixel were given without any other pixels, our intuition would be correct - we would not have been able to guess the exact relative contributions to each of the pixels. However, we are given a set of pixels (or points in time in the case of the recording), that we know hold the same relations. For example, if on the first image the dog is always twice as strong as the reflection, and on the second image it is just the opposite, then we might be able to get the correct contributions after all. And then, we can come up with the correct way to subtract the two images at hand so that the reflection is exactly cancelled! [Mathematically, this means finding the inverse of the mixing matrix.] Diving into details Let's say you have a mixture of two signals, $$Y_1=a_{11}S_1+a_{12}S_2 \\ Y_2 = a_{21}S_1 + a_{22} S_2 $$ and let's say you would like to obtain back $S_1$ as a function of the two mixtures, $Y_1,Y_2$. And let's also assume that you want a linear combination: $S_1=b_{11} Y_1 + b_{12} Y_2$. So, all you need to do is to find the best vector $(b_{11},b_{12})$ and there you have it. Similarly for $S_2$ and $(b_{21},b_{22})$. But how can you find it for general signals? They may look similar, have similar statistics, etc. So let's assume they're independent. That's reasonable if you have an interfering signal, such as noise, or if the two signals are images, where the interfering signal may be a reflection of something else (and you took two images from different angles). Now, we know that $Y_1$ and $Y_2$ are dependent. 
Since we may not recover $S_1,S_2$ exactly, denote our estimates of these signals as $X_1,X_2$, respectively. How can we make $X_1,X_2$ be as close as possible to $S_1,S_2$? Since we know the latter are independent, we might want to make $X_1,X_2$ as independent as possible, by jiggling the values of $b_{ij}$. After all, if the matrix $\{a_{ij}\}$ is invertible, we can find some matrix $\{b_{ij}\}$ that inverts the mixing operation (and if it's not invertible, we can get close), and if we make them independent, there is a good chance we restore our $S_i$ signals. If you are convinced we need to find such $\{b_{ij}\}$ that makes $X_1,X_2$ independent, we now need to ask how to do that. So first consider this: if we sum up several independent, non-Gaussian signals, we make the sum "more Gaussian" than the components. Why? Due to the central limit theorem, and you can also think about the density of the sum of two indep. variables, which is the convolution of the densities. If we sum several indep. Bernoulli variables, the empirical distribution will more and more resemble a Gaussian shape. Will it be a true Gaussian? Probably not (no pun intended), but we can measure the Gaussianity of a signal by how much it resembles a Gaussian distribution. For instance, we can measure its excess kurtosis. If it's really high, it is probably less Gaussian than one with the same variance but with excess kurtosis close to zero. Therefore, if we were to find the mixing weights, we might try to find $\{b_{ij}\}$ by formulating an optimization problem that, at each iteration, makes the vector of $X_1,X_2$ slightly less Gaussian. Note that it may not be truly Gaussian at any stage; we just want to reduce the Gaussianity. Hopefully, if we don't get stuck in local minima, we will finally get the backwards mixing matrix $\{b_{ij}\}$ and get our indep. signals back. Of course, this adds another assumption - the two signals need to be non-Gaussian to begin with.
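The "sums become more Gaussian" claim can be checked numerically. A small Python sketch (all names mine), using the fact that a uniform variable has excess kurtosis -1.2 while a sum of two independent uniforms (a triangular distribution) has -0.6, i.e. closer to the Gaussian value of 0:

```python
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis: 0 for a Gaussian, negative for flatter shapes."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

random.seed(0)
u1 = [random.random() for _ in range(200000)]
u2 = [random.random() for _ in range(200000)]
mix = [a + b for a, b in zip(u1, u2)]   # a "mixture" of two sources

k_single = excess_kurtosis(u1)          # theory: -1.2
k_mix = excess_kurtosis(mix)            # theory: -0.6, i.e. more Gaussian
```

This is exactly why the ICA optimization can un-mix by pushing the estimated components away from Gaussianity.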
Making sense of independent component analysis
Here's my attempt. Background Consider the following two cases. You are a private eye at a party. Suddenly, you see one of your old clients talking to someone, and you can hear some of the words but
Making sense of independent component analysis Here's my attempt. Background Consider the following two cases. You are a private eye at a party. Suddenly, you see one of your old clients talking to someone, and you can hear some of the words but not quite, because you also hear someone else who's next to him, participating in an unrelated discussion about sports. You don't want to come closer - he'll spot you. You decide to take your partner's phone (who's busy convincing the bartender non-alcoholic beer is great) and plant it about 10 meters next to you. The phone is recording, and the phone also records the old client's talk as well as the interfering sports guy. You take your own phone and start recording as well, from where you're standing. After about 15 minutes you go home with two recordings: one from your position, and the other from about 10 meters away. Both recordings contain your old client and Mr. Sporty, but on each recording, one of the speakers is of a slightly different volume relative to the other (and this relative volume is kept constant during the entire talk for each recording, because fortunately no one of the participants moved around the room). You take a picture of a cute Labrador Retriever dog you see outside the window. You check-out the image, and unfortunately you see a reflection from the window that's between you and the dog. You can't open the window (it's one of those, yes) and you can't go outside because you're afraid he'll run away. So you take (for some unclear reason) another image, from a slightly different position. You still see the reflection and the dog, but they are in different positions now, since you're taking the picture from a different place. Also note that the position changed uniformly for each pixel in the image, because the window is flat and not concave/convex. The question is, in both cases, how to restore the conversation (in 1.) 
or the image of the dog (in 2.), given the two recordings/images that contain the same two "sources" but with slightly different relative contributions from each. Surely my educated grandchild can make sense of this! Intuitive solution How can we, at least in principle, get back the image of the dog from a mixture? Each pixel contains a value that is a sum of two values! Well, if each pixel were given without any other pixels, our intuition would be correct - we would not be able to guess the exact relative contributions of each. However, we are given a set of pixels (or points in time in the case of the recording) that we know hold the same relations. For example, if on the first image the dog is always twice as strong as the reflection, and on the second image it is just the opposite, then we might be able to get the correct contributions after all. And then, we can come up with the correct way to subtract the two images at hand so that the reflection is exactly cancelled! [Mathematically, this means finding the inverse of the mixing matrix.] Diving into details Let's say you have a mixture of two signals, $$Y_1=a_{11}S_1+a_{12}S_2 \\ Y_2 = a_{21}S_1 + a_{22} S_2 $$ and let's say you would like to obtain back $S_1$ as a function of the two mixtures, $Y_1,Y_2$. And let's also assume that you want a linear combination: $S_1=b_{11} Y_1 + b_{12} Y_2$. So, all you need to do is to find the best vector $(b_{11},b_{12})$ and there you have it. Similarly for $S_2$ and $(b_{21},b_{22})$. But how can you find it for general signals? They may look similar, have similar statistics, etc. So let's assume the sources are independent. That's reasonable if you have an interfering signal, such as noise, or if the two signals are images, where the interfering signal may be a reflection of something else (and you took two images from different angles). Now, we know that $Y_1$ and $Y_2$ are dependent.
Since we may not recover $S_1,S_2$ exactly, denote our estimates of these signals by $X_1,X_2$, respectively. How can we make $X_1,X_2$ as close as possible to $S_1,S_2$? Since we know the latter are independent, we might want to make $X_1,X_2$ as independent as possible, by jiggling the values of $b_{ij}$. After all, if the matrix $\{a_{ij}\}$ is invertible, we can find some matrix $\{b_{ij}\}$ that inverts the mixing operation (and if it's not invertible, we can get close), and if we make them independent, there's a good chance we restore our $S_i$ signals. If you are convinced we need to find such $\{b_{ij}\}$ that makes $X_1,X_2$ independent, we now need to ask how to do that. So first consider this: if we sum up several independent, non-Gaussian signals, we make the sum "more Gaussian" than the components. Why? Due to the central limit theorem; you can also think about the density of the sum of two indep. variables, which is the convolution of the densities. If we sum several indep. Bernoulli variables, the empirical distribution will more and more resemble a Gaussian shape. Will it be a true Gaussian? Probably not (no pun intended), but we can measure the Gaussianity of a signal by how much it resembles a Gaussian distribution. For instance, we can measure its excess kurtosis. If its magnitude is really high, the signal is probably less Gaussian than one with the same variance but with excess kurtosis close to zero. Therefore, to find the mixing weights, we might formulate an optimization problem that, at each iteration, makes the vector $(X_1,X_2)$ slightly less Gaussian. Mind that it may not be truly Gaussian at any stage; we just want to reduce the Gaussianity. Hopefully, if we don't get stuck in local minima, we finally get the backwards mixing matrix $\{b_{ij}\}$ and recover our indep. signals. Of course, this adds another assumption - the signals need to be non-Gaussian to begin with.
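The "reduce Gaussianity" recipe above can be sketched in a few lines of numpy. Everything in this sketch is illustrative and not from the original answer: the 2x2 mixing matrix is hypothetical, the sources are uniform noise (non-Gaussian, with excess kurtosis -1.2), and the optimization is a naive grid search over rotation angles after whitening, using the absolute excess kurtosis as the contrast:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two independent non-Gaussian sources: uniform noise has excess kurtosis -1.2
n = 20000
S = rng.uniform(-1, 1, size=(2, n))
A = np.array([[2.0, 1.0], [1.0, 2.0]])      # hypothetical mixing matrix {a_ij}
Y = A @ S                                   # the observed mixtures Y_1, Y_2

# Whiten the mixtures: afterwards only an unknown rotation remains
Yc = Y - Y.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Yc))
Z = E @ np.diag(d ** -0.5) @ E.T @ Yc

def excess_kurtosis(x):
    x = (x - x.mean()) / x.std()
    return np.mean(x ** 4) - 3

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# "Reduce Gaussianity": pick the rotation whose outputs are least Gaussian
angles = np.linspace(0, np.pi / 2, 181)
best = max(angles,
           key=lambda t: sum(abs(excess_kurtosis(row)) for row in rotate(t) @ Z))
X = rotate(best) @ Z                        # estimated sources X_1, X_2
```

Real implementations (e.g. FastICA) use fixed-point updates rather than a grid search, but the contrast is the same idea: push the unmixed components as far from Gaussianity as possible, which recovers the sources up to sign and permutation.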
13,419
Making sense of independent component analysis
Very simple. Imagine you, your grandma and the family members are gathered around the table. Larger groups of people tend to break up into subgroups, each with its own chat topic. Your grandma sits there and hears the noise of all the people speaking, which appears to be just a cacophony. If she turns to one group, she can clearly isolate the discussion in the teens/youth group; if she turns to the other group, she can isolate the adults' chat. To summarize, ICA is about isolating or extracting a specific signal (one person or a group of people talking) from a mixture of signals (the crowd talking).
13,420
Sampling from von Mises-Fisher distribution in Python?
Finally, I got it. Here is my answer. I finally put my hands on Directional Statistics (Mardia and Jupp, 1999) and on the Ulrich-Wood algorithm for sampling. I post here what I understood from it, i.e. my code (in Python). The rejection sampling scheme:

def rW(n, kappa, m):
    dim = m - 1
    b = dim / (np.sqrt(4*kappa*kappa + dim*dim) + 2*kappa)
    x = (1-b) / (1+b)
    c = kappa*x + dim*np.log(1 - x*x)
    y = []
    for i in range(0, n):
        done = False
        while not done:
            z = sc.stats.beta.rvs(dim/2, dim/2)
            w = (1 - (1+b)*z) / (1 - (1-b)*z)
            u = sc.stats.uniform.rvs()
            if kappa*w + dim*np.log(1 - x*w) - c >= np.log(u):
                done = True
        y.append(w)
    return y

Then, the desired sample is $v \sqrt{1-w^2} + w \mu$, where $w$ is the result from the rejection sampling scheme, and $v$ is uniformly sampled over the hypersphere.

def rvMF(n, theta):
    dim = len(theta)
    kappa = np.linalg.norm(theta)
    mu = theta / kappa
    result = []
    for sample in range(0, n):
        w = rW(1, kappa, dim)[0]  # one w per sample (rW returns a list)
        v = np.random.randn(dim)
        v = v / np.linalg.norm(v)
        result.append(np.sqrt(1 - w**2)*v + w*mu)
    return result

And, for effectively sampling with this code, here is an example:

import numpy as np
import scipy as sc
import scipy.stats

n = 10
kappa = 100000
direction = np.array([1, -1, 1])
direction = direction / np.linalg.norm(direction)
res_sampling = rvMF(n, kappa * direction)
13,421
Sampling from von Mises-Fisher distribution in Python?
(I apologize for the formatting here; I created an account just to reply to this question, since I was also trying to figure this out recently.) The answer of mic isn't quite right: the vector $v$ needs to come from $S^{p-2}$ in the tangent space to $\mu$, that is, $v$ should be a unit vector orthogonal to $\mu$. Otherwise, the vector $v\sqrt{1-w^2}+w\mu$ will not have norm one. You can see this in the example provided by mic. To fix this, use something like:

import scipy.linalg as la

def sample_tangent_unit(mu):
    mat = np.matrix(mu)
    if mat.shape[1] > mat.shape[0]:
        mat = mat.T
    U, _, _ = la.svd(mat)
    nu = np.matrix(np.random.randn(mat.shape[0])).T
    x = np.dot(U[:, 1:], nu[1:, :])
    return x / la.norm(x)

and replace

v = np.random.randn(dim)
v = v / np.linalg.norm(v)

in mic's example with a call to v = sample_tangent_unit(mu).
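Combining the two answers, a self-contained corrected sampler might look as follows. This is my own sketch, not code from either answer: the tangent direction is obtained by projecting a Gaussian draw onto the orthogonal complement of $\mu$ rather than via the SVD, which yields the same uniform distribution on the tangent sphere. With $v \perp \mu$ and both unit vectors, $\|\sqrt{1-w^2}\,v + w\mu\| = 1$ by construction:

```python
import numpy as np

def sample_w(kappa, dim, rng):
    # Ulrich/Wood rejection sampler for the component along mu; dim = p - 1
    b = dim / (np.sqrt(4 * kappa ** 2 + dim ** 2) + 2 * kappa)
    x0 = (1 - b) / (1 + b)
    c = kappa * x0 + dim * np.log(1 - x0 ** 2)
    while True:
        z = rng.beta(dim / 2.0, dim / 2.0)
        w = (1 - (1 + b) * z) / (1 - (1 - b) * z)
        if kappa * w + dim * np.log(1 - x0 * w) - c >= np.log(rng.uniform()):
            return w

def sample_tangent_unit(mu, rng):
    # Uniform unit vector orthogonal to mu (mu is assumed to be a unit vector)
    v = rng.standard_normal(mu.size)
    v -= (v @ mu) * mu
    return v / np.linalg.norm(v)

def rvmf(n, mu, kappa, rng):
    # n draws from vMF(mu, kappa) on the unit sphere in R^p
    p = mu.size
    out = np.empty((n, p))
    for i in range(n):
        w = sample_w(kappa, p - 1, rng)
        out[i] = np.sqrt(1 - w ** 2) * sample_tangent_unit(mu, rng) + w * mu
    return out

rng = np.random.default_rng(0)
mu = np.array([1.0, -1.0, 1.0])
mu /= np.linalg.norm(mu)
samples = rvmf(500, mu, 100.0, rng)
```

For large $\kappa$ the samples cluster tightly around $\mu$, and every sample has unit norm, which is exactly the property the correction above restores.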
13,422
Why $\sqrt{n}$ in the definition of asymptotic normality?
We don't get to choose here. The "normalizing" factor is, in essence, a "variance-stabilizing to something finite" factor, chosen so that the expression does not go to zero or to infinity as the sample size goes to infinity, but maintains a distribution at the limit. So it has to be whatever it has to be in each case. Of course, it is interesting that in many cases it emerges that it has to be $\sqrt n$ (but see also @whuber's comment below). A standard example where the normalizing factor has to be $n$, rather than $\sqrt n$, is when we have a model $$y_t = \beta y_{t-1} + u_t, \;\; y_0 = 0,\; t=1,...,T$$ with $u_t$ white noise, and we estimate the unknown $\beta$ by Ordinary Least Squares. If it so happens that the true value of the coefficient is $|\beta|<1$, then the OLS estimator is consistent and converges at the usual $\sqrt n$ rate. But if instead the true value is $\beta=1$ (i.e. we have in reality a pure random walk), then the OLS estimator is consistent but will converge "faster", at rate $n$ (this is sometimes called a "superconsistent" estimator, since, I guess, so many estimators converge at rate $\sqrt n$). In this case, to obtain its (non-normal) asymptotic distribution, we have to scale $(\hat \beta - \beta)$ by $n$ (if we scale only by $\sqrt n$ the expression will go to zero). Hamilton ch. 17 has the details.
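A quick simulation illustrates the two rates. This is my own sketch (not from Hamilton): it estimates $\beta$ in $y_t = \beta y_{t-1} + u_t$ by OLS without intercept and compares the spread of the $\sqrt T$-scaled estimation error across sample sizes. For $|\beta|<1$ that spread is stable; under $\beta=1$ it collapses towards zero, because the estimator actually converges at rate $T$:

```python
import numpy as np

def ols_slopes(beta, T, reps, rng):
    # Simulate `reps` paths of y_t = beta*y_{t-1} + u_t and return the OLS slopes
    u = rng.standard_normal((reps, T))
    y = np.empty((reps, T))
    y[:, 0] = u[:, 0]
    for t in range(1, T):
        y[:, t] = beta * y[:, t - 1] + u[:, t]
    num = np.sum(y[:, :-1] * y[:, 1:], axis=1)
    den = np.sum(y[:, :-1] ** 2, axis=1)
    return num / den

rng = np.random.default_rng(1)
for beta in (0.5, 1.0):
    spreads = [np.std(np.sqrt(T) * (ols_slopes(beta, T, 3000, rng) - beta))
               for T in (100, 800)]
    print(beta, [round(s, 3) for s in spreads])
```

With $\beta = 0.5$ both spreads stay close to $\sqrt{1-\beta^2} \approx 0.87$; with $\beta = 1$ the spread at $T = 800$ is roughly $\sqrt 8$ times smaller than at $T = 100$, i.e. $\sqrt T$ is the wrong scaling there, and it is $T(\hat\beta - 1)$ that has a nondegenerate (Dickey-Fuller) limit.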
13,423
Why $\sqrt{n}$ in the definition of asymptotic normality?
You were on the right track with the sample-mean variance intuition. Re-arrange the condition: $$\sqrt{n}(U_n - \theta) \to N(0,v)$$ $$(U_n - \theta) \to \frac{N(0,v)}{\sqrt{n}}$$ $$U_n \to N(\theta,\frac{v}{n})$$ The last equation is informal, but in some ways more intuitive: it says that the deviation of $U_n$ from $\theta$ becomes more and more like a normal distribution as $n$ increases. The variance is shrinking, but the shape becomes closer to a normal distribution. In mathematics, convergence to a right-hand side that itself changes with $n$ is not defined. That is why the same idea is expressed as the original condition you gave, in which the right-hand side is fixed and the left-hand side converges to it.
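A numerical way to see the same thing (my own illustration, not part of the answer): take skewed Exponential(1) data, for which $\theta = 1$ and $v = \mathrm{Var}(X) = 1$, and look at the $\sqrt n$-scaled deviation of the sample mean. Its spread stays near $\sqrt v = 1$ for every $n$, while the unscaled deviation $U_n - \theta$ shrinks to zero:

```python
import numpy as np

rng = np.random.default_rng(42)

def scaled_errors(n, reps=5000):
    # sqrt(n) * (sample mean - theta) for Exponential(1) data: theta = 1, v = 1
    x = rng.exponential(1.0, size=(reps, n))
    return np.sqrt(n) * (x.mean(axis=1) - 1.0)

for n in (10, 100, 1000):
    e = scaled_errors(n)
    # spread of the scaled error, then spread of the raw error U_n - theta
    print(n, round(e.std(), 2), round(e.std() / np.sqrt(n), 4))
```

The first column stays near $1$, the standard deviation of the limit $N(0, v)$; the second column, the spread of $U_n - \theta$ itself, vanishes. The scaled errors also become more symmetric as $n$ grows, which is the "shape becomes closer to normal" part of the statement.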
13,424
stochastic vs deterministic trend/seasonality in time series forecasting
1) As regards your first question, some test statistics have been developed and discussed in the literature to test the null of stationarity and the null of a unit root. Some of the many papers that were written on this issue are the following:

Related to the trend:
Dickey, D. and Fuller, W. (1979), Distribution of the estimators for autoregressive time series with a unit root, Journal of the American Statistical Association 74, 427-431.
Dickey, D. and Fuller, W. (1981), Likelihood ratio statistics for autoregressive time series with a unit root, Econometrica 49, 1057-1071.
Kwiatkowski, D., Phillips, P., Schmidt, P. and Shin, Y. (1992), Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root?, Journal of Econometrics 54, 159-178.
Phillips, P. and Perron, P. (1988), Testing for a unit root in time series regression, Biometrika 75, 335-346.
Durlauf, S. and Phillips, P. (1988), Trends versus random walks in time series analysis, Econometrica 56, 1333-1354.

Related to the seasonal component:
Hylleberg, S., Engle, R., Granger, C. and Yoo, B. (1990), Seasonal integration and cointegration, Journal of Econometrics 44, 215-238.
Canova, F. and Hansen, B. E. (1995), Are seasonal patterns constant over time? A test for seasonal stability, Journal of Business and Economic Statistics 13, 237-252.
Franses, P. (1990), Testing for seasonal unit roots in monthly data, Technical Report 9032, Econometric Institute.
Ghysels, E., Lee, H. and Noh, J. (1994), Testing for unit roots in seasonal time series: some theoretical extensions and a Monte Carlo investigation, Journal of Econometrics 62, 415-442.

The textbook Banerjee, A., Dolado, J., Galbraith, J. and Hendry, D. (1993), Co-Integration, Error Correction, and the Econometric Analysis of Non-Stationary Data, Advanced Texts in Econometrics, Oxford University Press, is also a good reference.

2) Your second concern is justified by the literature.
If there is a unit root, then the traditional t-statistic that you would apply to a linear trend does not follow the standard distribution. See, for example, Phillips, P. (1987), Time series regression with a unit root, Econometrica 55(2), 277-301. If a unit root exists and is ignored, the probability of rejecting the null that the coefficient of a linear trend is zero is reduced. That is, we would end up modelling a deterministic linear trend too often for a given significance level. In the presence of a unit root we should instead transform the data by taking regular differences.

3) For illustration, if you use R you can do the following analysis with your data.

x <- structure(c(7657, 5451, 10883, 9554, 9519, 10047, 10663, 10864,
  11447, 12710, 15169, 16205, 14507, 15400, 16800, 19000, 20198,
  18573, 19375, 21032, 23250, 25219, 28549, 29759, 28262, 28506,
  33885, 34776, 35347, 34628, 33043, 30214, 31013, 31496, 34115,
  33433, 34198, 35863, 37789, 34561, 36434, 34371, 33307, 33295,
  36514, 36593, 38311, 42773, 45000, 46000, 42000, 47000, 47500,
  48000, 48500, 47000, 48900), .Tsp = c(1, 57, 1), class = "ts")

First, you can apply the Dickey-Fuller test for the null of a unit root:

require(tseries)
adf.test(x, alternative = "explosive")
# Augmented Dickey-Fuller Test
# Dickey-Fuller = -2.0685, Lag order = 3, p-value = 0.453
# alternative hypothesis: explosive

and the KPSS test for the reverse null hypothesis, stationarity around a linear trend, against the alternative of a unit root:

kpss.test(x, null = "Trend", lshort = TRUE)
# KPSS Test for Trend Stationarity
# KPSS Trend = 0.2691, Truncation lag parameter = 1, p-value = 0.01

Results: in the ADF test, at the 5% significance level a unit root is not rejected; in the KPSS test, the null of trend stationarity is rejected in favour of a unit root.
Aside note: using lshort=FALSE the null of the KPSS test is not rejected at the 5% level; however, it selects 5 lags. A further inspection, not shown here, suggested that choosing 1-3 lags is appropriate for the data and leads to rejecting the null hypothesis. In principle, we should guide ourselves by the test for which we were able to reject the null hypothesis (rather than by the test for which we did not reject (we accepted) the null). However, a regression of the original series on a linear trend turns out to be unreliable. On the one hand, the R-squared is high (over 90%), which is pointed out in the literature as an indicator of spurious regression.

fit <- lm(x ~ 1 + poly(c(time(x))))
summary(fit)
# Coefficients:
#                  Estimate Std. Error t value Pr(>|t|)
# (Intercept)       28499.3      381.6   74.69   <2e-16 ***
# poly(c(time(x)))  91387.5     2880.9   31.72   <2e-16 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 2881 on 55 degrees of freedom
# Multiple R-squared: 0.9482, Adjusted R-squared: 0.9472
# F-statistic: 1006 on 1 and 55 DF, p-value: < 2.2e-16

On the other hand, the residuals are autocorrelated:

acf(residuals(fit)) # not displayed to save space

Moreover, the null of a unit root in the residuals cannot be rejected.

adf.test(residuals(fit))
# Augmented Dickey-Fuller Test
# Dickey-Fuller = -2.0685, Lag order = 3, p-value = 0.547
# alternative hypothesis: stationary

At this point, you can choose a model to be used to obtain forecasts. For example, forecasts based on a structural time series model and on an ARIMA model can be obtained as follows.
# StructTS
fit1 <- StructTS(x, type = "trend")
fit1
# Variances:
#   level    slope  epsilon
# 2982955        0   487180

# forecasts
p1 <- predict(fit1, 10, main = "Local trend model")
p1$pred
#  [1] 49466.53 50150.56 50834.59 51518.62 52202.65 52886.68 53570.70 54254.73
#  [9] 54938.76 55622.79

# ARIMA
require(forecast)
fit2 <- auto.arima(x, ic = "bic", allowdrift = TRUE)
fit2
# ARIMA(0,1,0) with drift
# Coefficients:
#          drift
#       736.4821
# s.e.  267.0055
# sigma^2 estimated as 3992341: log likelihood=-495.54
# AIC=995.09 AICc=995.31 BIC=999.14

# forecasts
p2 <- forecast(fit2, 10, main = "ARIMA model")
p2$mean
#  [1] 49636.48 50372.96 51109.45 51845.93 52582.41 53318.89 54055.37 54791.86
#  [9] 55528.34 56264.82

A plot of the forecasts:

par(mfrow = c(2, 1), mar = c(2.5,2.2,2,2))
plot((cbind(x, p1$pred)), plot.type = "single", type = "n",
  ylim = range(c(x, p1$pred + 1.96 * p1$se)), main = "Local trend model")
grid()
lines(x)
lines(p1$pred, col = "blue")
lines(p1$pred + 1.96 * p1$se, col = "red", lty = 2)
lines(p1$pred - 1.96 * p1$se, col = "red", lty = 2)
legend("topleft", legend = c("forecasts", "95% confidence interval"),
  lty = c(1,2), col = c("blue", "red"), bty = "n")
plot((cbind(x, p2$mean)), plot.type = "single", type = "n",
  ylim = range(c(x, p2$upper)), main = "ARIMA (0,1,0) with drift")
grid()
lines(x)
lines(p2$mean, col = "blue")
lines(ts(p2$lower[,2], start = end(x)[1] + 1), col = "red", lty = 2)
lines(ts(p2$upper[,2], start = end(x)[1] + 1), col = "red", lty = 2)

The forecasts are similar in both cases and look reasonable. Notice that the forecasts follow a relatively deterministic pattern similar to a linear trend, but we did not explicitly model a linear trend. The reason is the following: i) in the local trend model, the variance of the slope component is estimated as zero. This turns the trend component into a drift that has the effect of a linear trend.
ii) in the ARIMA case, ARIMA(0,1,0) with drift is selected, i.e. a model with a drift for the differenced series. The effect of the constant term on a differenced series is a linear trend. This is discussed in this post. You may check that if a local level model or an ARIMA(0,1,0) without drift is chosen, then the forecasts are a straight horizontal line and, hence, would bear no resemblance to the observed dynamics of the data. Well, this is part of the puzzle of unit root tests and deterministic components.

Edit 1 (inspection of residuals): The autocorrelation and partial ACF do not suggest a structure in the residuals.

resid1 <- residuals(fit1)
resid2 <- residuals(fit2)
par(mfrow = c(2, 2))
acf(resid1, lag.max = 20, main = "ACF residuals. Local trend model")
pacf(resid1, lag.max = 20, main = "PACF residuals. Local trend model")
acf(resid2, lag.max = 20, main = "ACF residuals. ARIMA(0,1,0) with drift")
pacf(resid2, lag.max = 20, main = "PACF residuals. ARIMA(0,1,0) with drift")

As IrishStat suggested, checking for the presence of outliers is also advisable. Two additive outliers are detected using the package tsoutliers.

require(tsoutliers)
resol <- tsoutliers(x, types = c("AO", "LS", "TC"),
  remove.method = "bottom-up",
  args.tsmethod = list(ic = "bic", allowdrift = TRUE))
resol
# ARIMA(0,1,0) with drift
# Coefficients:
#          drift        AO2       AO51
#       736.4821  -3819.000  -4500.000
# s.e.  220.6171   1167.396   1167.397
# sigma^2 estimated as 2725622: log likelihood=-485.05
# AIC=978.09 AICc=978.88 BIC=986.2
# Outliers:
#   type ind time coefhat  tstat
# 1   AO   2    2   -3819 -3.271
# 2   AO  51   51   -4500 -3.855

Looking at the ACF, we can say that, at the 5% significance level, the residuals are random in this model as well.

par(mfrow = c(2, 1))
acf(residuals(resol$fit), lag.max = 20,
  main = "ACF residuals. ARIMA with additive outliers")
pacf(residuals(resol$fit), lag.max = 20,
  main = "PACF residuals. ARIMA with additive outliers")

In this case, the presence of potential outliers does not appear to distort the performance of the models.
This is supported by the Jarque-Bera test for normality; the null of normality in the residuals from the initial models (fit1, fit2) is not rejected at the 5% significance level.

jarque.bera.test(resid1)[[1]]
# X-squared = 0.3221, df = 2, p-value = 0.8513
jarque.bera.test(resid2)[[1]]
# X-squared = 0.426, df = 2, p-value = 0.8082

Edit 2 (plot of residuals and their values)

This is what the residuals look like:

And these are their values in CSV format:

0;6.9205
-0.9571;-2942.4821
2.6108;4695.5179
-0.5453;-2065.4821
-0.2026;-771.4821
0.1242;-208.4821
0.1909;-120.4821
-0.0179;-535.4821
0.1449;-153.4821
0.484;526.5179
1.0748;1722.5179
0.3818;299.5179
-1.061;-2434.4821
0.0996;156.5179
0.4805;663.5179
0.8969;1463.5179
0.4111;461.5179
-1.0595;-2361.4821
0.0098;65.5179
0.5605;920.5179
0.8835;1481.5179
0.7669;1232.5179
1.4024;2593.5179
0.3785;473.5179
-1.1032;-2233.4821
-0.3813;-492.4821
2.2745;4642.5179
0.2935;154.5179
-0.1138;-165.4821
-0.8035;-1455.4821
-1.2982;-2321.4821
-1.9463;-3565.4821
-0.1648;62.5179
-0.1022;-253.4821
0.9755;1882.5179
-0.5662;-1418.4821
-0.0176;28.5179
0.5;928.5179
0.6831;1189.5179
-1.8889;-3964.4821
0.3896;1136.5179
-1.3113;-2799.4821
-0.9934;-1800.4821
-0.4085;-748.4821
1.2902;2482.5179
-0.0996;-657.4821
0.5539;981.5179
2.0007;3725.5179
1.0227;1490.5179
0.27;263.5179
-2.336;-4736.4821
1.8994;4263.5179
0.1301;-236.4821
-0.0892;-236.4821
-0.1148;-236.4821
-1.1207;-2236.4821
0.4801;1163.5179
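The "high R-squared as a symptom of spurious regression" point is easy to reproduce. The following sketch (my own, in Python rather than R, and not part of the original answer) regresses pure random walks, which have no deterministic trend at all, on a linear time trend; the R-squared is nevertheless typically large, which is why the high R-squared of the lm fit above should not be taken at face value:

```python
import numpy as np

rng = np.random.default_rng(7)

def trend_r2(y):
    # R^2 of an OLS regression of y on an intercept and a linear time trend
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# 500 independent pure random walks of length 200 (drift-free, no real trend)
r2s = np.array([trend_r2(np.cumsum(rng.standard_normal(200)))
                for _ in range(500)])
print(np.median(r2s))   # typically sizeable despite the absence of any trend
```

This is the "trends versus random walks" phenomenon analysed by Durlauf and Phillips (1988) in the reference list above: under a unit root, the trend regression's R-squared does not converge to zero but to a nondegenerate random variable, so a large value carries little evidence of a deterministic trend.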
stochastic vs deterministic trend/seasonality in time series forecasting
1) As regards your first question, some tests statistics have been developed and discussed in the literature to test the null of stationarity and the null of a unit root. Some of the many papers that
stochastic vs deterministic trend/seasonality in time series forecasting 1) As regards your first question, some tests statistics have been developed and discussed in the literature to test the null of stationarity and the null of a unit root. Some of the many papers that were written on this issue are the following: Related to the trend: Dickey, D. y Fuller, W. (1979a), Distribution of the estimators for autoregressive time series with a unit root, Journal of the American Statistical Association 74, 427-31. Dickey, D. y Fuller, W. (1981), Likelihood ratio statistics for autoregressive time series with a unit root, Econometrica 49, 1057-1071. Kwiatkowski, D., Phillips, P., Schmidt, P. y Shin, Y. (1992), Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root?, Journal of Econometrics 54, 159-178. Phillips, P. y Perron, P. (1988), Testing for a unit root in time series regression, Biometrika 75, 335-46. Durlauf, S. y Phillips, P. (1988), Trends versus random walks in time series analysis, Econometrica 56, 1333-54. Related to the seasonal component: Hylleberg, S., Engle, R., Granger, C. y Yoo, B. (1990), Seasonal integration and cointegration, Journal of Econometrics 44, 215-38. Canova, F. y Hansen, B. E. (1995), Are seasonal patterns constant over time? a test for seasonal stability, Journal of Business and Economic Statistics 13, 237-252. Franses, P. (1990), Testing for seasonal unit roots in monthly data, Technical Report 9032, Econometric Institute. Ghysels, E., Lee, H. y Noh, J. (1994), Testing for unit roots in seasonal time series. some theoretical extensions and a monte carlo investigation, Journal of Econometrics 62, 415-442. The textbook Banerjee, A., Dolado, J., Galbraith, J. y Hendry, D. (1993), Co-Integration,Error Correction, and the econometric analysis of non-stationary data, Advanced Texts in Econometrics. Oxford University Press is also a good reference. 
2) Your second concern is justified by the literature. If there is a unit root test then the traditional t-statistic that you would apply on a linear trend does not follow the standard distribution. See for example, Phillips, P. (1987), Time series regression with unit root, Econometrica 55(2), 277-301. If a unit root exists and is ignored, then the probability of rejecting the null that the coefficient of a linear trend is zero is reduced. That is, we would end up modelling a deterministic linear trend too often for a given significance level. In the presence of a unit root we should instead transform the data by taking regular differences to the data. 3) For illustration, if you use R you can do the following analysis with your data. x <- structure(c(7657, 5451, 10883, 9554, 9519, 10047, 10663, 10864, 11447, 12710, 15169, 16205, 14507, 15400, 16800, 19000, 20198, 18573, 19375, 21032, 23250, 25219, 28549, 29759, 28262, 28506, 33885, 34776, 35347, 34628, 33043, 30214, 31013, 31496, 34115, 33433, 34198, 35863, 37789, 34561, 36434, 34371, 33307, 33295, 36514, 36593, 38311, 42773, 45000, 46000, 42000, 47000, 47500, 48000, 48500, 47000, 48900), .Tsp = c(1, 57, 1), class = "ts") First, you can apply the Dickey-Fuller test for the null of a unit root: require(tseries) adf.test(x, alternative = "explosive") # Augmented Dickey-Fuller Test # Dickey-Fuller = -2.0685, Lag order = 3, p-value = 0.453 # alternative hypothesis: explosive and the KPSS test for the reverse null hypothesis, stationarity against the alternative of stationarity around a linear trend: kpss.test(x, null = "Trend", lshort = TRUE) # KPSS Test for Trend Stationarity # KPSS Trend = 0.2691, Truncation lag parameter = 1, p-value = 0.01 Results: ADF test, at the 5% significance level a unit root is not rejected; KPSS test, the null of stationarity is rejected in favour of a model with a linear trend. 
Aside note: using lshort=FALSE the null of the KPSS test is not rejected at the 5% level, however, it selects 5 lags; a further inspection not shown here suggested that choosing 1-3 lags is appropriate for the data and leads to reject the null hypothesis. In principle, we should guide ourselves by the test for which we were able to the reject the null hypothesis (rather than by the test for which we did not reject (we accepted) the null). However, a regression of the original series on a linear trend turns out to be not reliable. On the one hand, the R-square is high (over 90%) which is pointed in the literature as an indicator of spurious regression. fit <- lm(x ~ 1 + poly(c(time(x)))) summary(fit) #Coefficients: # Estimate Std. Error t value Pr(>|t|) #(Intercept) 28499.3 381.6 74.69 <2e-16 *** #poly(c(time(x))) 91387.5 2880.9 31.72 <2e-16 *** #--- #Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 # #Residual standard error: 2881 on 55 degrees of freedom #Multiple R-squared: 0.9482, Adjusted R-squared: 0.9472 #F-statistic: 1006 on 1 and 55 DF, p-value: < 2.2e-16 On the other hand, the residuals are autocorrelated: acf(residuals(fit)) # not displayed to save space Moreover, the null of a unit root in the residuals cannot be rejected. adf.test(residuals(fit)) # Augmented Dickey-Fuller Test #Dickey-Fuller = -2.0685, Lag order = 3, p-value = 0.547 #alternative hypothesis: stationary At this point, you can choose a model to be used to obtain forecasts. For example, forecasts based on a structural time series model and on an ARIMA model can be obtained as follows. 
# StructTS
fit1 <- StructTS(x, type = "trend")
fit1
#Variances:
#   level    slope  epsilon
# 2982955        0   487180
#
# forecasts
p1 <- predict(fit1, 10, main = "Local trend model")
p1$pred
# [1] 49466.53 50150.56 50834.59 51518.62 52202.65 52886.68 53570.70 54254.73
# [9] 54938.76 55622.79

# ARIMA
require(forecast)
fit2 <- auto.arima(x, ic = "bic", allowdrift = TRUE)
fit2
#ARIMA(0,1,0) with drift
#Coefficients:
#         drift
#      736.4821
#s.e.  267.0055
#sigma^2 estimated as 3992341: log likelihood=-495.54
#AIC=995.09 AICc=995.31 BIC=999.14
#
# forecasts
p2 <- forecast(fit2, 10, main = "ARIMA model")
p2$mean
# [1] 49636.48 50372.96 51109.45 51845.93 52582.41 53318.89 54055.37 54791.86
# [9] 55528.34 56264.82

A plot of the forecasts:

par(mfrow = c(2, 1), mar = c(2.5,2.2,2,2))
plot((cbind(x, p1$pred)), plot.type = "single", type = "n", 
  ylim = range(c(x, p1$pred + 1.96 * p1$se)), main = "Local trend model")
grid()
lines(x)
lines(p1$pred, col = "blue")
lines(p1$pred + 1.96 * p1$se, col = "red", lty = 2)
lines(p1$pred - 1.96 * p1$se, col = "red", lty = 2)
legend("topleft", legend = c("forecasts", "95% confidence interval"), 
  lty = c(1,2), col = c("blue", "red"), bty = "n")
plot((cbind(x, p2$mean)), plot.type = "single", type = "n", 
  ylim = range(c(x, p2$upper)), main = "ARIMA (0,1,0) with drift")
grid()
lines(x)
lines(p2$mean, col = "blue")
lines(ts(p2$lower[,2], start = end(x)[1] + 1), col = "red", lty = 2)
lines(ts(p2$upper[,2], start = end(x)[1] + 1), col = "red", lty = 2)

The forecasts are similar in both cases and look reasonable. Notice that the forecasts follow a relatively deterministic pattern similar to a linear trend, even though we did not explicitly model a linear trend. The reason is the following: i) in the local trend model, the variance of the slope component is estimated as zero. This turns the trend component into a drift that has the effect of a linear trend.
ii) in the ARIMA model, ARIMA(0,1,0) with drift is selected, i.e. a model with a constant term for the differenced series. The effect of a constant term on a differenced series is a linear trend. This is discussed in this post.

You may check that if a local level model or an ARIMA(0,1,0) without drift is chosen, then the forecasts are a straight horizontal line and, hence, would bear no resemblance to the observed dynamics of the data. Well, this is part of the puzzle of unit root tests and deterministic components.

Edit 1 (inspection of residuals): The autocorrelation and partial ACF do not suggest a structure in the residuals.

resid1 <- residuals(fit1)
resid2 <- residuals(fit2)
par(mfrow = c(2, 2))
acf(resid1, lag.max = 20, main = "ACF residuals. Local trend model")
pacf(resid1, lag.max = 20, main = "PACF residuals. Local trend model")
acf(resid2, lag.max = 20, main = "ACF residuals. ARIMA(0,1,0) with drift")
pacf(resid2, lag.max = 20, main = "PACF residuals. ARIMA(0,1,0) with drift")

As IrishStat suggested, checking for the presence of outliers is also advisable. Two additive outliers are detected using the package tsoutliers.

require(tsoutliers)
resol <- tsoutliers(x, types = c("AO", "LS", "TC"), 
  remove.method = "bottom-up", 
  args.tsmethod = list(ic = "bic", allowdrift = TRUE))
resol
#ARIMA(0,1,0) with drift
#Coefficients:
#         drift        AO2       AO51
#      736.4821  -3819.000  -4500.000
#s.e.  220.6171   1167.396   1167.397
#sigma^2 estimated as 2725622: log likelihood=-485.05
#AIC=978.09 AICc=978.88 BIC=986.2
#Outliers:
#  type ind time coefhat  tstat
#1   AO   2    2   -3819 -3.271
#2   AO  51   51   -4500 -3.855

Looking at the ACF, we can say that, at the 5% significance level, the residuals are random in this model as well.

par(mfrow = c(2, 1))
acf(residuals(resol$fit), lag.max = 20, 
  main = "ACF residuals. ARIMA with additive outliers")
pacf(residuals(resol$fit), lag.max = 20, 
  main = "PACF residuals. ARIMA with additive outliers")

In this case, the presence of potential outliers does not appear to distort the performance of the models.
This is supported by the Jarque-Bera test for normality; the null of normality in the residuals from the initial models (fit1, fit2) is not rejected at the 5% significance level.

jarque.bera.test(resid1)[[1]]
# X-squared = 0.3221, df = 2, p-value = 0.8513
jarque.bera.test(resid2)[[1]]
#X-squared = 0.426, df = 2, p-value = 0.8082

Edit 2 (plot of residuals and their values): This is what the residuals look like (plot not reproduced here). And these are their values in CSV format: 0;6.9205 -0.9571;-2942.4821 2.6108;4695.5179 -0.5453;-2065.4821 -0.2026;-771.4821 0.1242;-208.4821 0.1909;-120.4821 -0.0179;-535.4821 0.1449;-153.4821 0.484;526.5179 1.0748;1722.5179 0.3818;299.5179 -1.061;-2434.4821 0.0996;156.5179 0.4805;663.5179 0.8969;1463.5179 0.4111;461.5179 -1.0595;-2361.4821 0.0098;65.5179 0.5605;920.5179 0.8835;1481.5179 0.7669;1232.5179 1.4024;2593.5179 0.3785;473.5179 -1.1032;-2233.4821 -0.3813;-492.4821 2.2745;4642.5179 0.2935;154.5179 -0.1138;-165.4821 -0.8035;-1455.4821 -1.2982;-2321.4821 -1.9463;-3565.4821 -0.1648;62.5179 -0.1022;-253.4821 0.9755;1882.5179 -0.5662;-1418.4821 -0.0176;28.5179 0.5;928.5179 0.6831;1189.5179 -1.8889;-3964.4821 0.3896;1136.5179 -1.3113;-2799.4821 -0.9934;-1800.4821 -0.4085;-748.4821 1.2902;2482.5179 -0.0996;-657.4821 0.5539;981.5179 2.0007;3725.5179 1.0227;1490.5179 0.27;263.5179 -2.336;-4736.4821 1.8994;4263.5179 0.1301;-236.4821 -0.0892;-236.4821 -0.1148;-236.4821 -1.1207;-2236.4821 0.4801;1163.5179
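The linear-trend-like pattern of the drift forecasts can be checked by hand: for an ARIMA(0,1,0) with drift, the h-step-ahead point forecast is simply the last observation plus h times the drift, i.e. a straight line. A minimal sketch (plain Python rather than R, using the fitted numbers above purely for illustration):

```python
# Forecasts from a random walk with drift: y[t] = y[t-1] + drift + noise.
# The h-step-ahead point forecast is y_T + h * drift, a straight line --
# which is why the ARIMA(0,1,0)-with-drift forecasts resemble a linear trend.

def rw_drift_forecast(last_value, drift, horizon):
    """Point forecasts for a random walk with drift."""
    return [last_value + h * drift for h in range(1, horizon + 1)]

# Illustrative numbers taken from the fitted model above:
# last observation 48900 and estimated drift ~736.48.
forecasts = rw_drift_forecast(48900, 736.4821, 10)
print(forecasts[0])   # ~49636.48, matching p2$mean[1]
print(forecasts[-1])  # ~56264.82, matching p2$mean[10]
```

Setting the drift to zero makes every forecast equal to the last observation, reproducing the flat-line behaviour of the no-drift models mentioned above.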
13,425
stochastic vs deterministic trend/seasonality in time series forecasting
With respect to your non-seasonal data: trends can be of two forms, y(t) = y(t-1) + θ0 (A, a stochastic trend) or Y(t) = a + b*x1 + c*x2 (B, a deterministic trend), where x1 = 1,2,3,4,...,t and x2 = 0,0,0,0,0,1,2,3,4,..., so that one trend applies to observations 1 to t and a second trend applies to observations 6 to t. Your non-seasonal series contained 29 values. I used AUTOBOX, a piece of software that I had helped develop, in a totally automatic fashion. AUTOBOX is a transparent procedure as it details each step in the modeling process. A graph of the series/fitted values/forecasts, the model equation, the model statistics, a plot of the residuals and the table of forecasted values were presented as images (not reproduced here). Using AUTOBOX to form a type A model led to those results. Restricting AUTOBOX to a type B model led to AUTOBOX detecting an increased trend at period 14. In terms of comparing models: since the number of fitted observations differs (26 and 29 respectively), it is not possible to use standard metrics (i.e. r-square, error standard deviation, AIC etc.) to determine dominance, although in this case the nod would go to A. The residuals from A are better due to the AR(2) structure. The forecasts from B are a tad aggressive, while the pattern of the A forecasts is more intuitive. One could hold back, say, 4 observations and evaluate forecast accuracy for a 1-period-out forecast from 4 distinct origins (25, 26, 27 and 28).
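The two deterministic-trend regressors described above (x1 a global linear trend, x2 a second trend switching on at observation 6, so its coefficient measures the change in slope) can be sketched as follows. This is an illustrative construction in Python, not AUTOBOX output:

```python
# Build the two deterministic-trend regressors: x1 is a global linear trend;
# x2 is zero for the first 5 observations and then counts 1, 2, 3, ... so
# its coefficient captures the additional slope starting at observation 6.

def trend_regressors(t, break_at=6):
    x1 = list(range(1, t + 1))
    x2 = [0 if i < break_at else i - break_at + 1 for i in range(1, t + 1)]
    return x1, x2

x1, x2 = trend_regressors(9)
print(x1)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(x2)  # [0, 0, 0, 0, 0, 1, 2, 3, 4]
```

Regressing the series on an intercept, x1 and x2 then fits the type B model, with a trend break at the chosen observation.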
13,426
What are some well known improvements over textbook MCMC algorithms that people use for bayesian inference?
I'm not an expert in any of these, but I thought I'd put them out there anyway to see what the community thought. Corrections are welcome. One increasingly popular method, which is not terribly straightforward to implement, is called Hamiltonian Monte Carlo (or sometimes Hybrid Monte Carlo). It uses a physical model with potential and kinetic energy to simulate a ball rolling around the parameter space, as described in this paper by Radford Neal. The physical model takes a fair amount of computational resources, so you tend to get many fewer updates, but the updates tend to be less correlated. HMC is the engine behind the new STAN software that is being developed as a more efficient and flexible alternative to BUGS or JAGS for statistical modeling. There's also a whole cluster of methods that involve "heating up" the Markov chain, which you can think of as introducing thermal noise to the model and increasing the chances of sampling low-probability states. At first glance, that seems like a bad idea, since you want the model to sample in proportion to the posterior probability. But you actually only end up using the "hot" states to help the chain mix better. The actual samples are only collected when the chain is at its "normal" temperature. If you do it correctly, you can use the heated chains to find modes that an ordinary chain wouldn't be able to get to because of large valleys of low probability blocking the transition from mode-to-mode. A few examples of these methods include Metropolis-coupled MCMC, tempered transitions, parallel tempering, and annealed importance sampling. Finally, you can use sequential Monte Carlo or particle filtering when the rejection rate would be so high that these other methods would all fail. I know the least about this family of methods, so my description may be incorrect here, but my understanding is that it works like this. You start out by running your favorite sampler, even though the chances of rejection are essentially one. 
Rather than rejecting all your samples, you pick the least objectionable ones, and initialize new samplers from there, repeating the process until you find some samples that you can actually accept. Then you go back and correct for the fact that your samples were nonrandom, because you didn't initialize your samplers from random locations. Hope this helps.
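As a rough illustration of the HMC idea described above (this is a minimal, untuned sketch for a 1-D standard normal target, not Neal's or Stan's implementation; the step size and path length are arbitrary choices):

```python
import math, random

# Minimal Hamiltonian Monte Carlo sketch for a 1-D standard normal target
# (potential U(q) = q^2/2): simulate the "ball" with leapfrog steps, then
# accept/reject on the total energy (potential + kinetic).

def hmc_sample(n_samples, eps=0.2, n_leapfrog=20, seed=0):
    rng = random.Random(seed)
    u = lambda q: 0.5 * q * q        # potential energy
    grad_u = lambda q: q             # its gradient
    q, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)      # fresh momentum each iteration
        q_new, p_new = q, p
        p_new -= 0.5 * eps * grad_u(q_new)       # half step for momentum
        for _ in range(n_leapfrog - 1):
            q_new += eps * p_new                 # full step for position
            p_new -= eps * grad_u(q_new)         # full step for momentum
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_u(q_new)       # final half step
        h_old = u(q) + 0.5 * p * p
        h_new = u(q_new) + 0.5 * p_new * p_new
        if math.log(rng.random()) < h_old - h_new:   # Metropolis correction
            q = q_new
        samples.append(q)
    return samples

draws = hmc_sample(5000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(round(mean, 2), round(var, 2))  # should be close to 0 and 1
```

The Metropolis accept/reject step at the end is what keeps the sampler exact despite the discretization error of the leapfrog integrator.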
13,427
Dealing with ties, weights and voting in kNN
When doing kNN you need to keep one thing in mind, namely that it's not a strictly, mathematically derived algorithm, but rather a simple classifier / regressor based on one intuition: the underlying function doesn't change much when the arguments don't change much. Or in other words, the underlying function is locally near-constant. With this assumption, you can estimate the value of the underlying function at any given point by a (possibly weighted) mean of the values of the nearest k points. Keeping this in mind, you can realize there is no clear imperative on what to do when there is no clear winner in majority voting. You can either always use an odd k, or use some injective weighting. In the case of neighbours 3 to 5 being at the same distance from the point of interest, you can either use only two, or use all 5. Again, keep in mind kNN is not some algorithm derived from complex mathematical analysis, but just a simple intuition. It's up to you how you want to deal with those special cases. When it comes to weighting, you base your algorithm on the intuition that the function doesn't change much when the arguments don't change much. So you want to give bigger weights to points that are closer to the point of interest. A good weighting would be, for example, $\frac{1}{||x-y||^2}$, or any other that is relatively big when the distance is small, and relatively small when the distance between points is big (so probably an inverse of some continuous metric function). There has also been a nice paper by Samory Kpotufe and Abdeslam Boularias at NIPS this year touching on the issue of finding the right weighting. Their general intuition is that the underlying function varies differently in different directions (i.e., its different partial derivatives are of different magnitude), hence it would be wise, in some sense, to change the metric / weighting according to this intuition.
They claim this trick generally improves the performance of kNN and kernel regression, and I think they even have some theoretical results to back up this claim (although I'm not sure what those theoretical results actually claim; I haven't had time to go through the whole paper yet). The paper can be downloaded for free from their sites, or after Googling "Gradient Weights help Nonparametric Regressors". Their research is directed towards regression, but I guess it applies to classification to some extent as well. Now, you will probably want to know how you can find the right k, metric, weighting, and action to perform when there are ties, and so on. The sad thing is that it's basically hard to arrive at the right hyperparameters by deep thinking alone; you will probably need to test different bunches of hyperparameters and see which ones work well on some validation set. If you have some computational resources and want to arrive automatically at a good set of hyperparameters, there is a recent idea (that I like very much) to use Gaussian processes for derivative-free optimization in that setting. Let me elaborate: finding the set of hyperparameters that minimizes the error on validation data can be viewed as an optimization problem. Unfortunately, in this setting we can't get the gradient of the function we try to optimize (which is what we usually want, to perform gradient descent or some more advanced methods). Gaussian processes can be used in this setting for finding sets of hyperparameters that have a good chance of performing better than the best ones we have found up to that point. Hence you can iteratively run the algorithm with some set of hyperparameters, then ask the Gaussian process which ones would be best to try next, try those, and so on.
For details, look for the paper "Practical Bayesian Optimization of Machine Learning Algorithms" by Jasper Snoek, Hugo Larochelle and Ryan P. Adams (also to be found on either their websites or via Google).
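The $1/||x-y||^2$ weighting discussed above can be sketched as follows. This is an illustrative Python implementation with made-up names, not code from the paper; the injective tie-break (here: smallest label) is one of the options mentioned earlier:

```python
from collections import defaultdict

# Distance-weighted kNN classification with 1/||x - y||^2 weights.
# `train` is a list of (point, label) pairs; points are numeric tuples.

def weighted_knn_predict(train, query, k=3, tie_break=min):
    dist2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    neighbours = sorted(train, key=lambda pl: dist2(pl[0], query))[:k]
    votes = defaultdict(float)
    for point, label in neighbours:
        d2 = dist2(point, query)
        # an exact match gets an infinite weight and dominates the vote
        votes[label] += 1.0 / d2 if d2 > 0 else float("inf")
    best = max(votes.values())
    winners = [lab for lab, w in votes.items() if w == best]
    return tie_break(winners)  # injective tie-break, e.g. smallest label

train = [((0.0, 0.0), "a"), ((0.1, 0.0), "a"),
         ((1.0, 1.0), "b"), ((1.1, 1.0), "b")]
print(weighted_knn_predict(train, (0.2, 0.1), k=3))  # "a"
```

Swapping `dist2` for another metric, or the weight for any other decreasing function of distance, gives the variants described in the answer.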
13,428
Dealing with ties, weights and voting in kNN
The ideal way to break a tie for a k nearest neighbor in my view would be to decrease k by 1 until you have broken the tie. This will always work regardless of the vote-weighting scheme, since a tie is impossible when k = 1. If you were instead to increase k, then depending on your weighting scheme and number of categories, you would not be able to guarantee a tie break.
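The decrease-k rule can be sketched as follows (illustrative Python; `labels` is assumed to be the neighbour labels already sorted nearest-first):

```python
from collections import Counter

# Decrease-k tie-breaking: if the top vote count among the k nearest labels
# is shared, drop the farthest neighbour and revote; at k = 1 a tie is
# impossible, so this always terminates with a winner.

def vote_with_shrinking_k(labels, k):
    while k >= 1:
        counts = Counter(labels[:k]).most_common()
        if len(counts) == 1 or counts[0][1] > counts[1][1]:
            return counts[0][0]   # clear winner
        k -= 1                    # tie: drop the k-th (farthest) neighbour
    raise ValueError("no neighbours given")

print(vote_with_shrinking_k(["a", "b", "b", "a"], k=4))  # "b"
```

In the example, k = 4 gives a 2-2 tie, but dropping the farthest neighbour leaves votes a:1, b:2, so "b" wins.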
13,429
Dealing with ties, weights and voting in kNN
About the tie part: the best baseline idea for ties is usually random breaking, i.e. selecting a random class from all those winning the vote, and randomly selecting a subset of tied objects large enough to fill k. Such a solution stresses the fact that those are pathological cases that simply don't provide enough information to make a decision in the kNN regime. BTW, if they are common in your data, maybe you should try some more discriminating distance?
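A minimal sketch of the random tie-break baseline (illustrative Python, unweighted votes):

```python
import random
from collections import Counter

# Random tie-breaking: among the classes tied for the most votes,
# pick one uniformly at random.

def random_tie_break_vote(labels, rng=random):
    counts = Counter(labels)
    top = max(counts.values())
    winners = sorted(c for c, n in counts.items() if n == top)
    return rng.choice(winners)

random.seed(1)
print(random_tie_break_vote(["a", "b", "b", "a", "c"]))  # "a" or "b", at random
```

With no tie the random choice is over a single class, so the usual majority vote is recovered unchanged.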
13,430
Dealing with ties, weights and voting in kNN
One possible way is to have the algorithm automatically increase or decrease k until you get a clear winner.
13,431
How to do a generalized linear model with multiple dependent variables in R?
The short answer is that glm doesn't work like that. lm will create mlm objects if you give it a matrix, but this is not widely supported in the generics and anyway couldn't easily generalize to glm, because users need to be able to specify two-column dependent variables for logistic regression models. The solution is to fit the models separately. Assume your IVs and DVs live in a data.frame called dd and are labelled the way they are in your question. The following code makes a list of fitted models indexed by the name of the dependent variable they use:

models <- list()
dvnames <- paste("DV", 1:6, sep='')
ivnames <- paste("IV", 1:n, sep='') ## for some value of n
for (y in dvnames){
  form <- formula(paste(y, "~", paste(ivnames, collapse = " + ")))
  models[[y]] <- glm(form, data = dd, family = 'poisson')
}

To examine the results, just wrap your usual functions in lapply, like this:

lapply(models, summary) ## summarize each model

There are no doubt more elegant ways to do this in R, but that should work.
13,432
How to do a generalized linear model with multiple dependent variables in R?
I was told Multivariate Generalized Linear (Mixed) Models exist that address your problem. I'm not an expert on them, but I would have a look at the SABRE documentation and this book on multivariate GLMs. Maybe they help...
13,433
Examples of hidden Markov models problems?
I've used HMM in a demand / inventory level estimation scenario, where we had goods being purchased from many stores that might or might not be out of inventory of the goods. The sequence of daily demands for these items thus contained zeroes that were legitimate zero demand days and also zeroes that were because the store was out of stock. You would think you'd know whether the store was out of stock from the inventory level, but errors in inventory records propagate and it is not at all uncommon to find a store that thinks it has a positive number of items on hand, but actually has none; the hidden state is, more or less, whether the store actually has any inventory, and the signal is the (daily demand, nominal inventory level). No references for this work, though; we were not supposed to publish the results for competitive reasons. Edit: I'll add that this is especially important because, with zero demands, the store's nominal on hand inventory doesn't ever decrease and cross an order point, triggering an order for more inventory - therefore, a zero on hand state due to erroneous inventory records doesn't get fixed for a long time, until somebody notices something is wrong or a cycle count occurs, which may be many months after the problem has started.
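A toy version of this setup might look as follows. All the transition and emission probabilities below are invented for illustration, not estimates from the actual project; the hidden state is "has stock" vs "stocked out", and the observation is whether the day's demand was zero. The HMM forward algorithm then gives the filtered probability of being stocked out:

```python
# Two-state HMM forward filter: state 0 = has stock, state 1 = stocked out;
# obs[t] = 1 if zero demand was observed on day t, else 0.
# p_zero[s] is the probability of a zero-demand day in state s: an in-stock
# store sometimes sells nothing, but a stocked-out store always records zero.

def forward_filter(obs, p_stay=(0.95, 0.8), p_zero=(0.3, 1.0), prior=(0.9, 0.1)):
    trans = [[p_stay[0], 1 - p_stay[0]],
             [1 - p_stay[1], p_stay[1]]]
    emit = lambda s, o: p_zero[s] if o == 1 else 1 - p_zero[s]
    alpha = [prior[s] * emit(s, obs[0]) for s in (0, 1)]
    for o in obs[1:]:
        alpha = [emit(s, o) * sum(alpha[r] * trans[r][s] for r in (0, 1))
                 for s in (0, 1)]
    total = sum(alpha)
    return [a / total for a in alpha]   # filtered state probabilities

# A long run of zero-demand days makes "stocked out" increasingly likely:
print(forward_filter([1, 1, 1, 1, 1, 1])[1])  # high, close to 0.9
```

A single day with positive demand drives the stocked-out probability to zero, since a sale proves the store had stock, which matches the intuition in the answer.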
13,434
Examples of hidden Markov models problems?
I pretty much experienced the same thing and didn't find much beyond the weather. Areas that come to mind include: speech recognition, change point detection, tagging parts of speech in text, aligning overlapping items/text, and recognizing sign language. One example I found and did some exploration of was in Section 8 of this introduction, which is one of the references for HMM's in Wikipedia. (It's actually pretty fun: your analysis discovers that there are vowels and consonants.) This also introduces you to working with a text corpus, which is useful. (If you want to play with generation with HMMs, you could train on Shakespeare text and then generate faux-Shakespeare.)
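The faux-Shakespeare idea is easy to try for the generation half alone. A toy sketch, using a plain word-bigram Markov chain rather than a full HMM (discovering vowels/consonants needs Baum-Welch training, which is longer); the corpus snippet and function names are invented for illustration:

```python
# Toy Markov-chain text generation: learn word-bigram transitions
# from a snippet, then sample a chain of words from them.
import random
from collections import defaultdict

corpus = ("to be or not to be that is the question "
          "whether tis nobler in the mind to suffer").split()

# bigram table: which words follow which
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, seed=0):
    """Sample n further words starting from `start`."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:              # dead end: restart from a common word
            nxt = follows["to"]
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("to", 8))
</imports_placeholder>```

Trained on the full plays instead of one line, the same few lines of code produce surprisingly Shakespeare-flavoured nonsense.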
13,435
Examples of hidden Markov models problems?
Most speech recognition software uses Hidden Markov Models. You can experiment with natural language processing if you want to get a feel for HMM applications. Here's a good source: Probabilistic Graphical Models, by Koller and Friedman.
13,436
Examples of hidden Markov models problems?
Hidden Markov models are very useful in monitoring HIV. HIV enters the bloodstream and seeks out immune-response cells. It then attaches to proteins on the cell surface, gets into the core of the cell, changes the DNA content of the cell, and starts proliferation of virions until they burst out of the cell. All these stages are unobservable and called latent: an ideal setting for hidden Markov modelling.
13,437
Examples of hidden Markov models problems?
For me, a very nice application of HMMs is chord identification in musical compositions. See, for example, this lecture.
13,438
Examples of hidden Markov models problems?
Markov models may be useful in analyzing the interactions of a user with a website -- for example, on Amazon.com, figuring out which sequences of interactions lead to a checkout, in order to give recommendations in the future. A fun example showing the use of a Markov model is the following: http://freakonometrics.blog.free.fr/index.php?post/2011/12/20/Basic-on-Markov-Chain-(for-parents)
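A sketch of that idea with a plain (fully observed) Markov chain: estimate transition probabilities between page types from session logs, then score a path that ends in checkout. The sessions, page names, and functions below are invented for illustration:

```python
# Estimate a Markov chain over page types from (made-up) session logs,
# then compute the probability of a particular clickstream.
from collections import Counter, defaultdict

sessions = [
    ["home", "search", "product", "cart", "checkout"],
    ["home", "product", "home", "search", "product"],
    ["search", "product", "cart", "checkout"],
    ["home", "search", "search", "product", "cart"],
]

# count observed transitions
counts = defaultdict(Counter)
for s in sessions:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1

# maximum-likelihood transition probabilities (rows sum to 1)
P = {a: {b: n / sum(c.values()) for b, n in c.items()}
     for a, c in counts.items()}

def path_prob(path):
    """Probability of a path, conditional on its first state."""
    p = 1.0
    for a, b in zip(path, path[1:]):
        p *= P.get(a, {}).get(b, 0.0)
    return p

print(path_prob(["search", "product", "cart", "checkout"]))
```

Making the chain *hidden* would mean positing unobserved user intents (browsing vs. buying, say) that emit the observed page views, but even the plain chain above is enough to compare how likely different paths to checkout are.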
13,439
Proportion of explained variance in a mixed-effects model
I can provide some references: Xu, R. (2003). Measuring explained variation in linear mixed effects models. Statistics in Medicine, 22, 3527-3541. DOI:10.1002/sim.1572 Edwards, L. J., Muller, K. E., Wolfinger, R. D., Qaqish, B. F., & Schabenberger, O. (2008). An $R^2$ statistic for fixed effects in the linear mixed model. Statistics in Medicine, 27, 6137-6157. DOI:10.1002/sim.3429 Hössjer, O. (2008). On the coefficient of determination for mixed regression models. Journal of Statistical Planning and Inference, 138, 3022-3038. DOI:10.1016/j.jspi.2007.11.010 Nakagawa, S., & Schielzeth, H. (2013). A general and simple method for obtaining $R^2$ from generalized linear mixed-effects models. Methods in Ecology and Evolution, 4, 133-142. DOI:10.1111/j.2041-210x.2012.00261.x Happy reading!
13,440
Proportion of explained variance in a mixed-effects model
According to this blog post from 2013, the MuMIn package in R can provide R$^2$ values for mixed models à la the approach developed by Nakagawa & Schielzeth 2013$^1$ (which was mentioned in a previous answer).

#load packages
library(lme4)
library(MuMIn)

#Fit model
m <- lmer(mpg ~ gear + disp + (1|cyl), data = mtcars)

#Determine R2:
r.squaredGLMM(m)
      R2m       R2c
0.5476160 0.7150239

The output of the function r.squaredGLMM provides:

R2m: marginal R squared value, associated with the fixed effects
R2c: conditional R squared value, associated with the fixed effects plus the random effects

Note: a comment on the linked blog post suggests that an alternative Nakagawa & Schielzeth inspired approach developed by Jon Lefcheck (using the sem.model.fits function in the piecewiseSEM package) produced identical results. [So you have options :p]. I did not test this latter function, but I did test the r.squaredGLMM() function in the MuMIn package and so can attest that it is still functional today (2018). As for the validity of this approach, I leave reading Nakagawa & Schielzeth (2013) (and the follow-up article Johnson 2014$^2$) up to you.

1: Nakagawa, S., and Schielzeth, H. 2013. A general and simple method for obtaining R2 from generalized linear mixed-effects models. Methods in Ecology and Evolution 4(2): 133-142.

2: Johnson, P. C. D. 2014. Extension of Nakagawa & Schielzeth's R2GLMM to random slopes models. Methods in Ecology and Evolution 5: 944-946.
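Once the variance components of a random-intercept model are known, the two quantities have simple closed forms. A back-of-the-envelope Python version of the Nakagawa & Schielzeth (2013) formulas, with invented variance components (not the mtcars fit above):

```python
# Nakagawa & Schielzeth (2013) R2 for a random-intercept model,
# written out directly from the variance components.
# The numbers below are made up for illustration.

var_fixed = 4.0    # variance of the fixed-effect predictions
var_random = 1.5   # random-intercept variance
var_resid = 2.0    # residual variance

total = var_fixed + var_random + var_resid

# marginal R2: variance explained by the fixed effects alone
R2m = var_fixed / total
# conditional R2: variance explained by fixed plus random effects
R2c = (var_fixed + var_random) / total

print(round(R2m, 3), round(R2c, 3))
```

By construction R2m ≤ R2c, mirroring the R2m/R2c pair that r.squaredGLMM reports; for GLMMs the residual term is replaced by a distribution-specific variance, which is where the rest of the paper's machinery comes in.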
13,441
What's the history of box plots, and how did the "box and whiskers" design evolve?
Chief Executive Officer summary: The history is much longer and more complicated than many people think it is. Executive summary: The history of what Tukey called box plots is tangled up with that of what are now often called dot or strip plots (dozens of other names) and with representations of the empirical quantile function. Box plots in widely current forms are best known through the work of John Wilder Tukey (1970, 1972, 1977). But the idea of showing the median and quartiles as basic summaries -- together often but not always with dots showing all values -- goes back at least to the dispersion diagrams (many variant names) introduced by the geographer Percy Robert Crowe (1933). These were staple fare for geographers and used in many textbooks as well as research papers from the late 1930s on. Bibby (1986, pp.56, 59) gave even earlier references to similar ideas taught by Arthur Lyon Bowley (later Sir Arthur) in his lectures about 1897 and to his recommendation (Bowley, 1910, p.62; 1952, p.73) to use minimum and maximum and 10, 25, 50, 75 and 90% points as a basis for graphical summary. Range bars showing extremes and quartiles are often attributed to Mary Eleanor Spear (1952), but in my reading fewer people cite Kenneth W. Haemer (1948). Haemer's articles on statistical graphics in the American Statistician around 1950 were inventive, have critical bite, and remain well worth re-reading. (Many readers will be able to access them through jstor.org.) In contrast, Spear's books (Spear 1969 is a rehash) were accessible and sensible but deliberately introductory rather than innovative or scholarly. Variants of box plots in which whiskers extend to selected percentiles are commoner than many people seem to think. Again, equivalent plots were used by geographers from the 1930s on.
What is most original in Tukey's version of box plots are first of all criteria for identifying points in the tails to be plotted separately and identified as deserving detailed consideration -- and as often flagging that a variable should be analysed on a transformed scale. His 1.5 IQR rule of thumb emerged only after much experimentation. It has mutated in some hands to a hard rule for deleting data points, which was never Tukey's intent. A punchy, memorable name -- box plot -- did no harm in ensuring much wider impact of these ideas. Dispersion diagram in contrast is rather a dull and dreary term. The fairly long list of references here is, possibly contrary to appearances, not intended to be exhaustive. The aim is just to provide documentation for some precursors and alternatives of the box plot. Specific references may be helpful for detailed queries or if they are close to your field. Conversely, learning about practices in other fields can be salutary. The graphical -- not just cartographical -- expertise of geographers has often been underestimated. More details: Hybrid dot-box plots were used by Crowe (1933, 1936), Matthews (1936), Hogg (1948), Monkhouse and Wilkinson (1952), Farmer (1956), Gregory (1963), Hammond and McCullagh (1974), Lewis (1975), Matthews (1981), Wilkinson (1992, 2005), Ellison (1993, 2001), Wild and Seber (2000), Quinn and Keough (2002), Young et al. (2006) and Hendry and Nielsen (2007) and many others. See also Miller (1953, 1964). Drawing whiskers to particular percentiles, rather than to data points within so many IQR of the quartiles, was emphasised by Cleveland (1985), but anticipated by Matthews (1936) and Grove (1956), who plotted the interoctile range, meaning between the first and seventh octiles, as well as the range and interquartile range. Dury (1963), Johnson (1975), Harris (1999), Myatt (2007), Myatt and Johnson (2009, 2011) and Davino et al. (2014) showed means as well as minimum, quartiles, median and maximum.
Schmid (1954) showed summary graphs with median, quartiles and 5 and 95% points. Bentley (1985, 1988), Davis (2002), Spence (2007, 2014) and Motulsky (2010, 2014, 2018) plotted whiskers to 5 and 95% points. Morgan and Henrion (1990, pp.221, 241), Spence (2001, p.36), and Gotelli and Ellison (2004, 2013, pp.72, 110, 213, 416) plotted whiskers to 10% and 90% points. Harris (1999) showed examples of both 5 and 95% and 10 and 90% points. Altman (1991, pp.34, 63) and Greenacre (2016) plotted whiskers to 2.5% and 97.5% points. Reimann et al. (2008, pp.46-47) plotted whiskers to 5% and 95% and 2% and 98% points. Parzen (1979a, 1979b, 1982) hybridised box and quantile plots as quantile-box plots. See also (e.g.) Shera (1991), Militký and Meloun (1993), Meloun and Militký (1994). Note, however, that the quantile box plot of Keen (2010) is just a box plot with whiskers extending to the extremes. In contrast, the quantile box plots of JMP are evidently box plots with marks at 0.5%, 2.5%, 10%, 90%, 97.5%, 99.5%: see Sall et al. (2014, pp.143-4). Here are some notes on variants of quantile-box plots. (A) The box-percentile plot of Esty and Banfield (2003) plots the same information differently, plotting data as continuous lines and producing a symmetric display in which the vertical axis shows quantiles and the horizontal axis shows not plotting position $p$, but both min($p, 1 - p$) and its mirror image $-$min($p, 1 - p$). Minor detail: in their paper plotting positions are misdescribed as "percentiles". See also Martinez et al. (2011, 2017), which perpetuates that confusion. The idea of plotting min($p, 1 - p$) (or its percent equivalent) appears independently in (B) "mountain plots" (Krouwer 1992; Monti 1995; Krouwer and Monti 1995; Goldstein 1996) and in (C) plots of the "flipped empirical distribution function" (Huh 1995). See also Xue and Titterington (2011) for a detailed analysis of folding a distribution function at any quantile. 
From literature seen by me, it seems that none of these threads -- quantile-box plots or the later variants (A) (B) (C) -- cites each other. !!! as at 3 October 2018 details for some references need to be supplied in the next edit. Altman, D.G. 1991. Practical Statistics in Medical Research. London: Chapman and Hall. Bentley, J.L. 1985. Programming pearls: Selection. Communications of the ACM 28: 1121-1127. Bentley, J.L. 1988. More Programming Pearls: Confessions of a Coder. Reading, MA: Addison-Wesley. Bibby, J. 1986. Notes Towards a History of Teaching Statistics. Edinburgh: John Bibby (Books). Bowley, A.L. 1910. An Elementary Manual of Statistics. London: Macdonald and Evans. (seventh edition 1952) Cleveland, W.S. 1985. Elements of Graphing Data. Monterey, CA: Wadsworth. Crowe, P.R. 1933. The analysis of rainfall probability: A graphical method and its application to European data. Scottish Geographical Magazine 49: 73-91. Crowe, P.R. 1936. The rainfall regime of the Western Plains. Geographical Review 26: 463-484. Davis, J.C. 2002. Statistics and Data Analysis in Geology. New York: John Wiley. Dickinson, G.C. 1963. Statistical Mapping and the Presentation of Statistics. London: Edward Arnold. (second edition 1973) Dury, G.H. 1963. The East Midlands and the Peak. London: Thomas Nelson. Farmer, B.H. 1956. Rainfall and water-supply in the Dry Zone of Ceylon. In Steel, R.W. and C.A. Fisher (eds) Geographical Essays on British Tropical Lands. London: George Philip, 227-268. Gregory, S. 1963. Statistical Methods and the Geographer. London: Longmans. (later editions 1968, 1973, 1978; publisher later Longman) Grove, A.T. 1956. Soil erosion in Nigeria. In Steel, R.W. and C.A. Fisher (eds) Geographical Essays on British Tropical Lands. London: George Philip, 79-111. Haemer, K.W. 1948. Range-bar charts. American Statistician 2(2): 23. Hendry, D.F. and B. Nielsen. 2007. Econometric Modeling: A Likelihood Approach. Princeton, NJ: Princeton University Press. Hogg, W.H. 1948. 
Rainfall dispersion diagrams: a discussion of their advantages and disadvantages. Geography 33: 31-37. Ibrekk, H. and M.G. Morgan. 1987. Graphical communication of uncertain quantities to nontechnical people. Risk Analysis 7: 519-529. Johnson, B.L.C. 1975. Bangladesh. London: Heinemann Educational. Keen, K.J. 2010. Graphics for Statistics and Data Analysis with R. Boca Raton, FL: CRC Press. (2nd edition 2018) Lewis, C.R. 1975. The analysis of changes in urban status: a case study in Mid-Wales and the middle Welsh borderland. Transactions of the Institute of British Geographers 64: 49-65. Martinez, W.L., A.R. Martinez and J.L. Solka. 2011. Exploratory Data Analysis with MATLAB. Boca Raton, FL: CRC Press. Matthews, H.A. 1936. A new view of some familiar Indian rainfalls. Scottish Geographical Magazine 52: 84-97. Matthews, J.A. 1981. Quantitative and Statistical Approaches to Geography: A Practical Manual. Oxford: Pergamon. Meloun, M. and J. Militký. 1994. Computer-assisted data treatment in analytical chemometrics. I. Exploratory analysis of univariate data. Chemical Papers 48: 151-157. Militký, J. and M. Meloun. 1993. Some graphical aids for univariate exploratory data analysis. Analytica Chimica Acta 277: 215-221. Miller, A.A. 1953. The Skin of the Earth. London: Methuen. (2nd edition 1964) Monkhouse, F.J. and H.R. Wilkinson. 1952. Maps and Diagrams: Their Compilation and Construction. London: Methuen. (later editions 1963, 1971) Morgan, M.G. and M. Henrion. 1990. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge: Cambridge University Press. Myatt, G.J. 2007. Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining. Hoboken, NJ: John Wiley. Myatt, G.J. and Johnson, W.P. 2009. Making Sense of Data II: A Practical Guide to Data Visualization, Advanced Data Mining Methods, and Applications. Hoboken, NJ: John Wiley. Myatt, G.J. and Johnson, W.P. 2011. 
Making Sense of Data III: A Practical Guide to Designing Interactive Data Visualizations. Hoboken, NJ: John Wiley. Ottaway, B. 1973. Dispersion diagrams: a new approach to the display of carbon-14 dates. Archaeometry 15: 5-12. Parzen, E. 1979a. Nonparametric statistical data modeling. Journal, American Statistical Association 74: 105-121. Parzen, E. 1979b. A density-quantile function perspective on robust estimation. In Launer, R.L. and G.N. Wilkinson (eds) Robustness in Statistics. New York: Academic Press, 237-258. Parzen, E. 1982. Data modeling using quantile and density-quantile functions. In Tiago de Oliveira, J. and Epstein, B. (eds) Some Recent Advances in Statistics. London: Academic Press, 23-52. Quinn, G.P. and M.J. Keough. 2002. Experimental Design and Data Analysis for Biologists. Cambridge: Cambridge University Press. Reimann, C., P. Filzmoser, R.G. Garrett and R. Dutter. 2008. Statistical Data Analysis Explained: Applied Environmental Statistics with R. Chichester: John Wiley. Sall, J., A. Lehman, M. Stephens and L. Creighton. 2014. JMP Start Statistics: A Guide to Statistics and Data Analysis Using JMP. Cary, NC: SAS Institute. Shera, D.M. 1991. Some uses of quantile plots to enhance data presentation. Computing Science and Statistics 23: 50-53. Spear, M.E. 1952. Charting Statistics. New York: McGraw-Hill. Spear, M.E. 1969. Practical Charting Techniques. New York: McGraw-Hill. Tukey, J.W. 1970. Exploratory data analysis. Limited Preliminary Edition. Volume I. Reading, MA: Addison-Wesley. Tukey, J.W. 1972. Some graphic and semi-graphic displays. In Bancroft, T.A. and Brown, S.A. (eds) Statistical Papers in Honor of George W. Snedecor. Ames, IA: Iowa State University Press, 293-316. (also accessible at http://www.edwardtufte.com/tufte/tukey) Tukey, J.W. 1977. Exploratory Data Analysis. Reading, MA: Addison-Wesley. Wild, C.J. and G.A.F. Seber. 2000. Chance Encounters: A First Course in Data Analysis and Inference. New York: John Wiley.
13,442
Evaluate Random Forest: OOB vs CV
Note: While I feel that my answer is probably correct, I also feel doubtful, because I made all this up by thinking about the problem for only 30-60 minutes after reading the question. So you had better be sceptical, scrutinize this, and not get fooled by my possibly overly confident writing style (using big words and fancy Greek symbols doesn't mean I am right). Summary This is just a summary; all details are in sections $\S1$ and $\S2$ below. Let's assume the case of classification (this can be extended to regression too, but I omit that for brevity). Essentially, our goal is to estimate the error of a forest of trees. Both the out-of-bag error and k-fold cross-validation try to tell us the probability that: The forest gives the correct classification (k-fold cross-validation looks at it this way). Which is identical to the probability that: The majority vote of the forest's trees is the correct vote (OOBE looks at it this way). And both are identical. The only difference is that k-fold cross-validation and OOBE assume different sizes of learning sample. For example: In 10-fold cross-validation, the learning set is 90% of the data, while the testing set is 10%. However, in OOBE, if each bag has $n$ samples, where $n$ is the total number of samples in the whole sample set, then the learning set is in practice only about 66% (two thirds) of the data and the testing set about 33% (one third). Therefore, in my view, the only reason OOBE is a pessimistic estimate of the forest's error is that it usually trains on a smaller number of samples than is usual with k-fold cross-validation (where 10 folds is common). For the same reason, I also think that 2-fold cross-validation will be a more pessimistic estimate of the forest's error than OOBE, and 3-fold cross-validation approximately as pessimistic as OOBE. 1. 
Understanding out-of-bag error 1.1 Common view on bagging Each tree in RF is grown from a list of $n$ samples that are randomly drawn from the learning set $\mathcal{X}$ with replacement. This way, the $n$ samples can contain duplicates, and if $n = |\mathcal{X}|$ then approximately one third of the samples in $\mathcal{X}$ are likely to end up not being in the list of $n$ samples used to grow a given tree (these are the out-of-bag samples of this specific tree). This process is repeated independently for each tree, so each tree has a different set of out-of-bag samples. 1.2. Another view on bagging Now, let's re-describe bagging a bit differently, with the hope of finding an equivalent description that is simpler to deal with. I do this by stating that tree $t$ is trained on the bagged samples in the set $\mathcal{X}_t \subseteq \mathcal{X}$. However, this is not exactly true, as the set $\mathcal{X}_t$ does not have duplicated samples (this is how sets work), while, on the other hand, the list of $n$ samples can have duplicates. Therefore, we can say that a tree $t$ is grown by analysing the samples in $\mathcal{X}_t$ plus a number of randomly chosen duplicates drawn from $\mathcal{X}_t$, namely $\mathcal{X}_{t,1}, \mathcal{X}_{t,2}, \ldots, \mathcal{X}_{t,r} \subseteq \mathcal{X}_t$, such that: \begin{equation} |\mathcal{X}_t| + \sum_{i=1}^r|\mathcal{X}_{t,i}| = n \end{equation} It is trivial to see that from this collection of sets $\mathcal{C} = \{\mathcal{X}_t, \mathcal{X}_{t,1}, \ldots, \mathcal{X}_{t,r}\}$ we can define a list of $n$ samples containing duplicates by simply appending the elements of each set $\mathcal{C}_i \in \mathcal{C}$ to an array $a$. This way, for any $1 \le p \le n$, there exists at least one value of $i$ such that $a[p] \in \mathcal{C}_i$. We can also see that the list of $n$ samples in the array $a$ is a generalization of bagging as defined in Section 1.1. 
It is trivial to see that for some specific definition of $\mathcal{X}_t$ as given in this section ($\S1.2$), the list of samples in array $a$ can be exactly identical to the list of samples defined in Section 1.1. 1.3. Simplifying bagging Instead of growing tree $t$ from the samples in array $a$, we will grow it from the duplication-free list of instances found in $\mathcal{X}_t$ only. I believe that, if $n$ is large enough, a tree $t$ grown by analysing the samples in $\mathcal{X}_t$ is identical to another tree $t'$ grown from the samples in array $a$. My reason is that each sample in $\mathcal{X}_t$ is equally likely to be duplicated as any other sample in the same set. This means that, when we measure the information gain (IG) of some split, the IG will remain identical, as the entropies will remain identical too. And the reason I believe the entropies won't change systematically for a given split is that the empirically measured probability of a sample having a specific label in some subset (after applying a decision split) won't change either. And the reason the probabilities shouldn't change, in my view, is that all samples in $\mathcal{X}_t$ are equally likely to be duplicated into $d$ copies. 1.4 Measuring out-of-bag errors Let $\mathcal{O}_t$ be the out-of-bag samples of tree $t$, i.e. $\mathcal{O}_t = \mathcal{X} \setminus \mathcal{X}_t$. Then the error of a single tree $t$ is: \begin{equation} \frac{\text{total $\mathbf{x}$ in $\mathcal{O}_t$ misclassified by $t$}}{|\mathcal{O}_t|} \end{equation} And the total error of a forest with $n_t$ trees is: \begin{equation} \frac{\sum_{t=1}^{n_t} \text{total $\mathbf{x}$ in $\mathcal{O}_t$ misclassified by $t$}}{\sum_{t=1}^{n_t}|\mathcal{O}_t|} \end{equation} which can be thought of as the empirically measured probability that a single tree of the forest casts an incorrect vote on an out-of-bag sample. 2. 
Understanding k-fold cross-validation First we partition the learning set $\mathcal{X}$ into $n_k$ equally-sized partitions, namely $\mathcal{K} = \{\mathcal{K}_1, \mathcal{K}_2, \ldots, \mathcal{K}_{n_k}\}$. I.e. $\mathcal{K}_1 \cup \mathcal{K}_2 \cup \ldots \cup \mathcal{K}_{n_k} = \mathcal{X}$, and for any distinct $\mathcal{K}_i, \mathcal{K}_j \in \mathcal{K}$, $\mathcal{K}_i \cap \mathcal{K}_j = \emptyset$ (this is what partitioning implies). Let $\mathcal{K}_t$ be the testing fold, and $\mathcal{K} \setminus \{\mathcal{K}_t\}$ the set of learning folds. Let $f_t$ be a forest of trees built using $\mathcal{K} \setminus \{\mathcal{K}_t\}$ as the learning set. Then the k-fold cross-validation error estimate for such forests is: \begin{equation} \frac{\sum_{t=1}^{n_k} \text{total $\mathbf{x}$ in $\mathcal{K}_t$ misclassified by $f_t$}}{\sum_{t=1}^{n_k} |\mathcal{K}_t|} \end{equation} which is the empirically measured probability that such a forest misclassifies an input sample.
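The "two thirds in, one third out" figures used above can be checked numerically. The sketch below (plain Python, not tied to any RF library; the function name is mine) draws one bootstrap sample of size $n$ and compares the fraction of distinct in-bag points with the exact value $1-(1-1/n)^n$, whose limit is $1-1/e \approx 0.632$; so "about two thirds in-bag, one third out-of-bag" is a mild rounding of roughly 63% and 37%.

```python
import random

def inbag_fraction(n, seed=0):
    """Draw one bootstrap sample of size n from {0, ..., n-1} and
    return the fraction of distinct points that landed in the bag."""
    rng = random.Random(seed)
    bag = {rng.randrange(n) for _ in range(n)}
    return len(bag) / n

n = 10_000
exact = 1 - (1 - 1 / n) ** n      # about 1 - 1/e, i.e. roughly 0.632
sampled = inbag_fraction(n)
print(f"exact in-bag fraction  : {exact:.3f}")
print(f"one simulated bootstrap: {sampled:.3f}")
```

So each tree's out-of-bag "testing set" is roughly 37% of the data, slightly more than the one-third figure quoted above.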
13,443
Evaluate Random Forest: OOB vs CV
The motivating claim: "Therefore in my view the only reason why OOBE is a pessimistic estimation of forest's error is only because it usually trains by a smaller number of samples than usually done with k-fold cross-validation (where 10 folds is common)." does not seem correct. It is not true that the OOBE error estimate is pessimistic because it is trained on a smaller number of samples; actually the contrary applies. In fact, each single tree of a random forest is trained on a bootstrapped sample of the training set, so while it is true that on average every single tree sees about 66% of the training set, the random forest ensemble overall sees more than that (depending on the number of trees fitted), because the out-of-bag samples of individual trees need not overlap. Hence, supposing we fit a random forest composed of $B$ trees to the entire training set of $n$ samples (as in the OOBE case), we have that: on average each sample $z_i$ appears in the training set of $\frac{2}{3}B$ trees and in the out-of-bag samples of the remaining $\frac{1}{3}B$ trees; the probability that $z_i$ does not appear in the training set of any of $m$ selected trees is $p_m = (\frac{1}{3})^m$. From the two points above it follows that the OOBE on sample $z_i$ is on average computed over a subset of $\frac{1}{3}B$ trees that collectively are trained on $n_{OOB}(B) = n-1-np_{\frac{B}{3}} = n(1-p_{\frac{B}{3}})-1$ data samples on average. When we instead estimate the error with k-fold CV, we are doing the following (let's take $k=10$): we fit $B$ trees using $0.9n$ data samples, but again each tree uses a bootstrapped version of the training set, so collectively the ensemble sees $n_{10\text{-}fold}(B) = 0.9n(1-p_B)$ data samples on average. Then, noting that $p_{\frac{B}{3}}<0.1$ implies $n_{OOB} > n_{10\text{-}fold}$, we conclude that the ensemble's collective training set is larger in the OOB case whenever $B>6$, which is almost always the case in practice. 
Conclusion Given what we found above, what can we say about the OOB vs k-fold CV estimates for random forests? Is the OOB estimate pessimistic? Since the OOBE is evaluated on ensembles that have collectively seen more data, it should actually be a less pessimistically biased estimate of the test error than k-fold CV. However, since OOBE ensembles are composed of fewer trees and are trained on overlapping training sets, the variance of the estimate can be larger. In fact, it can even be shown that as $B\to\infty$ the OOBE converges to the LOO-CV (leave-one-out cross-validation) error estimate [1], which is approximately unbiased but has higher variance than k-fold CV with $k<n$. So neither estimator is by default better than the other, even though OOBE has a clear computational advantage, since you can fit and test your forest at the same time. [1] Hastie, T., Tibshirani, R., Friedman, J. (2001). The Elements of Statistical Learning.
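The collective-training-set argument above is easy to tabulate. The snippet below simply evaluates this answer's own formulas $n_{OOB}(B) = n(1-p_{B/3})-1$ and $n_{10\text{-}fold}(B) = 0.9\,n(1-p_B)$ with $p_m = (1/3)^m$; the function names are mine, chosen for illustration.

```python
def p(m):
    """Probability (under the 1/3 out-of-bag approximation) that a given
    sample is out-of-bag for every one of m trees."""
    return (1 / 3) ** m

def n_oob(n, B):
    # average collective training-set size behind the OOB estimate
    return n * (1 - p(B / 3)) - 1

def n_10fold(n, B):
    # average collective training-set size for one fold of 10-fold CV
    return 0.9 * n * (1 - p(B))

n = 1000
for B in (3, 6, 7, 100):
    print(B, round(n_oob(n, B)), round(n_10fold(n, B)))
```

For $n=1000$, $n_{OOB}$ overtakes $n_{10\text{-}fold}$ at $B=7$, matching the $B>6$ threshold derived above.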
13,444
Are Random Forest and Boosting parametric or non-parametric?
Parametric models have parameters that are inferred from the data, or assumptions regarding the data distribution, whereas RF, neural nets, or boosted trees have parameters related to the algorithm itself, but they don't need assumptions about your data distribution, i.e. they don't fit your data to a theoretical distribution. In fact almost all algorithms have parameters, such as the number of iterations or margin values, related to optimization.
13,445
Are Random Forest and Boosting parametric or non-parametric?
I think the criterion for parametric vs. non-parametric is this: whether the number of parameters grows with the number of training samples. For logistic regression and SVM, once you have selected the features, you won't get more parameters by adding more training data. But for RF and the like, the details of the model will change (such as the depth of the trees) even though the number of trees does not change.
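A toy way to see this criterion in action (my own sketch, not a real CART implementation): grow an unpruned binary tree on a label sequence until every leaf is pure and count its nodes. With alternating labels the node count grows linearly with $n$, whereas a logistic regression on the same single feature would always have two parameters (slope and intercept) no matter how large $n$ gets.

```python
def count_tree_nodes(ys):
    """Split the label sequence at its midpoint until each leaf is pure;
    return the number of nodes in the resulting (unpruned) tree."""
    if len(ys) <= 1 or len(set(ys)) == 1:
        return 1                      # a pure leaf
    mid = len(ys) // 2
    return 1 + count_tree_nodes(ys[:mid]) + count_tree_nodes(ys[mid:])

for n in (8, 32, 128):
    ys = [i % 2 for i in range(n)]    # alternating labels: worst case
    print(n, count_tree_nodes(ys))    # node count is 2n - 1 here
```

The alternating labels force every leaf down to a single sample, so the "parameter count" (tree nodes) scales with the sample size, which is exactly the growth that makes the model non-parametric under this criterion.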
13,446
Are Random Forest and Boosting parametric or non-parametric?
The term "non-parametric" is a bit of a misnomer, as generally these models/algorithms are defined as having a number of parameters which increases as the sample size increases. Whether an RF does this or not depends on how the tree splitting/pruning algorithm works. If no pruning is done, and splitting is based on sample-size rules (e.g. split a node if it contains more than 10 data points), then an RF would be non-parametric. However, there are other "parametric" methods, like regression, which become somewhat "non-parametric" once you add in feature selection methods. In my view the process of feature selection for linear/logistic regression is very similar to tree-based methods. I think a lot of what the ML community has done is fill in the space of how to convert a set of "raw inputs" into "regression inputs". At the basic level, a regression tree is still a "linear model", just with a transformed set of inputs. Splines are in a similar group as well. Regarding assumptions, ML models are not "assumption free". Some of the assumptions for ML would be things like "validation error is similar to the error for a new case" - that is an assumption about the distribution of the errors! The choice of how to measure "error" is also an assumption about the distribution of errors, e.g. using squared error vs absolute error as the measure you are minimising (corresponding to a normal vs a Laplace distribution). Whether to treat/remove "outliers" is also a distributional assumption (e.g. normal vs Cauchy distribution). I think instead that ML output just doesn't bother checking whether "underlying assumptions" are true; it is more based on checking whether the outputs "look good/reasonable" (similar to the IT culture of testing... does input + process = good output?). This is often better, because "modelling assumptions" (e.g. that the error terms are normally distributed) may not uniquely characterise any algorithm. 
Furthermore, the predictions might not be that different if we change assumptions (e.g. normal vs t with 30 degrees of freedom). However, we see that the ML community has rediscovered a lot of the practical problems that statisticians knew about: the bias-variance trade-off, the need for large datasets to fit complex models (i.e. regression with n < p is a difficult modelling problem), and the problems of data dredging (overfitting) vs omitting key factors (underfitting). One aspect that I think ML has done better is the notion of reproducibility: a good model should work on multiple datasets. The train-validate-test idea is a useful way to bring this concept to the practical level.
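The point that the error measure is itself a distributional assumption can be made concrete: minimising squared error yields the mean (the Gaussian MLE of location), while minimising absolute error yields the median (the Laplace MLE). A brute-force sketch (the names, data, and 0.01 search grid are mine, for illustration):

```python
def best_constant(ys, loss):
    """Brute-force the constant prediction c (on a 0.01 grid over [0, 10])
    that minimises the total loss over the data."""
    grid = [c / 100 for c in range(0, 1001)]
    return min(grid, key=lambda c: sum(loss(y - c) for y in ys))

data = [1.0, 1.2, 1.4, 1.6, 9.0]             # one gross "outlier" at 9.0

sq = best_constant(data, lambda r: r * r)     # squared error -> the mean
ab = best_constant(data, lambda r: abs(r))    # absolute error -> the median
print(sq, ab)
```

The squared-error fit is dragged toward the outlier, while the absolute-error fit is not, which is exactly the normal-vs-Laplace (and outlier-handling) choice described above.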
Are Random Forest and Boosting parametric or non-parametric?
The term "non-parametric" is a bit of a misnomer, as generally these models/algorithms are defined as having the number of parameters which increase as the sample size increases. Whether a RF does thi
Are Random Forest and Boosting parametric or non-parametric?
The term "non-parametric" is a bit of a misnomer, as such models/algorithms are generally defined as having a number of parameters which increases as the sample size increases. Whether an RF does this or not depends on how the tree splitting/pruning algorithm works. If no pruning is done, and splitting is based on sample-size rules (e.g. split a node if it contains more than 10 data points), then an RF would be non-parametric. However, there are other "parametric" methods like regression, which become somewhat "non-parametric" once you add in feature selection methods. In my view the process of feature selection for linear/logistic regression is very similar to tree-based methods. I think a lot of what the ML community has done is fill in the space of how to convert a set of "raw inputs" into "regression inputs". At the basic level, a regression tree is still a "linear model" - but with a transformed set of inputs. Splines are in a similar group as well. Regarding assumptions, ML models are not "assumption free". Some of the assumptions for ML would be things like "validation error is similar to the error for a new case" - that is an assumption about the distribution of the errors! The choice of how to measure "error" is also an assumption about the distribution of errors - e.g. using squared error vs absolute error as the measure you are minimising (normal vs Laplace distribution). Whether to treat/remove "outliers" is also a distributional assumption (e.g. normal vs Cauchy distribution). I think instead that ML output just doesn't bother checking whether "underlying assumptions" are true - it is more based on checking whether the outputs "look good/reasonable" (similar to the IT culture of testing: does input + process = good output?). This is often better because "modelling assumptions" (e.g. that the error terms are normally distributed) may not uniquely characterise any algorithm.
Further, the predictions might also not be that different if we change assumptions (e.g. normal vs t with 30 degrees of freedom). However, we see that the ML community has discovered a lot of the practical problems that statisticians knew about - the bias-variance trade-off, the need for large datasets to fit complex models (i.e. regression with n < p is a difficult modelling problem), the problems of data dredging (overfitting) vs omitting key factors (underfitting). One aspect where I think ML has done better is the notion of reproducibility - a good model should work on multiple datasets. The idea of test-train-validate is a useful way to bring this concept to the practical level.
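One way to see the "number of parameters grows with n" point: an unpruned tree that keeps splitting nodes until they fall below a minimum size acquires more leaves (hence more fitted constants) as the sample grows. A minimal sketch of this, with a hypothetical count_leaves helper that always splits a sorted sample at its median:

```python
import random

def count_leaves(xs, min_size=10):
    """Recursively split a sorted 1-D sample at its median until nodes
    would fall below min_size; return the number of leaves (= number of
    fitted constants in the tree)."""
    if len(xs) < 2 * min_size:
        return 1
    mid = len(xs) // 2
    return count_leaves(xs[:mid], min_size) + count_leaves(xs[mid:], min_size)

random.seed(0)
for n in (100, 1000, 10000):
    xs = sorted(random.random() for _ in range(n))
    print(n, count_leaves(xs))  # leaf count grows with the sample size
```

The leaf count here depends only on the sample size, and grows roughly linearly with it: the "non-parametric" behaviour described above.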
Are Random Forest and Boosting parametric or non-parametric?
In the statistical sense, a model is parametric if its parameters are learned or inferred from the data. A tree in this sense is nonparametric. Of course the tree depth is a parameter of the algorithm, but it is not inherently derived from the data; rather, it is an input parameter that has to be provided by the user.
Are Random Forest and Boosting parametric or non-parametric?
I would have thought that the fact that a given training set has only one possible set of computed parameters would also determine whether the model is parametric. This is the case for boosting, logistic regression, linear regression and models of this sort, which would mostly be considered parametric, whereas the parameters estimated in things like neural networks can differ depending on how the same set is sampled to train the model. Boosting, which will always update the pseudo-loss at every iteration in the same way given a training set, seems to me to be more like a parametric training method. But I'm really just guessing.
What is Recurrent Reinforcement Learning
What is "recurrent reinforcement learning"? Recurrent reinforcement learning (RRL) was first introduced for training neural network trading systems in 1996. "Recurrent" means that previous output is fed back into the model as part of the input. It was soon extended to trading in the FX market. The RRL technique has been found to be a successful machine learning technique for building financial trading systems. What is the difference between "recurrent reinforcement learning" and normal "reinforcement learning" (like the Q-learning algorithm)? The RRL approach differs clearly from dynamic programming and reinforcement algorithms such as TD-learning and Q-learning, which attempt to estimate a value function for the control problem. The RRL framework allows one to create a simple and elegant problem representation, avoids Bellman's curse of dimensionality, and offers compelling advantages in efficiency: RRL produces real-valued actions (portfolio weights) naturally, without resorting to the discretization method used in Q-learning. RRL has more stable performance than Q-learning when exposed to noisy datasets. The Q-learning algorithm is more sensitive to the choice of value function (perhaps) due to the recursive property of dynamic optimization, while the RRL algorithm is more flexible in its choice of objective function and saves computational time. With RRL, trading systems can be optimized by maximizing performance functions, $U( )$, such as "profit" (return after transaction costs), "wealth", utility functions of wealth, or risk-adjusted performance ratios like the Sharpe ratio. Here you will find a Matlab implementation of the RRL algorithm.
References:
Reinforcement Learning for Trading
Reinforcement Learning for Trading Systems and Portfolios
FX Trading via Recurrent Reinforcement Learning
Stock Trading with Recurrent Reinforcement Learning (RRL)
Algorithm Trading using Q-Learning and Recurrent Reinforcement Learning
Exploring Algorithms for Automated FX Trading - Constructing a Hybrid Model
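A toy sketch of the "previous output fed back as input" idea, in the spirit of the RRL trader (Python rather than the Matlab implementation mentioned above; the helper name, weights, and returns are all made up for illustration): the position $F_t$ depends both on the current return and on the previous position $F_{t-1}$:

```python
import math, random

def rrl_positions(returns, w, u, b):
    """One forward pass of an RRL-style trader: the previous position
    F_{t-1} is fed back as an input at each step, which is what makes
    the model 'recurrent'. tanh keeps the position in [-1, 1]."""
    F_prev, positions = 0.0, []
    for r in returns:
        F = math.tanh(w * r + u * F_prev + b)
        positions.append(F)
        F_prev = F
    return positions

random.seed(1)
rets = [random.gauss(0, 0.01) for _ in range(5)]
print(rrl_positions(rets, w=50.0, u=0.5, b=0.0))
```

Setting the feedback weight u to zero removes the recurrence and changes every position after the first, which makes the role of the fed-back output easy to see.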
What is Recurrent Reinforcement Learning
The distinction of (Deep) Recurrent RL is that the function mapping the agent's observations to its output action is a Recurrent Neural Network. A Recurrent Neural Network is a type of neural network that processes the observations sequentially, applying the same update at each time step. Original paper: Deep Recurrent Q-Learning for Partially Observable MDPs
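A minimal sketch of "the same update applied at each time step" (Python, scalar weights for brevity; the helper name and values are made up): the hidden state h carries the history forward through the sequence.

```python
import math

def rnn_forward(xs, w_in=1.0, w_rec=0.5, b=0.0):
    """Apply the same transition h_t = tanh(w_in*x_t + w_rec*h_{t-1} + b)
    to each observation in turn; the hidden state carries history."""
    h = 0.0
    hs = []
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h + b)
        hs.append(h)
    return hs

print(rnn_forward([1.0, 0.0, -1.0]))
```

In a DRQN this hidden state is what lets the agent act on more than the current (partial) observation.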
Variance-covariance matrix in lmer
Mixed models are (generalized versions of) variance components models. You write down the fixed-effects part, add error terms that may be common to some groups of observations, add a link function if needed, and put this into a likelihood maximizer. The various variance structures you are describing, however, are the working correlation models of generalized estimating equations (GEEs), which trade off some of the flexibility of mixed/multilevel models for robustness of inference. With GEEs, you are only interested in conducting inference on the fixed part, and you are OK with not estimating the variance components, as you would in a mixed model. For these fixed effects, you get a robust/sandwich estimate that is appropriate even when your correlation structure is misspecified. Inference for the mixed model will break down if the model is misspecified, though. So while having a lot in common (a multilevel structure and the ability to address residual correlations), mixed models and GEEs are still somewhat distinct procedures. The R package that deals with GEEs is appropriately called gee, and in the list of possible values of its corstr option you will find the structures you mentioned. From the point of view of GEEs, lmer works with exchangeable correlations... at least when the model has two levels of hierarchy and only random intercepts are specified.
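For concreteness, the working correlation structures a corstr option chooses between are just patterned matrices. A sketch (Python, hypothetical helper names) of what "exchangeable" and "AR(1)" mean for a cluster of size 3:

```python
def exchangeable(n, rho):
    """Exchangeable working correlation: every pair of observations
    within a cluster shares the same correlation rho."""
    return [[1.0 if i == j else rho for j in range(n)] for i in range(n)]

def ar1(n, rho):
    """AR(1) working correlation: correlation decays as rho**|i-j|,
    so observations further apart in time are less correlated."""
    return [[rho ** abs(i - j) for j in range(n)] for i in range(n)]

for row in exchangeable(3, 0.4):
    print(row)
for row in ar1(3, 0.4):
    print(row)
```

The random-intercept-only lmer model corresponds to the first pattern: a single shared correlation within each cluster.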
Variance-covariance matrix in lmer
The flexLambda branch of lme4 provides such functionality. See https://github.com/lme4/lme4/issues/224 for examples of how to implement a specific structure of errors or random effects.
Variance-covariance matrix in lmer
To my knowledge lmer does not have an "easy" way to address this. Also, given that in most cases lmer makes heavy use of sparse matrices for Cholesky factorization, I would find it unlikely that it allows for totally unstructured VCVs. To address your question on a "default structure": there is no concept of a default; the structure you define is the structure you use. E.g. using random effects like $(1|RandEff_1)+(1|RandEff_2)$, where each random effect has 3 levels, will result in unnested and independent random effects and a diagonal random-effects VCV matrix of the form: $R = \begin{bmatrix} \sigma_{RE1}^2 & 0 & 0 & 0 & 0 & 0\\ 0 & \sigma_{RE1}^2 & 0 & 0 & 0 & 0\\ 0 & 0 & \sigma_{RE1}^2 & 0 & 0 & 0\\ 0 & 0 & 0 & \sigma_{RE2}^2 & 0 & 0 \\ 0 & 0 & 0 & 0 & \sigma_{RE2}^2 & 0\\ 0 & 0 & 0 & 0 & 0 & \sigma_{RE2}^2 \end{bmatrix}$ All is not lost with LMEs though: you can specify these VCV matrix attributes "easily" if you are using the R package MCMCglmm. Look at the CourseNotes.pdf, p. 70. That page gives some analogues of how the lme4 random-effects structure would be defined but, as you'll see yourself, lmer is less flexible than MCMCglmm in this matter. Half-way there is nlme's lme with its corStruct classes, e.g. corCompSymm, corAR1, etc. Fabian's response in this thread gives some more concise examples of lme4-based VCV specification but, as mentioned before, they are not as explicit as those in MCMCglmm or nlme.
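The block-diagonal matrix above is easy to construct explicitly. A sketch (Python, hypothetical helper name) for two independent random intercepts with 3 levels each:

```python
def re_vcv(var_re1, var_re2, levels=3):
    """Block-diagonal covariance of two independent random intercepts,
    each with `levels` levels, as implied by (1|RE1) + (1|RE2):
    the first block carries var_re1 on its diagonal, the second var_re2,
    and all off-diagonal entries are zero (independence)."""
    n = 2 * levels
    m = [[0.0] * n for _ in range(n)]
    for i in range(levels):
        m[i][i] = var_re1
        m[levels + i][levels + i] = var_re2
    return m

for row in re_vcv(2.0, 5.0):
    print(row)
```

Any richer structure (correlated blocks, AR(1) within a block, ...) would fill in the off-diagonal zeros, which is exactly what lmer's formula interface does not let you control directly.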
Is there an unbiased estimator of the Hellinger distance between two distributions?
No unbiased estimator either of $\mathfrak{H}$ or of $\mathfrak{H}^2$ exists for $f$ from any reasonably broad nonparametric class of distributions. We can show this with the beautifully simple argument of Bickel and Lehmann (1969). Unbiased estimation in convex families. The Annals of Mathematical Statistics, 40 (5) 1523–1535. (project euclid) Fix some distributions $F_0$, $F$, and $G$, with corresponding densities $f_0$, $f$, and $g$. Let $H(F)$ denote $\mathfrak{H}(f, f_0)$, and let $\hat H(\mathbf X)$ be some estimator of $H(F)$ based on $n$ iid samples $X_i \sim F$. Suppose that $\hat H$ is unbiased for samples from any distribution of the form $$M_\alpha := \alpha F + (1 - \alpha) G .$$ But then \begin{align} Q(\alpha) &= H(M_\alpha) \\&= \int_{x_1} \cdots \int_{x_n} \hat H(\mathbf X) \,\mathrm{d}M_\alpha(x_1) \cdots\mathrm{d}M_\alpha(x_n) \\&= \int_{x_1} \cdots \int_{x_n} \hat H(\mathbf X) \left[ \alpha \mathrm{d}F(x_1) + (1-\alpha) \mathrm{d}G(x_1) \right] \cdots \left[ \alpha \mathrm{d}F(x_n) + (1-\alpha) \mathrm{d}G(x_n) \right] \\&= \alpha^n \operatorname{\mathbb{E}}_{\mathbf X \sim F^n}[ \hat H(\mathbf X)] + \dots + (1 - \alpha)^n \operatorname{\mathbb{E}}_{\mathbf X \sim G^n}[ \hat H(\mathbf X)] ,\end{align} so that $Q(\alpha)$ must be a polynomial in $\alpha$ of degree at most $n$. Now, let's specialize to a reasonable case and show that the corresponding $Q$ is not polynomial. Let $F_0$ be some distribution which has constant density on $[-1, 1]$: $f_0(x) = c$ for all $\lvert x \rvert \le 1$. (Its behavior outside that range doesn't matter.) Let $F$ be some distribution supported only on $[-1, 0]$, and $G$ some distribution supported only on $[0, 1]$. 
Now \begin{align} Q(\alpha) &= \mathfrak{H}(m_\alpha, f_0) \\&= \sqrt{1 - \int_{\mathbb R} \sqrt{m_\alpha(x) f_0(x)} \mathrm{d}x} \\&= \sqrt{1 - \int_{-1}^0 \sqrt{c \, \alpha f(x)} \mathrm{d}x - \int_{0}^1 \sqrt{c \, (1 - \alpha) g(x)} \mathrm{d}x} \\&= \sqrt{1 - \sqrt{\alpha} B_F - \sqrt{1 - \alpha} B_G} ,\end{align} where $B_F := \int_{\mathbb R} \sqrt{f(x) f_0(x)} \mathrm{d}x$ and likewise for $B_G$. Note that $B_F > 0$, $B_G > 0$ for any distributions $F$, $G$ which have a density. $\sqrt{1 - \sqrt{\alpha} B_F - \sqrt{1 - \alpha} B_G}$ is not a polynomial of any finite degree. Thus, no estimator $\hat H$ can be unbiased for $\mathfrak{H}$ on all of the distributions $M_\alpha$ with finitely many samples. Likewise, because $1 - \sqrt{\alpha} B_F - \sqrt{1 - \alpha} B_G$ is also not a polynomial, there is no estimator for $\mathfrak{H}^2$ which is unbiased on all of the distributions $M_\alpha$ with finitely many samples. This excludes pretty much all reasonable nonparametric classes of distributions, except for those with densities bounded below (an assumption nonparametric analyses sometimes make). You could probably kill those classes too with a similar argument by just making the densities constant or something.
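The closed form for $Q(\alpha)$ can be checked numerically for a concrete instance of this setup (my choice, not from the original argument): $f_0 = 1/2$ on $[-1,1]$, $f$ uniform on $[-1,0]$, and $g$ uniform on $[0,1]$, so that $B_F = B_G = 1/\sqrt{2}$. A sketch comparing a midpoint-rule integration of $\mathfrak{H}(m_\alpha, f_0)$ against $\sqrt{1-\sqrt{\alpha}B_F-\sqrt{1-\alpha}B_G}$:

```python
import math

def hellinger_uniform_mix(alpha, steps=200000):
    """Numerically integrate H(m_alpha, f0) for f0 = 1/2 on [-1, 1],
    f = Uniform(-1, 0), g = Uniform(0, 1), m_alpha = alpha*f + (1-alpha)*g."""
    h = 2.0 / steps
    affinity = 0.0
    for k in range(steps):
        x = -1.0 + (k + 0.5) * h          # midpoint rule on [-1, 1]
        m = alpha if x < 0 else 1.0 - alpha  # density of the mixture at x
        affinity += math.sqrt(m * 0.5) * h
    return math.sqrt(1.0 - affinity)

def q_closed_form(alpha):
    bf = bg = math.sqrt(0.5)              # B_F = B_G = 1/sqrt(2) here
    return math.sqrt(1.0 - math.sqrt(alpha) * bf - math.sqrt(1.0 - alpha) * bg)

for a in (0.0, 0.3, 0.7, 1.0):
    print(a, hellinger_uniform_mix(a), q_closed_form(a))
```

The square-root dependence on $\alpha$ is visible directly, making plain that $Q(\alpha)$ is not a polynomial of any finite degree.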
Is there an unbiased estimator of the Hellinger distance between two distributions?
I don't know how to construct (if it exists) an unbiased estimator of the Hellinger distance. It seems possible to construct a consistent estimator. We have some fixed known density $f_0$, and a random sample $X_1,\dots,X_n$ from a density $f>0$. We want to estimate $$ H(f,f_0) = \sqrt{1 - \int_\mathscr{X} \sqrt{f(x)f_0(x)}\,dx} = \sqrt{1 - \int_\mathscr{X} \sqrt{\frac{f_0(x)}{f(x)}}\;\;f(x)\,dx} $$ $$ = \sqrt{1 - \mathbb{E}\left[\sqrt{\frac{f_0(X)}{f(X)}}\;\;\right] }\, , $$ where $X\sim f$. By the SLLN, we know that $$ \sqrt{1 - \frac{1}{n} \sum_{i=1}^n \sqrt{\frac{f_0(X_i)}{f(X_i)}}} \quad \rightarrow H(f,f_0) \, , $$ almost surely, as $n\to\infty$. Hence, a reasonable way to estimate $H(f,f_0)$ would be to take some density estimator $\hat{f_n}$ (such as a traditional kernel density estimator) of $f$, and compute $$ \hat{H}=\sqrt{1 - \frac{1}{n} \sum_{i=1}^n \sqrt{\frac{f_0(X_i)}{\hat{f_n}(X_i)}}} \, . $$
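A sketch of this plug-in estimator (Python, hand-rolled Gaussian KDE; all helper names and the bandwidth rule are my choices, not from the answer), using $f_0 = N(0,1)$ and data drawn from $f = N(0.5,1)$. For equal-variance normals the true distance has the closed form $H=\sqrt{1-e^{-(\mu_1-\mu_2)^2/8}}$, so the estimate can be sanity-checked:

```python
import math, random

def normal_pdf(x, mu=0.0, sd=1.0):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def kde(xs, x, bw):
    """Gaussian kernel density estimate of f at x."""
    return sum(normal_pdf(x, xi, bw) for xi in xs) / len(xs)

def hellinger_plugin(xs, f0, bw=None):
    """Plug-in estimate: H_hat = sqrt(1 - mean(sqrt(f0(X_i)/f_hat(X_i))))."""
    n = len(xs)
    if bw is None:
        mean = sum(xs) / n
        sd = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
        bw = 1.06 * sd * n ** (-1 / 5)   # Silverman-style rule of thumb
    mean_ratio = sum(math.sqrt(f0(x) / kde(xs, x, bw)) for x in xs) / n
    return math.sqrt(max(0.0, 1.0 - mean_ratio))  # clamp: KDE bias can push mean_ratio past 1

random.seed(42)
sample = [random.gauss(0.5, 1.0) for _ in range(1000)]
h_hat = hellinger_plugin(sample, normal_pdf)
h_true = math.sqrt(1 - math.exp(-0.5 ** 2 / 8))  # closed form for N(0.5,1) vs N(0,1)
print(h_hat, h_true)
```

Note the estimator inherits the bias of the KDE, so it is consistent but not unbiased, which is consistent with the impossibility argument in the other answer.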
How can I predict values from new inputs of a linear model in R?
If you want the predicted values for train_x = 1, 2, and 3, use predict(mod, data.frame(train_x = c(1, 2, 3))).
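For what predict is doing under the hood, here is a language-agnostic sketch (Python, with hypothetical helper names and made-up training data rather than R's lm/predict): fit the intercept and slope by least squares, then evaluate the fitted line at the new inputs.

```python
def ols_fit(xs, ys):
    """Simple least-squares fit of y = intercept + slope * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def ols_predict(coef, new_x):
    """Evaluate the fitted line at each new input value."""
    a, b = coef
    return [a + b * x for x in new_x]

train_x = [0, 1, 2, 3, 4]
train_y = [1.0, 3.1, 4.9, 7.2, 8.8]
coef = ols_fit(train_x, train_y)
print(ols_predict(coef, [1, 2, 3]))
```

This mirrors the R call: the new data must supply values for the same predictor name the model was fitted with.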
How to use weights in function lm in R?
I think the R help page of lm answers your question pretty well. The only requirement for weights is that the vector supplied must be the same length as the data. You can even supply just the name of a variable in the data set; R will take care of the rest, NA management, etc. You can also use formulas in the weights argument. Here is the example: x <-c(rnorm(10),NA) df <- data.frame(y=1+2*x+rnorm(11)/2, x=x, wght1=1:11) ## Fancy weights as numeric vector summary(lm(y~x,data=df,weights=(df$wght1)^(3/4))) # Fancy weights as formula on column of the data set summary(lm(y~x,data=df,weights=I(wght1^(3/4)))) # Mundane weights as the column of the data set summary(lm(y~x,data=df,weights=wght1)) Note that weights must be non-negative; R will produce an error for negative weights.
How to use weights in function lm in R?
What you suggest should work. See if this makes sense: lm(c(8000, 50000, 116000) ~ c(6, 7, 8)) lm(c(8000, 50000, 116000) ~ c(6, 7, 8), weight = c(123, 123, 246)) lm(c(8000, 50000, 116000, 116000) ~ c(6, 7, 8, 8)) The second line produces the same intercept and slope as the third line (distinct from the first line's result), by giving one observation relatively twice the weight of each of the other two observations, similar to the impact of duplicating the third observation.
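The proportional-weights point can be checked against the closed-form weighted least-squares solution (a sketch in Python with a hypothetical wls_fit helper rather than R's lm): weights (123, 123, 246) are proportional to (1, 1, 2), so the fit matches the one with the third observation duplicated.

```python
def wls_fit(xs, ys, ws):
    """Weighted least squares for y = a + b*x with observation weights ws.
    Only the relative sizes of the weights matter for the fitted line."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    return my - b * mx, b

x, y = [6, 7, 8], [8000, 50000, 116000]
weighted = wls_fit(x, y, [123, 123, 246])
duplicated = wls_fit([6, 7, 8, 8], [8000, 50000, 116000, 116000], [1, 1, 1, 1])
print(weighted)
print(duplicated)  # same intercept and slope as the weighted fit
```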
Why is LogLoss preferred over other proper scoring rules?
Arguments for the log score On the one hand, as kjetil b halvorsen writes, the log loss is just a reformulation of the log likelihood, which statisticians are very used to maximizing, so it is simply very natural as a KPI. (A somewhat more common convention is to minimize the score, in which case one takes the negative log of predicted probabilities, but the same point still applies.) On the other hand, in the single-class classification case, Merkle & Steyvers (2013, Decision Analysis) point out that the log score is just one member of an entire family of strictly proper scoring rules, which are indexed by two parameters $\alpha\geq 0$ and $\beta\geq 0$. Particular values of $\alpha$ and $\beta$ can be set based on the cost $c$ of misclassifications (based, in turn, on comparing probabilistic predictions to a threshold). Smaller values of $\alpha+\beta$ correspond to higher uncertainty in $c$... and the log score just happens to be the member of the family with $\alpha=\beta=0$. So at least in this classification case, you could say that the log score is a reasonable choice (within this family of scoring rules) that corresponds to the greatest uncertainty or agnosticism about the misclassification cost. On the third hand, Benedetti (2010, Monthly Weather Review) considers three properties a scoring rule should have: (1) it should be additive when adding a new event; (2) it should only depend on the probabilities assigned to events that actually occur and can be observed ("locality"); and (3) it should be proper (more strongly, Benedetti requires differentiability in predictions and a derivative of zero at the true probabilities). Benedetti (2010) then proceeds to show that the log loss is the only scoring rule that satisfies these conditions in the case of finitely many possible events. (To be honest, I don't quite follow Benedetti's derivation; specifically, I don't see how he arrives at equation (7).
But I'll put this edit in here as a pointer so smarter people than me can look at the paper.) Benedetti (2010) then explores connections to information theory and Kullback-Leibler divergence between the probabilistic prediction and the actual outcome distribution. He draws attention to one disadvantage of the Brier score: it depends on probabilities predicted for unobserved events and thus violates the locality requirement. Specifically, assume we have $R=3$ possible events and two different probabilistic predictions, $(0.2,0.4,0.4)$ and $(0.2,0.3,0.5)$. Suppose further that the first event actually occurs. Note that both predictions assign the same probability of $0.2$ to this event. Locality would require both predictions' scores to be identical, since they only differ on predicted probabilities for unobserved events. However, the multi-category Brier score for the first prediction is $$ (1-0.2)^2+0.4^2+0.4^2 = 0.96 $$ whereas the score for the second prediction is $$ (1-0.2)^2+0.3^2+0.5^2 = 0.98. $$ However, as Benedetti (2010) points out, the Brier score is a second-order approximation to the logarithmic skill score, which explains some of its appeal. Finally, one more argument for the log loss that I'm taking from Benedetti (2010, p. 208): if an event occurs which we had predicted to be completely impossible, $\hat{p}=0$, then the log loss is infinite, with no chance of being "saved" by other better predictions. Thus, using the log loss truly forces us to consider the possibility of extremely rare events and not just sweep them under the table. The Brier score, in contrast, is much more relaxed about observing events predicted to be impossible. For instance, Jewson (2004, arXiv:physics/0401046v1) gives the following example: assume a simple two-class prediction situation. The event occurs with a true probability of $p=0.1$. 
We have two competing predictions: the first is that the event is impossible, $\hat{p}_1=0$, the second overestimates the true probability, $\hat{p}_2=0.25$. Then the expected Brier score for the first prediction is $$ 0.1\times 1^2+0.9\times 0^2 = 0.1 $$ whereas the expected Brier score for the second prediction is $$ 0.1\times (1-0.25)^2+0.9\times 0.25^2 = 0.1125. $$ So the Brier score would actually prefer the first prediction, which is completely off base in that it considers an event with a $0.1$ probability of occurring as completely impossible. This does not make intuitive sense. Arguments for the Brier score Of course, the Brier score also has advantages. For instance, the log score explodes if we observe an event that we thought would be impossible, because we then take the log of zero. To some, that's a feature (see above), to others, it's a bug. The Brier score will still be defined if an "impossible" event occurs. The Brier score is conceptually very close to the Mean Squared Error, and can in fact be expressed as such (between a vector of probabilistic predictions and a 0-1 vector of which class actually occurred). This is easy to understand. 
Selten (1998, Experimental Economics) offers four axioms we could require a scoring rule to fulfill: it should be symmetric if classes are reordered adding a class with zero predicted and true probability should not change the score if the true class probabilities are $p=(p_1, \dots, p_k)$ and we predict $\hat{p}=(\hat{p}_1, \dots, \hat{p}_k)$, then the score should be positive (i.e., "bad", see above on conventions about positive and negative orientation) - this is strict properness, which Selten (1998) calls "incentive compatibility" if the true class probabilities are $p$ and we predict $\hat{p}$, then the score should be equal to the case where the true probabilities are $\hat{p}$ and we predict $p$ (symmetry; Selten calls this "neutrality") Selten (1998) then shows that the Brier score is the only one that satisfies these axioms, up to scaling. The log score, of course, violates the fourth requirement, because in general $$ p\log \hat p \neq \hat p\log p. $$ So one way of looking at it is whether we prefer Benedetti's argument that a scoring rule should be "local" (i.e., not be influenced by predicted probabilities for unobserved events), or Selten's argument that it should be symmetric (i.e., give the same result if we exchange the predicted and the true probability vector). In the first case, we should use the log score, in the second case the Brier score. I personally find Selten's requirement of symmetry (the fourth bullet point in the Brier section above) unnecessary, and I consider the log score explosion a feature and not a bug (see above). Thus, I prefer the log score.
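Jewson's two-class example above is easy to check numerically. A small Python sketch (the helper functions are my own, not from any of the cited papers) computes the expected Brier scores for both forecasts, and the log losses when the event occurs:

```python
import math

def log_loss(p_hat, outcome):
    """Negative log of the probability assigned to the realized outcome."""
    p = p_hat if outcome == 1 else 1.0 - p_hat
    return float("inf") if p == 0 else -math.log(p)

def brier(p_hat, outcome):
    """Squared error between predicted probability and the 0/1 outcome."""
    return (p_hat - outcome) ** 2

# Jewson's example: true event probability 0.1,
# prediction 1 says "impossible" (0.0), prediction 2 overestimates (0.25).
p_true = 0.1
expected_brier_1 = p_true * brier(0.0, 1) + (1 - p_true) * brier(0.0, 0)
expected_brier_2 = p_true * brier(0.25, 1) + (1 - p_true) * brier(0.25, 0)
print(expected_brier_1)  # 0.1    -> Brier prefers the "impossible" forecast
print(expected_brier_2)  # 0.1125

# The log score instead assigns infinite loss to the "impossible" forecast
# the moment the event occurs even once.
print(log_loss(0.0, 1))   # inf
print(log_loss(0.25, 1))  # -ln(0.25) ~ 1.386
```

This makes both points concrete: the Brier score's expected value actually rewards the forecast that rules the event out, while the log score punishes it without mercy.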
Why is LogLoss preferred over other proper scoring rules?
Arguments for the log score On the one hand, as kjetil b halvorsen writes, the log loss is just a reformulation of the log likelihood, which statisticians are very used to maximizing, so it is simply
Why is LogLoss preferred over other proper scoring rules? Arguments for the log score On the one hand, as kjetil b halvorsen writes, the log loss is just a reformulation of the log likelihood, which statisticians are very used to maximizing, so it is simply very natural as a KPI. (A somewhat more common convention is to minimize the score, in which case, one takes the negative log of predicted probabilities, but the same point still applies.) On the other hand, in the single-class classification case, Merkle & Steyvers (2013, Decision Analysis) point out that the log score is just one member of an entire family of strictly proper scoring rules, which are indexed by two parameters $\alpha\geq 0$ and $\beta\geq 0$. Particular values of $\alpha$ and $\beta$ can be set based on the cost $c$ of misclassifications (based, in turn, on comparing probabilistic predictions to a threshold). Smaller values of $\alpha+\beta$ correspond to higher uncertainty in $c$... and the log score just happens to be the member of the family with $\alpha=\beta=0$. So at least in this classification case, you could say that the log score is a reasonable choice (within this family of scoring rules) that corresponds to the highest uncertainty or agnosticism about the misclassification cost. On the third hand, Benedetti (2010, Monthly Weather Review) considers three properties a scoring rule should have: it should be additive when adding a new event it should only depend on the probabilities assigned to events that actually occur and can be observed ("locality") and it should be proper (more strongly, Benedetti requires differentiability in predictions and a derivative of zero at the true probabilities) Benedetti (2010) then proceeds to show that the log loss is the only scoring rule that satisfies these conditions in the case of finitely many possible events. (To be honest, I don't quite follow Benedetti's derivation; specifically, I don't see how he arrives at equation (7). 
But I'll put this edit in here as a pointer so smarter people than me can look at the paper.) Benedetti (2010) then explores connections to information theory and Kullback-Leibler divergence between the probabilistic prediction and the actual outcome distribution. He draws attention to one disadvantage of the Brier score: it depends on probabilities predicted for unobserved events and thus violates the locality requirement. Specifically, assume we have $R=3$ possible events and two different probabilistic predictions, $(0.2,0.4,0.4)$ and $(0.2,0.3,0.5)$. Suppose further that the first event actually occurs. Note that both predictions assign the same probability of $0.2$ to this event. Locality would require both predictions' scores to be identical, since they only differ on predicted probabilities for unobserved events. However, the multi-category Brier score for the first prediction is $$ (1-0.2)^2+0.4^2+0.4^2 = 0.96 $$ whereas the score for the second prediction is $$ (1-0.2)^2+0.3^2+0.5^2 = 0.98. $$ However, as Benedetti (2010) points out, the Brier score is a second-order approximation to the logarithmic skill score, which explains some of its appeal. Finally, one more argument for the log loss that I'm taking from Benedetti (2010, p. 208): if an event occurs which we had predicted to be completely impossible, $\hat{p}=0$, then the log loss is infinite, with no chance of being "saved" by other better predictions. Thus, using the log loss truly forces us to consider the possibility of extremely rare events and not just sweep them under the table. The Brier score, in contrast, is much more relaxed about observing events predicted to be impossible. For instance, Jewson (2004, arXiv:physics/0401046v1) gives the following example: assume a simple two-class prediction situation. The event occurs with a true probability of $p=0.1$. 
We have two competing predictions: the first is that the event is impossible, $\hat{p}_1=0$, the second overestimates the true probability, $\hat{p}_2=0.25$. Then the expected Brier score for the first prediction is $$ 0.1\times 1^2+0.9\times 0^2 = 0.1 $$ whereas the expected Brier score for the second prediction is $$ 0.1\times (1-0.25)^2+0.9\times 0.25^2 = 0.1125. $$ So the Brier score would actually prefer the first prediction, which is completely off base in that it considers an event with a $0.1$ probability of occurring as completely impossible. This does not make intuitive sense. Arguments for the Brier score Of course, the Brier score also has advantages. For instance, the log score explodes if we observe an event that we thought would be impossible, because we then take the log of zero. To some, that's a feature (see above), to others, it's a bug. The Brier score will still be defined if an "impossible" event occurs. The Brier score is conceptually very close to the Mean Squared Error, and can in fact be expressed as such (between a vector of probabilistic predictions and a 0-1 vector of which class actually occurred). This is easy to understand. 
Selten (1998, Experimental Economics) offers four axioms we could require a scoring rule to fulfill: it should be symmetric if classes are reordered adding a class with zero predicted and true probability should not change the score if the true class probabilities are $p=(p_1, \dots, p_k)$ and we predict $\hat{p}=(\hat{p}_1, \dots, \hat{p}_k)$, then the score should be positive (i.e., "bad", see above on conventions about positive and negative orientation) - this is strict properness, which Selten (1998) calls "incentive compatibility" if the true class probabilities are $p$ and we predict $\hat{p}$, then the score should be equal to the case where the true probabilities are $\hat{p}$ and we predict $p$ (symmetry; Selten calls this "neutrality") Selten (1998) then shows that the Brier score is the only one that satisfies these axioms, up to scaling. The log score, of course, violates the fourth requirement, because in general $$ p\log \hat p \neq \hat p\log p. $$ So one way of looking at it is whether we prefer Benedetti's argument that a scoring rule should be "local" (i.e., not be influenced by predicted probabilities for unobserved events), or Selten's argument that it should be symmetric (i.e., give the same result if we exchange the predicted and the true probability vector). In the first case, we should use the log score, in the second case the Brier score. I personally find Selten's requirement of symmetry (the fourth bullet point in the Brier section above) unnecessary, and I consider the log score explosion a feature and not a bug (see above). Thus, I prefer the log score.
Why is LogLoss preferred over other proper scoring rules? Arguments for the log score On the one hand, as kjetil b halvorsen writes, the log loss is just a reformulation of the log likelihood, which statisticians are very used to maximizing, so it is simply
13,460
At What Level is a $\chi^2$ test Mathematically Identical to a $z$-test of Proportions?
Let us have a 2x2 frequency table where columns are two groups of respondents and rows are the two responses "Yes" and "No". And we've turned the frequencies into the proportions within group, i.e. into the vertical profiles: Gr1 Gr2 Total Yes p1 p2 p No q1 q2 q -------------- 100% 100% 100% n1 n2 N The usual (not Yates corrected) $\chi^2$ of this table, after you substitute proportions instead of frequencies in its formula, looks like this: $$n_1[\frac{(p_1-p)^2}{p}+\frac{(q_1-q)^2}{q}]+n_2[\frac{(p_2-p)^2}{p}+\frac{(q_2-q)^2}{q}]= \frac{n_1(p_1-p)^2+n_2(p_2-p)^2}{pq}.$$ Remember that $p= \frac{n_1p_1+n_2p_2}{n_1+n_2}$, the element of the weighted average profile of the two profiles (p1,q1) and (p2,q2), and plug it into the formula, to obtain $$...= \frac{(p_1-p_2)^2(n_1^2n_2+n_1n_2^2)}{pqN^2}$$ Divide both numerator and denominator by the $(n_1^2n_2+n_1n_2^2)$ and get $$\frac{(p_1-p_2)^2}{pq(1/n_1+1/n_2)}=Z^2,$$ the squared z-statistic of the z-test of proportions for "Yes" response. Thus, the 2x2 homogeneity Chi-square statistic (and test) is equivalent to the z-test of two proportions. The so-called expected frequencies computed in the chi-square test in a given column are the weighted (by the group n) average vertical profile (i.e. the profile of the "average group") multiplied by that group's n. Thus, it comes out that chi-square tests the deviation of each of the two groups' profiles from this average group profile, which is equivalent to testing the groups' profiles difference from each other, which is the z-test of proportions. This is one demonstration of a link between a variable association measure (chi-square) and a group difference measure (z-test statistic). Attribute associations and group differences are (often) the two facets of the same thing. 
(Showing the expansion in the first line above, By @Antoni's request): $n_1[\frac{(p_1-p)^2}{p}+\frac{(q_1-q)^2}{q}]+n_2[\frac{(p_2-p)^2}{p}+\frac{(q_2-q)^2}{q}] = \frac{n_1(p_1-p)^2q}{pq}+\frac{n_1(q_1-q)^2p}{pq}+\frac{n_2(p_2-p)^2q}{pq}+\frac{n_2(q_2-q)^2p}{pq} = \frac{n_1(p_1-p)^2(1-p)+n_1(1-p_1-1+p)^2p+n_2(p_2-p)^2(1-p)+n_2(1-p_2-1+p)^2p}{pq} = \frac{n_1(p_1-p)^2(1-p)+n_1(p-p_1)^2p+n_2(p_2-p)^2(1-p)+n_2(p-p_2)^2p}{pq} = \frac{[n_1(p_1-p)^2][(1-p)+p]+[n_2(p_2-p)^2][(1-p)+p]}{pq} = \frac{n_1(p_1-p)^2+n_2(p_2-p)^2}{pq}.$
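The identity $\chi^2 = Z^2$ can also be verified numerically. A quick Python check with a made-up 2x2 table (counts are illustrative only), computing the Pearson chi-square from observed vs. expected counts and the squared pooled z-statistic directly from the formulas above:

```python
# Hypothetical table: 30/100 "Yes" in group 1, 60/150 "Yes" in group 2
n1, n2 = 100, 150
yes1, yes2 = 30, 60
N = n1 + n2
p1, p2 = yes1 / n1, yes2 / n2
p = (yes1 + yes2) / N          # pooled "Yes" proportion
q = 1 - p

# Pearson chi-square from observed vs expected cell counts
observed = [yes1, n1 - yes1, yes2, n2 - yes2]
expected = [n1 * p, n1 * q, n2 * p, n2 * q]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Squared z-statistic for two proportions (pooled variance)
z2 = (p1 - p2) ** 2 / (p * q * (1 / n1 + 1 / n2))

print(chi2, z2)  # identical up to floating-point error
```

Both come out to about 2.604 for these counts, as the algebra guarantees.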
At What Level is a $\chi^2$ test Mathematically Identical to a $z$-test of Proportions?
Let us have a 2x2 frequency table where columns are two groups of respondents and rows are the two responses "Yes" and "No". And we've turned the frequencies into the proportions within group, i.e. in
At What Level is a $\chi^2$ test Mathematically Identical to a $z$-test of Proportions? Let us have a 2x2 frequency table where columns are two groups of respondents and rows are the two responses "Yes" and "No". And we've turned the frequencies into the proportions within group, i.e. into the vertical profiles: Gr1 Gr2 Total Yes p1 p2 p No q1 q2 q -------------- 100% 100% 100% n1 n2 N The usual (not Yates corrected) $\chi^2$ of this table, after you substitute proportions instead of frequencies in its formula, looks like this: $$n_1[\frac{(p_1-p)^2}{p}+\frac{(q_1-q)^2}{q}]+n_2[\frac{(p_2-p)^2}{p}+\frac{(q_2-q)^2}{q}]= \frac{n_1(p_1-p)^2+n_2(p_2-p)^2}{pq}.$$ Remember that $p= \frac{n_1p_1+n_2p_2}{n_1+n_2}$, the element of the weighted average profile of the two profiles (p1,q1) and (p2,q2), and plug it into the formula, to obtain $$...= \frac{(p_1-p_2)^2(n_1^2n_2+n_1n_2^2)}{pqN^2}$$ Divide both numerator and denominator by the $(n_1^2n_2+n_1n_2^2)$ and get $$\frac{(p_1-p_2)^2}{pq(1/n_1+1/n_2)}=Z^2,$$ the squared z-statistic of the z-test of proportions for "Yes" response. Thus, the 2x2 homogeneity Chi-square statistic (and test) is equivalent to the z-test of two proportions. The so-called expected frequencies computed in the chi-square test in a given column are the weighted (by the group n) average vertical profile (i.e. the profile of the "average group") multiplied by that group's n. Thus, it comes out that chi-square tests the deviation of each of the two groups' profiles from this average group profile, which is equivalent to testing the groups' profiles difference from each other, which is the z-test of proportions. This is one demonstration of a link between a variable association measure (chi-square) and a group difference measure (z-test statistic). Attribute associations and group differences are (often) the two facets of the same thing. 
(Showing the expansion in the first line above, By @Antoni's request): $n_1[\frac{(p_1-p)^2}{p}+\frac{(q_1-q)^2}{q}]+n_2[\frac{(p_2-p)^2}{p}+\frac{(q_2-q)^2}{q}] = \frac{n_1(p_1-p)^2q}{pq}+\frac{n_1(q_1-q)^2p}{pq}+\frac{n_2(p_2-p)^2q}{pq}+\frac{n_2(q_2-q)^2p}{pq} = \frac{n_1(p_1-p)^2(1-p)+n_1(1-p_1-1+p)^2p+n_2(p_2-p)^2(1-p)+n_2(1-p_2-1+p)^2p}{pq} = \frac{n_1(p_1-p)^2(1-p)+n_1(p-p_1)^2p+n_2(p_2-p)^2(1-p)+n_2(p-p_2)^2p}{pq} = \frac{[n_1(p_1-p)^2][(1-p)+p]+[n_2(p_2-p)^2][(1-p)+p]}{pq} = \frac{n_1(p_1-p)^2+n_2(p_2-p)^2}{pq}.$
At What Level is a $\chi^2$ test Mathematically Identical to a $z$-test of Proportions? Let us have a 2x2 frequency table where columns are two groups of respondents and rows are the two responses "Yes" and "No". And we've turned the frequencies into the proportions within group, i.e. in
13,461
How to achieve strictly positive forecasts?
With the forecast package for R, simply set lambda=0 when fitting a model. For example: fit <- auto.arima(x, lambda=0) forecast(fit) Many of the functions in the package allow the lambda argument. When the lambda argument is specified, a Box-Cox transformation is used. The value $\lambda=0$ specifies a log transformation. So setting lambda=0 means the logged data are modelled, and when forecasts are produced, they are back-transformed to the original space. See http://www.otexts.org/fpp/2/4 for further discussion.
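The mechanics behind lambda=0 can be sketched outside the forecast package too: fit whatever model you like to the logged series, then back-transform the forecasts with exp(). A Python toy (illustrative data and a deliberately naive random-walk-with-drift "model", not auto.arima) showing why the back-transformed forecasts are strictly positive by construction:

```python
import math

# Illustrative monthly series (made-up positive data)
x = [112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0]
log_x = [math.log(v) for v in x]

# Toy model: random walk with drift fitted on the log scale
drift = (log_x[-1] - log_x[0]) / (len(log_x) - 1)
h = 3  # forecast horizon
log_forecasts = [log_x[-1] + drift * (i + 1) for i in range(h)]

# Back-transforming with exp() guarantees strictly positive forecasts,
# no matter what the model produced on the log scale.
forecasts = [math.exp(f) for f in log_forecasts]
print(forecasts)
```

Since exp() maps any real number to a positive one, positivity holds for any model fitted on the log scale, which is exactly what lambda=0 exploits.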
How to achieve strictly positive forecasts?
With the forecast package for R, simply set lambda=0 when fitting a model. For example: fit <- auto.arima(x, lambda=0) forecast(fit) Many of the functions in the package allow the lambda argument. Wh
How to achieve strictly positive forecasts? With the forecast package for R, simply set lambda=0 when fitting a model. For example: fit <- auto.arima(x, lambda=0) forecast(fit) Many of the functions in the package allow the lambda argument. When the lambda argument is specified, a Box-Cox transformation is used. The value $\lambda=0$ specifies a log transformation. So setting lambda=0 means the logged data are modelled, and when forecasts are produced, they are back-transformed to the original space. See http://www.otexts.org/fpp/2/4 for further discussion.
How to achieve strictly positive forecasts? With the forecast package for R, simply set lambda=0 when fitting a model. For example: fit <- auto.arima(x, lambda=0) forecast(fit) Many of the functions in the package allow the lambda argument. Wh
13,462
Can I convert a covariance matrix into uncertainties for variables?
There is no single number that encompasses all of the covariance information - there are 6 pieces of information, so you'd always need 6 numbers. However there are a number of things you could consider doing. Firstly, the error (variance) in any particular direction $i$, is given by $\sigma_i^2 = \mathbf{e}_i ^ \top \Sigma \mathbf{e}_i$ Where $\mathbf{e}_i$ is the unit vector in the direction of interest. Now if you look at this for your three basic coordinates $(x,y,z)$ then you can see that: $\sigma_x^2 = \left[\begin{matrix} 1 \\ 0 \\ 0 \end{matrix}\right]^\top \left[\begin{matrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{matrix}\right] \left[\begin{matrix} 1 \\ 0 \\ 0 \end{matrix}\right] = \sigma_{xx}$ $\sigma_y^2 = \sigma_{yy}$ $\sigma_z^2 = \sigma_{zz}$ So the error in each of the directions considered separately is given by the diagonal of the covariance matrix. This makes sense intuitively - if I am only considering one direction, then changing just the correlation should make no difference. You are correct in noting that simply stating: $x = \mu_x \pm \sigma_x$ $y = \mu_y \pm \sigma_y$ $z = \mu_z \pm \sigma_z$ Does not imply any correlation between those three statements - each statement on its own is perfectly correct, but taken together some information (correlation) has been dropped. If you will be taking many measurements each with the same error correlation (supposing that this comes from the measurement equipment) then one elegant possibility is to rotate your coordinates so as to diagonalise your covariance matrix. Then you can present errors in each of those directions separately since they will now be uncorrelated. As to taking the "vector error" by adding in quadrature I'm not sure I understand what you are saying. These three errors are errors in different quantities - they don't cancel each other out and so I don't see how you can add them together. 
Do you mean error in the distance?
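Both points above, the directional-variance formula $\mathbf{e}^\top \Sigma \mathbf{e}$ picking out the diagonal, and diagonalisation via rotation to the eigenbasis, can be illustrated with a short numpy sketch (the covariance matrix below is made up):

```python
import numpy as np

# Hypothetical 3x3 covariance matrix of (x, y, z) measurement errors
Sigma = np.array([[4.0, 1.2, 0.5],
                  [1.2, 9.0, 2.0],
                  [0.5, 2.0, 1.0]])

# Variance in any direction e is  e^T Sigma e ; the coordinate axes
# simply pick out the diagonal entries of Sigma.
for i, name in enumerate("xyz"):
    e = np.zeros(3)
    e[i] = 1.0
    print(f"sigma_{name}^2 = {e @ Sigma @ e}")  # equals Sigma[i, i]

# Rotating to the eigenbasis diagonalises Sigma: in those coordinates
# the three errors are uncorrelated and can be reported separately.
eigvals, eigvecs = np.linalg.eigh(Sigma)
Sigma_rot = eigvecs.T @ Sigma @ eigvecs
print(np.round(Sigma_rot, 10))  # diagonal, entries equal to eigvals
```

In the rotated frame the off-diagonal (correlation) terms vanish, so the three "plus-minus" statements lose no information there.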
Can I convert a covariance matrix into uncertainties for variables?
There is no single number that encompasses all of the covariance information - there are 6 pieces of information, so you'd always need 6 numbers. However there are a number of things you could conside
Can I convert a covariance matrix into uncertainties for variables? There is no single number that encompasses all of the covariance information - there are 6 pieces of information, so you'd always need 6 numbers. However there are a number of things you could consider doing. Firstly, the error (variance) in any particular direction $i$, is given by $\sigma_i^2 = \mathbf{e}_i ^ \top \Sigma \mathbf{e}_i$ Where $\mathbf{e}_i$ is the unit vector in the direction of interest. Now if you look at this for your three basic coordinates $(x,y,z)$ then you can see that: $\sigma_x^2 = \left[\begin{matrix} 1 \\ 0 \\ 0 \end{matrix}\right]^\top \left[\begin{matrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{matrix}\right] \left[\begin{matrix} 1 \\ 0 \\ 0 \end{matrix}\right] = \sigma_{xx}$ $\sigma_y^2 = \sigma_{yy}$ $\sigma_z^2 = \sigma_{zz}$ So the error in each of the directions considered separately is given by the diagonal of the covariance matrix. This makes sense intuitively - if I am only considering one direction, then changing just the correlation should make no difference. You are correct in noting that simply stating: $x = \mu_x \pm \sigma_x$ $y = \mu_y \pm \sigma_y$ $z = \mu_z \pm \sigma_z$ Does not imply any correlation between those three statements - each statement on its own is perfectly correct, but taken together some information (correlation) has been dropped. If you will be taking many measurements each with the same error correlation (supposing that this comes from the measurement equipment) then one elegant possibility is to rotate your coordinates so as to diagonalise your covariance matrix. Then you can present errors in each of those directions separately since they will now be uncorrelated. As to taking the "vector error" by adding in quadrature I'm not sure I understand what you are saying. 
These three errors are errors in different quantities - they don't cancel each other out and so I don't see how you can add them together. Do you mean error in the distance?
Can I convert a covariance matrix into uncertainties for variables? There is no single number that encompasses all of the covariance information - there are 6 pieces of information, so you'd always need 6 numbers. However there are a number of things you could conside
13,463
Survival analysis: continuous vs discrete time
The choice of the survival model should be guided by the underlying phenomenon. In this case it appears to be continuous, even if the data is collected in a somewhat discrete manner. A resolution of one month would be just fine over a 5-year period. However, the large number of ties at 6 and 12 months makes one wonder whether you really have a 1-month precision (the ties at 0 are expected - that's a special value where a relatively large number of deaths actually happen). I am not quite sure what you can do about that as this most likely reflects after-the-fact rounding rather than interval censoring.
Survival analysis: continuous vs discrete time
The choice of the survival model should be guided by the underlying phenomenon. In this case it appears to be continuous, even if the data is collected in a somewhat discrete manner. A resolution of o
Survival analysis: continuous vs discrete time The choice of the survival model should be guided by the underlying phenomenon. In this case it appears to be continuous, even if the data is collected in a somewhat discrete manner. A resolution of one month would be just fine over a 5-year period. However, the large number of ties at 6 and 12 months makes one wonder whether you really have a 1-month precision (the ties at 0 are expected - that's a special value where a relatively large number of deaths actually happen). I am not quite sure what you can do about that as this most likely reflects after-the-fact rounding rather than interval censoring.
Survival analysis: continuous vs discrete time The choice of the survival model should be guided by the underlying phenomenon. In this case it appears to be continuous, even if the data is collected in a somewhat discrete manner. A resolution of o
13,464
Survival analysis: continuous vs discrete time
I suspect if you use continuous time models you will want to use interval censoring, reflecting the fact that you don't know the exact time of failure, just an interval in which the failure occurred. If you fit parametric regression models with interval censoring using maximum likelihood, the tied survival times are not an issue IIRC.
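A minimal sketch of why interval censoring sidesteps the ties problem: an observation known only to fall in $(a, b]$ contributes $F(b) - F(a)$ to the likelihood, so many observations sharing the same interval are unremarkable. The Python toy below (hypothetical data in months, exponential model, crude grid search rather than a proper optimiser) illustrates the idea:

```python
import math

def exp_cdf(t, rate):
    """CDF of the exponential distribution."""
    return 1.0 - math.exp(-rate * t)

# Hypothetical interval-censored failure times with heavy "ties":
# each (a, b] says only that failure occurred in that month.
intervals = [(5, 6), (5, 6), (11, 12), (11, 12), (0, 1)]

def log_lik(rate):
    # Each observation contributes log(F(b) - F(a))
    return sum(math.log(exp_cdf(b, rate) - exp_cdf(a, rate))
               for a, b in intervals)

# Crude grid-search MLE for the exponential rate
rates = [r / 1000 for r in range(1, 1001)]
mle = max(rates, key=log_lik)
print(mle, log_lik(mle))
```

For these data the grid MLE lands near 0.145 per month; the point is that the likelihood is perfectly well defined despite the duplicated intervals.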
Survival analysis: continuous vs discrete time
I suspect if you use continuous time models you will want to use interval censoring, reflecting the fact that you don't know the exact time of failure, just an interval in which the failure occurred. 
Survival analysis: continuous vs discrete time I suspect if you use continuous time models you will want to use interval censoring, reflecting the fact that you don't know the exact time of failure, just an interval in which the failure occurred. If you fit parametric regression models with interval censoring using maximum likelihood, the tied survival times are not an issue IIRC.
Survival analysis: continuous vs discrete time I suspect if you use continuous time models you will want to use interval censoring, reflecting the fact that you don't know the exact time of failure, just an interval in which the failure ocurred.
13,465
Survival analysis: continuous vs discrete time
There will be tied survival times in most analyses, but big, clear chunks of ties at particular events are troubling. I would think long and hard about the study itself, how it's collecting data, etc. Because, outside of some methodological needs to use one type of time or the other, how you model survival should depend on whether or not the underlying process is discrete or continuous in the world.
Survival analysis: continuous vs discrete time
There will be tied survival times in most analyses, but big, clear chunks of ties at particular events are troubling. I would think long and hard about the study itself, how it's collecting data, etc. B
Survival analysis: continuous vs discrete time There will be tied survival times in most analyses, but big, clear chunks of ties at particular events are troubling. I would think long and hard about the study itself, how it's collecting data, etc. Because, outside of some methodological needs to use one type of time or the other, how you model survival should depend on whether or not the underlying process is discrete or continuous in the world.
Survival analysis: continuous vs discrete time There will be tied survival times in most analyses, but big, clear chunks of ties at particular events are troubling. I would think long and hard about the study itself, how it's collecting data, etc. B
13,466
Survival analysis: continuous vs discrete time
If you have covariates that vary over time for some individuals (e.g. family income may vary in your example over the lifetime of a child), survival models (parametric and the Cox model) require you to slice up the data into discrete intervals defined by the varying covariates. I found this pdf of lecture notes by German Rodriguez helpful.
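The slicing described above (often called episode splitting, or the counting-process data format) can be sketched in a few lines of Python; the subject, covariate values, and change times below are hypothetical:

```python
# One row per interval over which the time-varying covariate is constant.
def split_episodes(follow_up, event, changes):
    """changes: list of (time, value) pairs, sorted, first entry at time 0.
    Returns rows of (start, stop, value, event_at_stop)."""
    rows = []
    for i, (start, value) in enumerate(changes):
        if start >= follow_up:
            break
        stop = changes[i + 1][0] if i + 1 < len(changes) else follow_up
        stop = min(stop, follow_up)
        # The event flag is attached only to the final interval
        rows.append((start, stop, value, event and stop == follow_up))
    return rows

# Child followed for 10 years, event at year 10; family income bracket
# changes at years 0, 3 and 7.
print(split_episodes(10, True, [(0, "low"), (3, "mid"), (7, "high")]))
# -> [(0, 3, 'low', False), (3, 7, 'mid', False), (7, 10, 'high', True)]
```

Each row can then be fed to the survival model as a left-truncated, right-censored interval with a constant covariate value.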
Survival analysis: continuous vs discrete time
If you have covariates that vary over time for some individuals (e.g. family income may vary in your example over the lifetime of a child), survival models (parametric and the Cox model) require y
Survival analysis: continuous vs discrete time If you have covariates that vary over time for some individuals (e.g. family income may vary in your example over the lifetime of a child), survival models (parametric and the Cox model) require you to slice up the data into discrete intervals defined by the varying covariates. I found this pdf of lecture notes by German Rodriguez helpful.
Survival analysis: continuous vs discrete time If you have covariates that vary over time for some individuals (e.g. family income may vary in your example over the lifetime of a child), survival models (parametric and the Cox model) require y
13,467
Interpreting the drop1 output in R
drop1 gives you a comparison of models based on the AIC criterion, and when using the option test="F" you add a "type II ANOVA" to it, as explained in the help files. As long as you only have continuous variables, this table is exactly equivalent to summary(lm1), as the F-values are just those T-values squared. P-values are exactly the same. So what to do with it? Interpret it in exactly that way: it expresses, in a way, whether the model without that term is "significantly" different from the model with that term. Mind the "" around significantly, as the significance here cannot be interpreted as most people think. (multi-testing problem and all...) And regarding the AIC : the lower the better seems more like it. AIC is a value that goes for the model, not for the variable. So the best model from that output would be the one without the variable examination. Mind you, the calculations of both the AIC and the F statistic are different from those of the R functions AIC(lm1) and anova(lm1), respectively. For AIC(), that information is given on the help pages of extractAIC(). For the anova() function, it's rather obvious that type I and type II SS are not the same. I'm trying not to be rude, but if you don't understand what is explained in the help files there, you shouldn't be using the function in the first place. Stepwise regression is incredibly tricky, jeopardizing your p-values in a most profound manner. So again, do not base yourself on the p-values. Your model should reflect your hypothesis and not the other way around.
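The claim that the drop1 F-values are just the summary t-values squared is easy to verify by hand. The following Python sketch (simulated data, ordinary least squares via numpy rather than R's lm) computes the drop1-style F for removing one continuous term and the corresponding t-statistic from the full model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

def ols_rss(X, y):
    """Least-squares fit; returns coefficients and residual sum of squares."""
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, float(rss[0])

X_full = np.column_stack([np.ones(n), x1, x2])
X_drop = np.column_stack([np.ones(n), x1])        # model without x2

beta, rss_full = ols_rss(X_full, y)
_, rss_drop = ols_rss(X_drop, y)

# drop1-style F test for removing x2 (1 df in the numerator)
df_resid = n - X_full.shape[1]
F = (rss_drop - rss_full) / (rss_full / df_resid)

# t-statistic for x2 in the full model
sigma2 = rss_full / df_resid
cov = sigma2 * np.linalg.inv(X_full.T @ X_full)
t = beta[2] / np.sqrt(cov[2, 2])

print(F, t ** 2)  # identical: for one continuous term, F == t^2
```

This identity breaks down once a term has more than one degree of freedom (e.g. a multi-level factor), which is exactly when drop1 adds something summary() does not show.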
Interpreting the drop1 output in R
drop1 gives you a comparison of models based on the AIC criterion, and when using the option test="F" you add a "type II ANOVA" to it, as explained in the help files. As long as you only have continuo
Interpreting the drop1 output in R drop1 gives you a comparison of models based on the AIC criterion, and when using the option test="F" you add a "type II ANOVA" to it, as explained in the help files. As long as you only have continuous variables, this table is exactly equivalent to summary(lm1), as the F-values are just those T-values squared. P-values are exactly the same. So what to do with it? Interpret it in exactly that way: it expresses, in a way, whether the model without that term is "significantly" different from the model with that term. Mind the "" around significantly, as the significance here cannot be interpreted as most people think. (multi-testing problem and all...) And regarding the AIC : the lower the better seems more like it. AIC is a value that goes for the model, not for the variable. So the best model from that output would be the one without the variable examination. Mind you, the calculations of both the AIC and the F statistic are different from those of the R functions AIC(lm1) and anova(lm1), respectively. For AIC(), that information is given on the help pages of extractAIC(). For the anova() function, it's rather obvious that type I and type II SS are not the same. I'm trying not to be rude, but if you don't understand what is explained in the help files there, you shouldn't be using the function in the first place. Stepwise regression is incredibly tricky, jeopardizing your p-values in a most profound manner. So again, do not base yourself on the p-values. Your model should reflect your hypothesis and not the other way around.
Interpreting the drop1 output in R drop1 gives you a comparison of models based on the AIC criterion, and when using the option test="F" you add a "type II ANOVA" to it, as explained in the help files. As long as you only have continuo
13,468
Interpreting the drop1 output in R
For reference, these are the values that are included in the table:

Df refers to degrees of freedom: "the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary."

The Sum of Sq column refers to the sum of squares (or more precisely, the sum of squared deviations). In short, this is a measure of the amount that each individual value deviates from the overall mean of those values.

RSS is the Residual Sum of Squares. This is a measure of how much the predicted value of the dependent (or output) variable varies from the true value for each data point in the set (or more colloquially: each "line" in the data table).

AIC is the Akaike information criterion, which is generally regarded as "too complex to explain" but is, in short, a measure of the goodness of fit of an estimated statistical model. If you require further details, you will have to turn to dead trees with words on them (i.e., books), or Wikipedia and the resources there.

The F value is used to perform what's called an F-test, and from it is derived the Pr(F) value, which describes how probable (Pr) an F value at least that large would be if the dropped term contributed nothing. A Pr(F) value close to zero (indicated by ***) is indicative of an input variable that is in some way important to include in a good model; that is, a model that does not include it is "significantly" different from the one that does.

All of these values are, in the context of the drop1 command, calculated to compare the overall model (including all the input variables) with the model resulting from removing the one specific variable named on each line of the output table.

Now, if this can be improved upon, please feel free to add to it or clarify any issues. My goal is only to clarify and provide a better "reverse lookup" reference from the output of an R command to the actual meaning of it.
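The F value in the drop1 table is the standard nested-model F statistic: the increase in RSS from removing the term, scaled by the full model's residual mean square. A tiny sketch with hypothetical RSS numbers (Python used just for the arithmetic):

```python
# Hypothetical sums of squares for a full model and for the model with
# one term dropped (all numbers are made up for illustration).
rss_full, df_resid_full = 12.5, 20   # full model: RSS and residual df
rss_dropped = 30.0                   # RSS after removing the term
df_diff = 1                          # df spent by the dropped term

# F = ((RSS_reduced - RSS_full) / df_diff) / (RSS_full / df_resid_full)
f_value = ((rss_dropped - rss_full) / df_diff) / (rss_full / df_resid_full)
print(f_value)  # 28.0; Pr(F) comes from comparing this to an F(1, 20) distribution
```

With these numbers F = 28.0, a large value: removing the term costs a lot of fit relative to the residual noise, so Pr(F) would be close to zero.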
13,469
Brier Score and extreme class imbalance
If there is extreme class imbalance (e.g. 5 positive cases vs 1,000 negative cases), how does the Brier score ensure that we select the model that gives us the best performance regarding high probability forecasts for the 5 positive cases? As we do not care if the negative cases have predictions near 0 or 0.5, as long as they are relatively lower than those for the positive classes.

This depends crucially on whether we can separate subpopulations with different class probabilities based on predictors. As an extreme example, if there are no (or no useful) predictors, then the predicted probabilities for all instances will be equal, and requiring lower predictions for negative vs. positive classes makes no sense, whether we are looking at Brier scores or other loss functions. Yes, this is rather obvious. But we need to keep it in mind.

So let's look at the second simplest case. Assume we have a predictor that separates our population cleanly into two subpopulations. Among subpopulation 1, there are 4 positive and 200 negative cases. Among subpopulation 2, there is one positive case and 800 negative cases. (The numbers match your example.) And again, there is zero possibility of further subdividing the subpopulations.

Then we will get constant predicted probabilities to belong to the positive class: $p_1$ for subpopulation 1 and $p_2$ for subpopulation 2. The Brier score then is $$ \frac{1}{5+1000}\big(4(1-p_1)^2+200p_1^2+1(1-p_2)^2+800p_2^2\big). $$ Using a little calculus, we find that this is optimized by $$ p_1 = \frac{1}{51} \quad\text{and}\quad p_2=\frac{1}{801}, $$ which are precisely the proportions of positive cases in the two subpopulations. Which in turn is as it should be, because this is what the Brier score being proper means.

And there you have it. The Brier score, being proper, will be optimized by the true class membership probabilities. If you have predictors that allow you to identify subpopulations or instances with a higher true probability, then the Brier score will incentivize you to output these higher probabilities. Conversely, if you can't identify such subpopulations, then the Brier score can't help you - but neither can anything else, simply because the information is not there.

However, the Brier score will not help you in overestimating the probability in subpopulation 1 and underestimating the probability in subpopulation 2 beyond the true values $p_1=\frac{1}{51}$ and $p_2=\frac{1}{801}$, e.g., because "there are more positive cases in subpopulation 1 than in 2". Yes, that is so, but what use would over-/underestimating this value be? We already know about the differential based on the differences in $p_1$ and $p_2$, and biasing these will not serve us at all. In particular, there is nothing an ROC analysis can help you with beyond finding an "optimal" threshold (which I pontificate on here). And finally, there is nothing in this analysis that depends in any way on classes being balanced or not, so I argue that unbalanced datasets are not a problem.

Finally, this is why I don't see the two answers you propose as useful. The Brier score helps us get at true class membership probabilities. What we then do with these probabilities will depend on our cost structure, and per my post on thresholds above, that is a separate problem. Yes, depending on this cost structure, we may end up with an algebraically reformulated version of a stratified Brier score, but keeping the statistical and the decision-theoretic aspects separate keeps the process much cleaner.
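The claimed minimizers are easy to verify numerically. A small Python sketch (a brute-force grid search over the constant predictions, with the case counts from the example above):

```python
# Brier-score contribution of each subpopulation, as a function of the
# constant predicted probability p for the positive class.
def sub1(p):  # 4 positive, 200 negative cases
    return 4 * (1 - p) ** 2 + 200 * p ** 2

def sub2(p):  # 1 positive, 800 negative cases
    return 1 * (1 - p) ** 2 + 800 * p ** 2

# Brute-force grid search over p in (0, 0.1].
grid = [i / 100000 for i in range(1, 10001)]
best_p1 = min(grid, key=sub1)
best_p2 = min(grid, key=sub2)

# The minimizers sit at the observed class proportions, as the calculus says.
assert abs(best_p1 - 1 / 51) < 1e-4   # 1/51  is about 0.0196
assert abs(best_p2 - 1 / 801) < 1e-4  # 1/801 is about 0.0012
```

Since the overall Brier score is just the sum of the two pieces divided by 1005, minimizing each piece separately minimizes the whole.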
13,470
Brier Score and extreme class imbalance
The paper "Class Probability Estimates are Unreliable for Imbalanced Data (and How to Fix Them)" (Wallace & Dahabreh 2012) argues that the Brier score, as is, fails to account for poor calibration in minority classes. They propose a stratified Brier score: $$BS^+ = \frac{\sum_{y_i=1}\left(y_i- \hat{P}\left\{y_i|x_i\right\}\right)^2}{N_{pos}}$$ $$BS^- = \frac{\sum_{y_i=0}\left(y_i- \hat{P}\left\{y_i|x_i\right\}\right)^2}{N_{neg}}$$ Unfortunately this does not give you a single metric to optimize, but you could take the maximum of the stratified Brier scores for your model and make your decision based on the worst performance over all classes. As an aside, the authors point out that the probability estimates obtained using Platt scaling are woefully inaccurate for the minority class as well. To remedy this, some combination of undersampling and bagging is proposed.
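A direct transcription of the two formulas into code could look like this (a Python sketch; the function name and the max-based selection rule are mine, following the suggestion above). If p_hat is the predicted probability of the positive class, the $y_i=0$ terms reduce to $p^2$:

```python
def stratified_brier(y_true, p_hat):
    """Return (BS_plus, BS_minus): per-class mean squared error of the
    predicted positive-class probabilities p_hat against labels y_true."""
    pos = [(1 - p) ** 2 for y, p in zip(y_true, p_hat) if y == 1]
    neg = [p ** 2 for y, p in zip(y_true, p_hat) if y == 0]
    return sum(pos) / len(pos), sum(neg) / len(neg)

# Toy imbalanced example (made-up predictions):
y_true = [1, 1, 0, 0, 0, 0]
p_hat  = [0.9, 0.6, 0.2, 0.1, 0.0, 0.3]

bs_plus, bs_minus = stratified_brier(y_true, p_hat)
# Model selection per the suggestion above: smallest worst-class score wins.
worst = max(bs_plus, bs_minus)
print(bs_plus, bs_minus, worst)  # BS+ = 0.085, BS- = 0.035 (up to float rounding)
```

Note how the minority-class score BS+ dominates here even though the pooled Brier score would be dragged down by the 1,000-to-5 majority.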
13,471
Brier Score and extreme class imbalance
If there is extreme class imbalance (e.g. 5 positive cases vs 1,000 negative cases), how does the Brier score ensure that we select the model that gives us the best performance regarding high probability forecasts for the 5 positive cases? As we do not care if the negative cases have predictions near 0 or 0.5 as long as they are relatively lower than those for the positive classes.

It doesn't ensure that; see my counter example here: Why is accuracy not the best measure for assessing classification models? That doesn't mean the Brier score isn't a good idea, just that it is no panacea (because it doesn't take into account the purpose of the analysis, and just measures the quality of the probability estimates everywhere according to the data density).
13,472
How to interpret PCA on time-series data?
Q1: What is the connection between PC time series and "maximum variance"?

The data that they are analyzing are $\hat t$ data points for each of the $n$ neurons, so one can think about that as $\hat t$ data points in the $n$-dimensional space $\mathbb R^n$. It is "a cloud of points", so performing PCA amounts to finding directions of maximal variance, as you are well aware. I prefer to call these directions (which are eigenvectors of the covariance matrix) "principal axes", and the projections of the data onto these directions "principal components".

When analyzing time series, the only addition to this picture is that the points are meaningfully ordered, or numbered (from $1$ to $\hat t$), as opposed to being simply an unordered collection of points. Which means that if we take the firing rate of one single neuron (which is one coordinate in $\mathbb R^n$), then its values can be plotted as a function of time. Similarly, if we take one PC (which is a projection from $\mathbb R^n$ onto some line), then it also has $\hat t$ values and can be plotted as a function of time. So if the original features are time series, then PCs are also time series.

I agree with @Nestor's interpretation above: each original feature can then be seen as a linear combination of PCs, and as PCs are uncorrelated with each other, one can think of them as basis functions that the original features are decomposed into. It's a little bit like Fourier analysis, but instead of taking a fixed basis of sines and cosines, we are finding the "most appropriate" basis for this particular dataset, in the sense that the first PC accounts for most variance, etc. "Accounting for most variance" here means that if you only take one basis function (time series) and try to approximate all your features with it, then the first PC will do the best job. So the basic intuition here is that the first PC is a basis function time series that fits all the available time series the best, etc.

Why is this passage in Freeman et al. so confusing?

Freeman et al. analyze the data matrix $\hat{\mathbf Y}$ with variables (i.e. neurons) in rows (!), not in columns. Note that they subtract row means, which makes sense as variables are usually centred prior to PCA. Then they perform SVD: $$\hat {\mathbf Y} = \mathbf{USV}^\top.$$ Using the terminology I advocate above, columns of $\mathbf U$ are principal axes (directions in $\mathbb R^n$) and rows of $\mathbf{SV}^\top$ are principal components (time series of length $\hat t$).

The sentence that you quoted from Freeman et al. is quite confusing indeed:

The principal components (the columns of $\mathbf V$) are vectors of length $\hat t$, and the scores (the columns of $\mathbf U$) are vectors of length $n$ (number of voxels), describing the projection of each voxel on the direction given by the corresponding component, forming projections on the volume, i.e. whole-brain maps.

First, columns of $\mathbf V$ are not PCs, but PCs scaled to unit norm. Second, columns of $\mathbf U$ are NOT scores, because "scores" usually means PCs. Third, "direction given by the corresponding component" is a cryptic notion. I think that they flip the picture here and suggest to think about $n$ points in a $\hat t$-dimensional space, so that now each neuron is a data point (and not a variable). Conceptually it sounds like a huge change, but mathematically it makes almost no difference, the only change being that principal axes and [unit-norm] principal components change places. In this case, my PCs from above ($\hat t$-long time series) will become principal axes, i.e. directions, and $\mathbf U$ can be thought of as normalized projections on these directions (normalized scores?). I find this very confusing, so I suggest ignoring their choice of words and only looking at the formulas. From this point on I will keep using the terms as I like them, not how Freeman et al. use them.

Q2: What are the state space trajectories?

They take single-trial data $\mathbf Y$ and project it onto the first two principal axes (i.e. the first two columns of $\mathbf U$). If you did this with the original data $\hat{\mathbf Y}$, you would get the first two principal components back. Again, the projection on one principal axis is one principal component, i.e. a $\hat t$-long time series. If you do it with some single-trial data $\mathbf Y$, you again get two $\hat t$-long time series. In the movie, each single line corresponds to such a projection: the x-coordinate evolves according to PC1 and the y-coordinate according to PC2. This is what is called "state space": PC1 plotted against PC2. Time goes by as the dot moves around. Each line in the movie is obtained with a different single trial $\mathbf Y$.
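To make the "variables in rows" bookkeeping explicit, here is a small numpy sketch (Python, with synthetic data): with $\hat{\mathbf Y} = \mathbf{USV}^\top$, projecting the data onto the principal axes gives $\mathbf U^\top \hat{\mathbf Y} = \mathbf{SV}^\top$, whose rows are the PC time series.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 5, 200                                   # 5 "neurons", 200 time points
latent = np.sin(np.linspace(0, 8 * np.pi, t))   # a shared time course
# Each neuron = its own loading on the latent time course, plus noise.
Yhat = np.outer(rng.normal(size=n), latent) + 0.05 * rng.normal(size=(n, t))
Yhat -= Yhat.mean(axis=1, keepdims=True)        # subtract row means

U, s, Vt = np.linalg.svd(Yhat, full_matrices=False)

# Projection onto the principal axes (columns of U) = the principal components:
pcs = U.T @ Yhat                                # equals diag(s) @ Vt
assert np.allclose(pcs, np.diag(s) @ Vt)

# The first PC is itself a time series of length t, and here it recovers
# the shared sinusoid (up to sign and scale).
corr = np.corrcoef(pcs[0], latent)[0, 1]
assert abs(corr) > 0.9
```

So each row of $\mathbf{SV}^\top$ is a $\hat t$-long time series, exactly the objects plotted as a function of time in the text above.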
13,473
How to interpret PCA on time-series data?
With respect to the first question: consider the whole time series through a particular voxel to be a single draw from a multivariate distribution. We can now think of this as a multivariate vector much like any other that we might apply PCA to. The first $p$ columns of $\bf V$ are then the eigen-timecourses which, when linearly combined, provide the best approximation to the time course through a particular voxel for the duration $\hat t$ of a stimulus. So $\bf \hat Y$ is an $n \times \hat t$ matrix, and therefore $\bf U$ is $n \times n$ while $\bf V$ is $\hat t \times \hat t$.

With respect to the second question: the equation given is $\bf J = \bf U^T Y$. We are given that $\bf J$ is a $2 \times t$ or $3 \times t$ matrix. (This involves a small sleight of hand in dropping rows/columns.) Two or three is picked as the dimensionality because this is what can be plotted in figure 6 of the paper. However $t \ne \hat t$, so I expect the separate traces (lines in fig 6) have been obtained by chopping $\bf J$ into the different segments corresponding to presentations of the stimulus. Each of these blocks can then be plotted in 2- or 3-dimensional space by considering each column as a point in that space and then drawing a line between the points defined by adjacent columns, giving the trajectories. Following on from the above, video 8 appears, for each block, to add each (column-)point sequentially, join it to the previous point, and render this length-$\hat t$ sequence as a video.

I've not dealt with the colouring methodology before, and it would take a while before I was confident to comment on that aspect. I found the comment on similarity to Fig 4c confusing, as the colouring there is obtained by per-voxel regression, whereas in Fig 6 each trace is a whole-image artefact. Unless I'm put straight, I think it's the direction of the stimulus during that time segment, as per the comment in the figure.
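The shape bookkeeping in this answer can be sketched in a few lines of numpy (Python; all sizes and data are made up): keep the first two columns of $\mathbf U$, project the single-trial data, and chop the result into per-stimulus trajectories.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t_hat, t = 6, 50, 150           # toy sizes; t spans 3 stimulus blocks

Yhat = rng.normal(size=(n, t_hat)) # trial-averaged data, variables in rows
Yhat -= Yhat.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Yhat)     # U is n x n, Vt is t_hat x t_hat

Y = rng.normal(size=(n, t))        # single-trial data
J = U[:, :2].T @ Y                 # the "sleight of hand": keep only 2 rows
assert J.shape == (2, t)

# One 2-D trajectory per stimulus presentation: each column of a segment
# is one time point in (PC1, PC2) state space.
segments = [J[:, k * t_hat:(k + 1) * t_hat] for k in range(t // t_hat)]
assert len(segments) == 3 and segments[0].shape == (2, t_hat)
```

Plotting each segment's first row against its second row, column by column, gives exactly the traces described for figure 6.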
13,474
Calculating standard error after a log-transform
Your main problem with the initial calculation is that there's no good reason why $e^{\text{sd}(\log(Y))}$ should be anything like $\text{sd}(Y)$. It's generally quite different.

In some situations, you can compute a rough approximation of $\text{sd}(Y)$ from $\text{sd}(\log(Y))$ via a Taylor expansion: $$\text{Var}(g(X))\approx \left(g'(\mu_X)\right)^2\sigma^2_X\,.$$ If we consider $X$ to be the random variable on the log scale, then here $g(X)=\exp(X)$, so $\text{Var}(\exp(X))\approx \exp(\mu_X)^2\sigma_X^2$ and hence $\text{sd}(\exp(X))\approx \exp(\mu_X)\sigma_X$. These notions carry across to sampling distributions.

This tends to work reasonably well if the standard deviation is really small compared to the mean, as in your example:

> mean(y)
[1] 10
> sd(y)
[1] 0.03
> lm=mean(log(y))
> ls=sd(log(y))
> exp(lm)*ls
[1] 0.0300104

If you want to transform a CI for a parameter, that works by transforming the endpoints. If you're trying to transform back to obtain a point estimate and interval for the mean on the original (unlogged) scale, you will also want to unbias the estimate of the mean (see the above link): $E(\exp(X))\approx \exp(\mu_X)\cdot (1+\sigma_X^2/2)$, so a (very) rough large-sample interval for the mean might be $(c\cdot\exp(L),\,c\cdot\exp(U))$, where $L,U$ are the lower and upper limits of a log-scale interval, and $c$ is some consistent estimate of $1+\sigma_X^2/2$. If your data are approximately normal on the log scale, you may want to treat it as a problem of producing an interval for a lognormal mean.

(There are other approaches to unbiasing mean estimates across transformations; e.g. see Duan, N., 1983. Smearing estimate: a nonparametric retransformation method. JASA, 78, 605-610.)
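The quality of the $\text{sd}(\exp(X))\approx \exp(\mu_X)\sigma_X$ approximation is easy to check by simulation. A sketch in Python (rather than R, purely for illustration), with made-up parameters mirroring the example: mean near 10 and sd near 0.03.

```python
import math
import random
import statistics

random.seed(42)

# X = log(Y) is normal with small sd, so sd(Y) should be ~ exp(mu) * sigma.
log_y = [math.log(10) + random.gauss(0, 0.003) for _ in range(10_000)]
y = [math.exp(v) for v in log_y]

sd_direct = statistics.stdev(y)                                   # about 0.03
# Delta-method approximation: sd(exp(X)) ~ exp(mean(X)) * sd(X).
sd_approx = math.exp(statistics.mean(log_y)) * statistics.stdev(log_y)

rel_err = abs(sd_approx - sd_direct) / sd_direct
assert rel_err < 0.01   # excellent when sigma is tiny relative to the mean

# For contrast, the naive exp(sd(log(y))) is nowhere near sd(y):
assert abs(math.exp(statistics.stdev(log_y)) - sd_direct) > 0.9
```

The last line illustrates the opening point: $e^{\text{sd}(\log(Y))}$ is close to 1 here, while $\text{sd}(Y)$ is close to 0.03.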
13,475
Calculating standard error after a log-transform
It sounds like you effectively want the geometric standard error, akin to the geometric mean exp(mean(log(x))). While it might seem reasonable to compute that as:
exp(sd(log(x))/sqrt(n-1))
you and others have already pointed out that that isn't correct, for a few reasons. Instead, use:
exp(mean(log(x))) * (sd(log(x))/sqrt(n-1))
which is the geometric mean multiplied by the log-scale standard error. This should approximate the "natural" standard error pretty well. Source: https://www.jstor.org/stable/pdf/2235723.pdf
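A minimal Python sketch of the suggested computation (data and variable names are mine, not from the answer):

```python
import math

def mean(v):
    return sum(v) / len(v)

def sd(v):
    m = mean(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / (len(v) - 1))

# Toy data clustered around 10, as in the earlier answer's example.
x = [9.8, 10.1, 10.0, 9.9, 10.2]
n = len(x)
logs = [math.log(v) for v in x]

geo_mean = math.exp(mean(logs))              # geometric mean
geo_se = geo_mean * sd(logs) / math.sqrt(n - 1)  # geometric mean * log-scale SE
```

Note the answer divides by sqrt(n-1) rather than the more common sqrt(n); I've kept its convention here.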
13,476
Are "random sample" and "iid random variable" synonyms?
You don't say what the other statistics book is, but I'd guess that it is a book (or section) about finite population sampling. When you sample random variables, i.e. when you consider a set $X_1,\dots,X_n$ of $n$ random variables, you know that if they are independent, $f(x_1,\dots,x_n)=f(x_1)\cdots f(x_n)$, and identically distributed, in particular $E(X_i)=\mu$ and $\text{Var}(X_i)=\sigma^2$ for all $i$, then: $$\overline{X}=\frac{\sum_i X_i}{n},\quad E(\overline{X})=\mu,\quad \text{Var}(\overline{X})=\frac{\sigma^2}{n}$$ where $\sigma^2$ is the second central moment. Sampling a finite population is somewhat different. If the population is of size $N$, in sampling without replacement there are $\binom{N}{n}$ possible samples $s_i$ of size $n$ and they are equiprobable: $$p(s_i)=\frac{1}{\binom{N}{n}}\quad\forall i=1,\dots,\binom{N}{n}$$ For example, if $N=5$ and $n=3$, the sample space is $\{s_1,\dots,s_{10}\}$ and the possible samples are: $$\begin{gather}s_1=\{1,2,3\},s_2=\{1,2,4\},s_3=\{1,2,5\},s_4=\{1,3,4\},s_5=\{1,3,5\},\\ s_6=\{1,4,5\},s_7=\{2,3,4\},s_8=\{2,3,5\},s_9=\{2,4,5\},s_{10}=\{3,4,5\}\end{gather}$$ If you count the number of occurrences of each individual, you can see that there are six of each, i.e. each individual has an equal chance of being selected (6/10). So each $s_i$ is a random sample according to the second definition. Roughly, it is not an i.i.d. random sample because individuals are not random variables: you can consistently estimate $E[X]$ by a sample mean but will never know its exact value, whereas you can know the exact population mean if $n=N$ (let me repeat: roughly).${}^1$ Let $\mu$ be some population mean (mean height, mean income, ...).
When $n<N$ you can estimate $\mu$ like in random variable sampling: $$\overline{y}_s=\frac{1}{n}\sum_{i=1}^n y_i,\quad E(\overline{y}_s)=\mu$$ but the sample mean variance is different: $$\text{Var}(\overline{y}_s)=\frac{\tilde\sigma^2}{n}\left(1-\frac{n}{N}\right)$$ where $\tilde\sigma^2$ is the population quasi-variance: $\frac{\sum_{i=1}^N(y_i-\overline{y})^2}{N-1}$. The factor $(1-n/N)$ is usually called the "finite population correction factor". This is a quick example of how a (random variable) i.i.d. random sample and a (finite population) random sample may differ. Statistical inference is mainly about random variable sampling; sampling theory is about finite population sampling. ${}^1$ Say you are manufacturing light bulbs and wish to know their average life span. Your "population" is just a theoretical or virtual one, at least if you keep manufacturing light bulbs. So you have to model a data generation process and interpret a set of light bulbs as a (random variable) sample. Say now that you find a box of 1000 light bulbs and wish to know their average life span. You can select a small set of light bulbs (a finite population sample), but you could select all of them. If you select a small sample, this doesn't transform light bulbs into random variables: the randomness is generated by you, as the choice between "all" and "a small set" is up to you. However, when a finite population is very large (say your country's population), so that choosing "all" is not viable, the second situation is better handled as the first one.
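The finite-population variance formula can be verified exactly by enumerating all $\binom{5}{3}=10$ equiprobable samples from a toy population (my own numbers, mirroring the $N=5$, $n=3$ example above):

```python
import itertools

# Toy population of N = 5 values; draw all samples of size n = 3
# without replacement and check Var(ybar) = (s2/n) * (1 - n/N),
# where s2 is the population quasi-variance (divisor N - 1).
pop = [2.0, 4.0, 6.0, 8.0, 10.0]
N, n = len(pop), 3

mu = sum(pop) / N
s2 = sum((y - mu) ** 2 for y in pop) / (N - 1)  # quasi-variance

samples = list(itertools.combinations(pop, n))   # all 10 equiprobable samples
means = [sum(s) / n for s in samples]

# Empirical variance of the sample mean over the (equiprobable) sample space.
emp_var = sum((m - mu) ** 2 for m in means) / len(samples)
formula = (s2 / n) * (1 - n / N)                 # with the fpc factor
```

The two quantities match exactly, and the average of the sample means equals the population mean, illustrating unbiasedness.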
13,477
Are "random sample" and "iid random variable" synonyms?
I will not bore you with probabilistic definitions and formulas, which you may easily pick up in any textbook (or here is a good place to start). Just think of this intuitively: a random sample is a set of random values. In general, each one of the values may be either identically or differently distributed. An $i.i.d.$ sample is a special case of a random sample, such that every value comes from the same distribution as the others and its value does not have any influence upon the other values. Independence deals with how the values were generated. $i.i.d.$ example: draw a random card from a deck and return it back (do this 5 times). You will get 5 realized values (cards). Each one of these values comes from a uniform distribution (there is equal probability of getting each one of the outcomes) and each draw is independent of the others (i.e. the fact that you get the ace of spades on the first draw does not influence in any way the result you may get on other draws). Non-$i.i.d.$ example: now do the same thing, but without returning the card to the deck (I hope you feel the difference by now). Again you will have 5 realized values (cards) after you do this. But clearly they are dependent (the fact that you drew the ace of spades on the first draw means you will not have a chance to get it on the 2nd draw).
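The with/without-replacement difference can be checked by brute-force enumeration on a tiny hypothetical 5-card "deck" (my own construction): with replacement, the chance that the second draw is the ace of spades is 1/5 no matter what came first; without replacement, given the ace came up first, that chance drops to zero.

```python
# A tiny deck; "AS" stands in for the ace of spades.
deck = ["AS", "KH", "QD", "JC", "T2"]

# With replacement: enumerate all ordered pairs; the second draw's
# distribution ignores the first draw entirely.
p_with = sum(1 for a in deck for b in deck if b == "AS") / (len(deck) ** 2)

# Without replacement: the second draw excludes the first card, so
# P(second is AS | first was AS) = 0 -- the draws are dependent.
pairs = [(a, b) for a in deck for b in deck if a != b]
p_without_given_as = (
    sum(1 for a, b in pairs if a == "AS" and b == "AS")
    / sum(1 for a, b in pairs if a == "AS")
)
print(p_with, p_without_given_as)
```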
13,478
Are "random sample" and "iid random variable" synonyms?
A Random Variable, usually written X, is a variable whose possible values are numerical outcomes of a random phenomenon. The random phenomenon may produce outcomes that have numerical values captured by the random variable -- e.g. the number of heads in 10 tosses of a coin, or incomes/heights etc. in a sample -- but that is not necessary. More generally, a Random Variable is a function that maps random outcomes to numeric values. E.g. each day may be sunny, cloudy or rainy. We can define a Random Variable that takes the value 1 if it is rainy, 2 if it is cloudy and 3 if it is sunny. The domain of a random variable is the set of possible outcomes. To establish a Random Variable there must be a process or experiment associated with possible outcomes that cannot be predicted with certainty. Coming now to the issue of independence. Two Random Variables are independent if the value of one of them does not affect the PDF of the other. We don't revise our predictions regarding the probabilities of different values of one variable when we know something about the other variable. Therefore, in the case of independence the posterior PDFs are identical to the prior PDFs. E.g. when we toss an unbiased coin repeatedly, the information we have about the outcome of the 5 prior tosses does not affect our prediction about the current toss; it will always be 0.5. However, if the bias of the coin is unknown and is modeled as a Random Variable, then the outcome of the previous 5 tosses affects our predictions regarding the current toss because it allows us to make inferences regarding the unknown bias of the coin. In that case the Random Variables capturing the number of heads in a sequence of n tosses are dependent, not independent. Coming now to the issue of Sampling. The purpose of Sampling is to inform us about the properties of an underlying distribution that is not known and must be inferred.
Remember that a Distribution refers to the relative likelihood of possible outcomes in the Sample Space (which may also be a conditional universe). So when we sample, we choose a finite number of outcomes from the Sample Space and reproduce the Sample Space at a smaller, more manageable scale. Equal probability then refers to the process of the sampling, not the probability of the outcomes in the sample. Equal probability sampling implies that the sample will reflect the proportions of the outcomes in the original Sample Space. E.g. if we ask 10,000 people whether they have ever been arrested, it is probable that the sample we end up with will not be representative of the Population -- the Sample Space -- since people who have been arrested might refuse to reply; therefore the proportion of possible outcomes (arrested - not arrested) will differ between our sample and the population for systematic reasons. Or if we choose a particular neighborhood to conduct a survey, the results will not be representative of the city as a whole. So equal probability sampling implies that there are no systematic reasons -- other than pure randomness -- that make us believe that the proportions of possible outcomes in our sample are different from the proportions of outcomes in the Population / Sample Space.
13,479
Are "random sample" and "iid random variable" synonyms?
A random sample is a realization of a sequence of random variables. Those random variables may be i.i.d. or not.
13,480
Error in normal approximation to a uniform sum distribution
Let $U_1, U_2,\dots$ be iid $\mathcal U(-b,b)$ random variables and consider the normalized sum $$ S_n = \frac{\sqrt{3} \sum_{i=1}^n U_i}{b \sqrt{n}} \>, $$ and the associated $\sup$ norm $$ \delta_n = \sup_{x\in\mathbb R} |F_n(x) - \Phi(x)| \>, $$ where $F_n$ is the distribution of $S_n$. Lemma 1 (Uspensky): The following bound on $\delta_n$ holds. $$ \delta_n < \frac{1}{7.5 \pi n} + \frac{1}{\pi}\left(\frac{2}{\pi}\right)^n + \frac{12}{\pi^3 n} \exp(-\pi^2 n / 24) \>. $$ Proof. See J. V. Uspensky (1937), Introduction to mathematical probability, New York: McGraw-Hill, p. 305. This was later improved by R. Sherman to the following. Lemma 2 (Sherman): The following improvement on the Uspensky bound holds. $$\delta_n < \frac{1}{7.5 \pi n} - \left(\frac{\pi}{180}+\frac{1}{7.5\pi n}\right) e^{-\pi^2 n / 24} + \frac{1}{(n+1)\pi}\left(\frac{2}{\pi}\right)^n + \frac{12}{\pi^3 n} e^{-\pi^2 n / 24} \>.$$ Proof: See R. Sherman, Error of the normal approximation to the sum of N random variables, Biometrika, vol. 58, no. 2, 396–398. The proof is a pretty straightforward application of the triangle inequality and classical bounds on the tail of the normal distribution and on $(\sin x) / x$ applied to the characteristic functions of each of the two distributions.
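To get a feel for how fast the error shrinks, here is a small Python sketch (my own, not from either paper) that simply evaluates Sherman's expression for a few values of $n$:

```python
import math

def sherman_bound(n: int) -> float:
    """Sherman's upper bound on delta_n, the sup-norm distance between the
    distribution of the normalized uniform sum and the standard normal,
    transcribed term by term from Lemma 2 above."""
    t = math.exp(-math.pi ** 2 * n / 24)
    return (
        1 / (7.5 * math.pi * n)
        - (math.pi / 180 + 1 / (7.5 * math.pi * n)) * t
        + (1 / ((n + 1) * math.pi)) * (2 / math.pi) ** n
        + (12 / (math.pi ** 3 * n)) * t
    )

bounds = {n: sherman_bound(n) for n in (1, 2, 4, 8, 16)}
for n, b in bounds.items():
    print(n, b)
```

The bound decays roughly like $1/(7.5\pi n)$ for large $n$, since the exponential and $(2/\pi)^n$ terms die off much faster.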
13,481
How to calculate perplexity of a holdout with Latent Dirichlet Allocation?
This is indeed something often glossed over. Some people are doing something a bit cheeky: holding out a proportion of the words in each document, and using the predictive probabilities of these held-out words given the document-topic mixtures as well as the topic-word mixtures. This is obviously not ideal, as it doesn't evaluate performance on any held-out documents. To do it properly with held-out documents, as suggested, you do need to "integrate over the Dirichlet prior for all possible topic mixtures". http://people.cs.umass.edu/~wallach/talks/evaluation.pdf reviews a few methods for tackling this slightly unpleasant integral. I'm just about to try and implement this myself in fact, so good luck!
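Whichever estimator of the held-out log-likelihood you settle on, turning it into perplexity is mechanical: perplexity = exp(-total log-likelihood / total token count). A minimal sketch with toy numbers of my own (the log-likelihoods here are hypothetical, not from any real model):

```python
import math

# Hypothetical per-document held-out log-likelihoods (however estimated:
# held-out words, integrated over topic mixtures, etc.) and token counts.
logliks = [-1500.0, -2300.0, -900.0]
n_tokens = [300, 450, 180]

# Perplexity over the held-out set: geometric-mean inverse per-token probability.
perplexity = math.exp(-sum(logliks) / sum(n_tokens))
print(perplexity)
```

Lower perplexity is better; a model assigning uniform probability over a vocabulary of size V would score exactly V.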
13,482
How to calculate perplexity of a holdout with Latent Dirichlet Allocation?
We know that the parameters of LDA are estimated through variational inference. So $\log p(w|\alpha, \beta) = E[\log p(\theta,z,w|\alpha,\beta)]-E[\log q(\theta,z)] + D(q(\theta,z)\,||\,p(\theta,z))$. If your variational distribution is close enough to the true posterior, then $D(q(\theta,z)\,||\,p(\theta,z)) \approx 0$. So $\log p(w|\alpha, \beta) \approx E[\log p(\theta,z,w|\alpha,\beta)]-E[\log q(\theta,z)]$, which is the variational lower bound on the likelihood. That is, $\log p(w|\alpha, \beta)$ is approximated by the likelihood bound you got from the variational inference.
13,483
How to combine confidence intervals for a variance component of a mixed-effects model when using multiple imputation
This is a great question! Not sure this is a full answer; however, I drop these few lines in case they help. It seems that Yucel and Demirtas (2010) refer to an older paper published in the JCGS, "Computational strategies for multivariate linear mixed-effects models with missing values", which uses a hybrid EM/Fisher scoring approach for producing likelihood-based estimates of the VCs. It has been implemented in the R package mlmmm. I don't know, however, if it produces CIs. Otherwise, I would definitely check the WinBUGS program, which is largely used for multilevel models, including those with missing data. I seem to remember it will only work if your missing values are in the response variable, not in the covariates, because we generally have to specify the full conditional distributions (if missing values are present in the independent variables, it means that we must give a prior to the missing Xs, and they will be considered as parameters to be estimated by WinBUGS...). This seems to apply to R as well, if I refer to the following thread on r-sig-mixed: missing data in lme, lmer, PROC MIXED. Also, it may be worth looking at the MLwiN software.
13,484
How to combine confidence intervals for a variance component of a mixed-effects model when using multiple imputation
Repeated comment from above: I'm not sure that a proper analytical solution to this problem even exists. I've looked at some additional literature, but this problem is elegantly overlooked everywhere. I've also noticed that Yucel & Demirtas (in the article I mentioned, page 798) write: These multiply imputed datasets were used to estimate the model […] using the R package lme4, leading to 10 sets of (beta, se(beta)), (sigma_b, se(sigma_b)), which were then combined using the MI combining rules defined by Rubin. It seems they used some kind of shortcut to estimate the SE of the variance component (which is, of course, inappropriate, since the CI is asymmetrical) and then applied the classic formula.
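For context, the "classic formula" meant here is Rubin's combining rule: pool the m point estimates by their mean, and combine the within-imputation variance W with the between-imputation variance B as T = W + (1 + 1/m)B. A minimal Python sketch with made-up numbers (the original analyses were done in R, and the estimates below are purely illustrative):

```python
import math

def rubin_pool(estimates, variances):
    """Combine m estimates and their squared SEs with Rubin's rules."""
    m = len(estimates)
    qbar = sum(estimates) / m                               # pooled point estimate
    w = sum(variances) / m                                  # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)   # between-imputation variance
    t = w + (1 + 1 / m) * b                                 # total variance
    return qbar, math.sqrt(t)

# hypothetical sigma_b estimates and SEs from 5 imputed datasets
est = [0.52, 0.48, 0.55, 0.50, 0.45]
se = [0.10, 0.11, 0.09, 0.10, 0.12]
qbar, pooled_se = rubin_pool(est, [s ** 2 for s in se])
```

As the answer points out, a symmetric Wald-style interval qbar ± 1.96 · pooled_se is questionable for a variance component, because its sampling distribution is skewed; pooling on a transformed scale (e.g. log sigma_b) is a common workaround.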
Disclaimer: This idea might be foolish & I'm not going to pretend to understand the theoretical implications of what I'm proposing. "Suggestion": Why don't you simply impute 100 (I know you normally do 5) datasets, run lme4 or nlme, get the confidence intervals (you have 100 of them) and then: Using a small interval width (say range / 1000 or something), test over the range of possible values of each parameter and include only those small intervals which appear in at least 95 of the 100 CIs. You would then have a Monte Carlo "average" of your confidence intervals. I'm sure there are issues (or perhaps theoretical problems) with this approach. For instance, you could end up with a set of disjoint intervals. This may or may not be a bad thing depending on your field. Note that this is only possible if you have at least two completely non-overlapping confidence intervals which are separated by a region with less than 95% coverage. You might also consider something closer to the Bayesian treatment of missing data to get a posterior credible region, which would certainly be better formed & more theoretically supported than my ad-hoc suggestion.
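The suggestion above can be sketched directly: given 100 (lower, upper) intervals, scan a fine grid over the parameter range and keep the grid points covered by at least 95 of them. A toy Python version with made-up intervals (not real imputation output):

```python
import random

def coverage_region(intervals, n_grid=1000, min_cover=95):
    """Grid points covered by at least min_cover of the given intervals."""
    lo = min(l for l, u in intervals)
    hi = max(u for l, u in intervals)
    step = (hi - lo) / n_grid
    kept = []
    for i in range(n_grid + 1):
        x = lo + i * step
        n_cover = sum(1 for l, u in intervals if l <= x <= u)
        if n_cover >= min_cover:
            kept.append(x)
    return kept

# 100 hypothetical CIs for a variance component, jittered around (0.4, 0.9)
random.seed(1)
cis = [(0.4 + random.uniform(-0.05, 0.05), 0.9 + random.uniform(-0.05, 0.05))
       for _ in range(100)]
region = coverage_region(cis)
```

Note the kept points need not form a single interval, which is exactly the disjointness caveat raised above.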
What is the difference between Markov chains and Markov processes?
From the preface to the first edition of "Markov Chains and Stochastic Stability" by Meyn and Tweedie: We deal here with Markov Chains. Despite the initial attempts by Doob and Chung [99,71] to reserve this term for systems evolving on countable spaces with both discrete and continuous time parameters, usage seems to have decreed (see for example Revuz [326]) that Markov chains move in discrete time, on whatever space they wish; and such are the systems we describe here. Edit: the references cited by my reference are, respectively: 99: J.L. Doob. Stochastic Processes. John Wiley & Sons, New York, 1953. 71: K.L. Chung. Markov Chains with Stationary Transition Probabilities. Springer-Verlag, Berlin, second edition, 1967. 326: D. Revuz. Markov Chains. North-Holland, Amsterdam, second edition, 1984.
One method of classifying stochastic processes is based on the nature of the time parameter (discrete or continuous) and the state space (discrete or continuous). This leads to four categories of stochastic processes. If the state space of a stochastic process is discrete, whether the time parameter is discrete or continuous, the process is usually called a chain. If a stochastic process possesses the Markov property, irrespective of the nature of the time parameter (discrete or continuous) and state space (discrete or continuous), then it is called a Markov process. Hence, we have four categories of Markov processes. A continuous time parameter, discrete state space stochastic process possessing the Markov property is called a continuous parameter Markov chain (CTMC). A discrete time parameter, discrete state space stochastic process possessing the Markov property is called a discrete parameter Markov chain (DTMC). Similarly, we have the other two Markov processes. Update 2017-03-09: Every independent increment process is a Markov process. The Poisson process, having the independent increment property, is a Markov process with continuous time parameter and discrete state space. The Brownian motion process, having the independent increment property, is a Markov process with continuous time parameter and continuous state space.
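As a concrete instance of the discrete time parameter, discrete state space case (a DTMC), here is a minimal two-state weather chain in Python; the transition probabilities are invented purely for illustration:

```python
# States: 0 = sunny, 1 = rainy. P[i][j] = P(next state = j | current state = i).
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step_distribution(dist, P):
    """One step of the chain: multiply the row vector dist by P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]            # start sunny with probability 1
for _ in range(50):          # iterate toward the stationary distribution
    dist = step_distribution(dist, P)
# the stationary distribution solves pi = pi P; here pi = (5/6, 1/6)
```

Fifty steps are plenty here because the chain's second eigenvalue is 0.4, so the distance to stationarity shrinks geometrically.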
What is the origin of the autoencoder neural networks?
According to the history provided in Schmidhuber, "Deep learning in neural networks: an overview," Neural Networks (2015), auto-encoders were proposed as a method for unsupervised pre-training in Ballard, "Modular learning in neural networks," Proceedings AAAI (1987). It's not clear if that's the first time auto-encoders were used, however; it's just the first time that they were used for the purpose of pre-training ANNs. As the introduction to the Schmidhuber article makes clear, it's somewhat difficult to attribute all of the ideas used in ANNs because the literature is diverse and terminology has evolved over time.
The paper below talks about autoencoders indirectly and dates back to 1986 (a year earlier than the 1987 paper by Ballard): D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning internal representations by error propagation", Parallel Distributed Processing. Vol 1: Foundations. MIT Press, Cambridge, MA, 1986. The paper basically describes a kind of feedforward network that was novel at the time, and its mathematical formalism.
Reviving this thread - In "Neurocomputing" by Robert Hecht-Nielsen @ 1990 there is reference to a 1986 paper by Cottrell/Munro/Zipser that outlines use of a neural network that has the architecture of an autoencoder, and is trained on the identity function, for compression and reconstruction of image data. The term "autoencoder" isn't mentioned, but that's what it is. There is no mention of using it for anomaly detection either. Seems like that usage was discovered later.
The first clear autoencoder presentation featuring a feedforward, multilayer neural network with a bottleneck layer was given by Kramer in 1991 (full text at https://people.engr.tamu.edu/rgutier/web_courses/cpsc636_s10/kramer1991nonlinearPCA.pdf). He discusses dimensionality reduction and feature extraction, and applications such as noise filtering, anomaly detection, and input estimation. Variational autoencoders were anticipated by Kramer, 1992, under the name "robust autoassociative neural networks" (https://www.sciencedirect.com/science/article/abs/pii/009813549280051A?via%3Dihub). It wasn't until 15 years later, in 2006, that Hinton popularized autoencoders for dimensionality reduction.
optimizing auc vs logloss in binary classification problems
As you mention, AUC is a rank statistic (i.e. scale invariant) & log loss is a calibration statistic. One may trivially construct a model which has the same AUC but fails to minimize log loss w.r.t. some other model by scaling the predicted values. Consider:

auc <- function(prediction, actual) {
  mann_whit <- wilcox.test(prediction ~ actual)$statistic
  1 - mann_whit / (sum(actual) * as.double(sum(!actual)))
}

log_loss <- function(prediction, actual) {
  -1 / length(prediction) * sum(actual * log(prediction) + (1 - actual) * log(1 - prediction))
}

sampled_data <- function(effect_size, positive_prior = .03, n_obs = 5e3) {
  y <- rbinom(n_obs, size = 1, prob = positive_prior)
  data.frame(
    y = y,
    x1 = rnorm(n_obs, mean = ifelse(y == 1, effect_size, 0)))
}

train_data <- sampled_data(4)
m1 <- glm(y ~ x1, data = train_data, family = 'binomial')
m2 <- m1
m2$coefficients[2] <- 2 * m2$coefficients[2]

m1_predictions <- predict(m1, newdata = train_data, type = 'response')
m2_predictions <- predict(m2, newdata = train_data, type = 'response')

auc(m1_predictions, train_data$y)      #0.9925867
auc(m2_predictions, train_data$y)      #0.9925867

log_loss(m1_predictions, train_data$y) #0.01985058
log_loss(m2_predictions, train_data$y) #0.2355433

So, we cannot say that a model maximizing AUC means minimized log loss. Whether a model minimizing log loss corresponds to maximized AUC will rely heavily on the context: class separability, model bias, etc. In practice, one might observe a weak relationship, but in general they are simply different objectives.

Consider the following example, which grows the class separability (effect size of our predictor):

for (effect_size in 1:7) {
  results <- dplyr::bind_rows(lapply(1:100, function(trial) {
    train_data <- sampled_data(effect_size)
    m <- glm(y ~ x1, data = train_data, family = 'binomial')
    predictions <- predict(m, type = 'response')
    list(auc = auc(predictions, train_data$y),
         log_loss = log_loss(predictions, train_data$y),
         effect_size = effect_size)
  }))
  plot(results$auc, results$log_loss, main = paste("Effect size =", effect_size))
  readline()
}
For imbalanced labels, area under precision-recall curve is preferable to AUC (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4349800/ or python scikit-learn docs) Also, if your goal is to maximize precision, you can consider doing cross-validation to select the best model (algorithm + hyperparameters) using "precision" as the performance metric.
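The area under the precision-recall curve mentioned above is often summarized as average precision. Here is a self-contained Python sketch (scikit-learn's average_precision_score is the production version; the labels and scores below are made up, and ties in scores are ignored for simplicity):

```python
def average_precision(y_true, y_score):
    """Average precision: accumulate precision at each true positive,
    stepping through predictions from highest score to lowest."""
    order = sorted(range(len(y_score)), key=lambda i: -y_score[i])
    n_pos = sum(y_true)
    tp = 0
    ap = 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            tp += 1
            ap += tp / rank          # precision at this recall step
    return ap / n_pos

# toy labels and predicted scores
y_true = [1, 0, 1, 0, 0, 1]
y_score = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
ap = average_precision(y_true, y_score)
```

Because average precision only looks at the ranks induced by the scores, it shares AUC's scale invariance while focusing on the positive class, which is why it behaves better under heavy imbalance.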
What is the connection between regularization and the method of lagrange multipliers ?
Say we are optimizing a model with parameters $\vec{\theta}$, by minimizing some criterion $f(\vec{\theta})$ subject to a constraint on the magnitude of the parameter vector (for instance to implement a structural risk minimization approach by constructing a nested set of models of increasing complexity); we would need to solve: $\min_{\vec{\theta}} f(\vec{\theta}) \quad \mathrm{s.t.} \quad \|\vec{\theta}\|^2 < C$ The Lagrangian for this problem is (caveat: I think, it's been a long day... ;-) $\Lambda(\vec{\theta},\lambda) = f(\vec{\theta}) + \lambda\|\vec{\theta}\|^2 - \lambda C.$ So it can easily be seen that a regularized cost function is closely related to a constrained optimization problem, with the regularization parameter $\lambda$ being related to the constant governing the constraint ($C$); it is essentially the Lagrange multiplier. The $-\lambda C$ term is just an additive constant, so it doesn't change the solution of the optimization problem if it is omitted, just the value of the objective function. This illustrates why e.g. ridge regression implements structural risk minimization: regularization is equivalent to putting a constraint on the magnitude of the weight vector, and if $C_1 > C_2$ then every model that can be made while obeying the constraint $\|\vec{\theta}\|^2 < C_2$ will also be available under the constraint $\|\vec{\theta}\|^2 < C_1$. Hence reducing $\lambda$ generates a sequence of hypothesis spaces of increasing complexity.
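The $\lambda \leftrightarrow C$ correspondence can be checked numerically in one dimension: minimizing $(\theta-2)^2 + \lambda\theta^2$ gives $\theta^* = 2/(1+\lambda)$, and minimizing $(\theta-2)^2$ subject to $\theta^2 \le C$ with $C = (\theta^*)^2$ recovers the same point. A grid-search sketch in Python (the objective is a toy one, chosen only because its penalized minimizer has a closed form):

```python
lam = 3.0
f = lambda t: (t - 2.0) ** 2                # unregularized objective

# penalized solution: argmin f(t) + lam * t^2, known in closed form
theta_pen = 2.0 / (1.0 + lam)               # = 0.5 here

# constrained solution: argmin f(t) subject to t^2 <= C, with C = theta_pen^2
C = theta_pen ** 2
grid = [i / 10000.0 for i in range(-30000, 30001)]   # t in [-3, 3]
feasible = [t for t in grid if t * t <= C]
theta_con = min(feasible, key=f)
```

The two solutions coincide (up to grid resolution), illustrating that each penalty level $\lambda$ corresponds to some constraint radius $C$.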
How do CNN's avoid the vanishing gradient problem
The vanishing gradient problem requires us to use small learning rates with gradient descent, which then needs many small steps to converge. This is a problem if you have a slow computer which takes a long time for each step. If you have a fast GPU which can perform many more steps in a day, this is less of a problem. There are several ways to tackle the vanishing gradient problem. I would guess that the largest effect for CNNs came from switching from sigmoid nonlinear units to rectified linear units. If you consider a simple neural network whose error $E$ depends on weight $w_{ij}$ only through $y_j$, where $$y_j = f\left( \sum_i w_{ij}x_i \right),$$ its gradient is \begin{align} \frac{\partial}{\partial w_{ij}} E &= \frac{\partial E}{\partial y_j} \cdot \frac{\partial y_j}{\partial w_{ij}} \\ &= \frac{\partial E}{\partial y_j} \cdot f'\left(\sum_i w_{ij} x_i\right) x_i. \end{align} If $f$ is the logistic sigmoid function, $f'$ will be close to zero for large inputs as well as small inputs. If $f$ is a rectified linear unit, \begin{align} f(u) = \max\left(0, u\right), \end{align} the derivative is zero only for negative inputs and 1 for positive inputs. Another important contribution comes from properly initializing the weights. This paper looks like a good source for understanding the challenges in more detail (although I haven't read it yet): http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf
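The point about $f'$ can be checked directly: the logistic sigmoid's derivative $f'(u) = f(u)(1 - f(u))$ vanishes for large $|u|$ (and never exceeds 0.25), while the ReLU derivative stays at exactly 1 for any positive input. A quick Python check:

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def sigmoid_grad(u):
    s = sigmoid(u)
    return s * (1.0 - s)       # peaks at 0.25 when u = 0

def relu_grad(u):
    return 1.0 if u > 0 else 0.0

# derivative magnitudes at a large input
u = 10.0
sg = sigmoid_grad(u)           # tiny: gradients shrink layer by layer
rg = relu_grad(u)              # exactly 1: the gradient passes through
```

Multiplying many factors bounded by 0.25 across layers is what shrinks the backpropagated gradient; with ReLU the corresponding factors are 1 on the active path.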
Do I need to drop variables that are correlated/collinear before running kmeans?
Don't drop any variables, but do consider using PCA. Here's why.

Firstly, as pointed out by Anony-mousse, k-means is not badly affected by collinearity/correlations, so you don't need to throw away information on that account.

Secondly, if you drop your variables the wrong way, you'll artificially bring some samples closer together. An example:

    Customer  CatA  CatB  CatC
    1         1     0     0
    2         0     1     0
    3         0     0     1

(I've removed the % notation and just put values between 0 and 1, constrained so they all sum to 1.)

The Euclidean distance between each pair of these customers in their natural 3-d space is $\sqrt{(1-0)^2+(0-1)^2+(0-0)^2} = \sqrt{2}$.

Now let's say you drop CatC:

    Customer  CatA  CatB
    1         1     0
    2         0     1
    3         0     0

The distance between customers 1 and 2 is still $\sqrt{2}$, but between customers 1 and 3, and 2 and 3, it's only $\sqrt{(1-0)^2+(0-0)^2}=1$. You've artificially made customer 3 more similar to 1 and 2, in a way the raw data doesn't support.

Thirdly, collinearity/correlations are not the problem; your dimensionality is. 100 variables is large enough that even with 10 million datapoints, I worry that k-means may find spurious patterns in the data and fit to them. Instead, think about using PCA to compress the data down to a more manageable number of dimensions, say 10 or 12 to start with (maybe much higher, maybe much lower; you'll have to look at the variance along each component, and play around a bit, to find the right number). You'll artificially bring some samples closer together doing this, yes, but you'll do so in a way that should preserve most of the variance in the data, and which will preferentially remove correlations.

~~~~~

EDIT: Re the comments below about PCA: yes, it absolutely does have pathologies. But it's pretty quick and easy to try, so it still seems not a bad bet to me if you want to reduce the dimensionality of the problem.

On that note, I tried quickly throwing a few sets of 100-dimensional synthetic data into a k-means algorithm to see what it came up with. While the cluster-centre position estimates weren't that accurate, the cluster membership (i.e. whether two samples were assigned to the same cluster or not, which seems to be what the OP is interested in) was much better than I thought it would be. So my gut feeling earlier was quite possibly wrong: k-means might work just fine on the raw data.
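The distance distortion from dropping a column can be checked numerically. A minimal sketch with numpy, using the toy customer values from the example above:

```python
import numpy as np

# Toy customers from the example: each row sums to 1 across three categories.
X = np.array([
    [1.0, 0.0, 0.0],  # customer 1
    [0.0, 1.0, 0.0],  # customer 2
    [0.0, 0.0, 1.0],  # customer 3
])

def pairwise_dist(A):
    """Euclidean distance between every pair of rows."""
    return np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)

d3 = pairwise_dist(X)          # distances in the full 3-d space
d2 = pairwise_dist(X[:, :2])   # distances after dropping CatC

print(d3[0, 2], d2[0, 2])  # sqrt(2) ~ 1.414 vs 1.0: customer 3 moved closer
```

Customers 1 and 2 stay $\sqrt{2}$ apart in both spaces, but customer 3's distance to the other two shrinks from $\sqrt{2}$ to 1 once CatC is gone, exactly as in the worked example.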
13,497
Do I need to drop variables that are correlated/collinear before running kmeans?
It's advisable to remove variables if they are highly correlated. Irrespective of the clustering algorithm or linkage method, what you generally do is compute the distance between points. Keeping variables that are highly correlated effectively gives them extra weight, roughly double, in computing the distance between two points (as all the variables are normalised, the effect will usually be about double). In short, a variable's strength in influencing cluster formation increases if it has a high correlation with any other variable.
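The "double weight" claim is easy to check for the extreme case of a perfectly correlated (duplicated) variable: the duplicated variable's contribution to the squared Euclidean distance doubles. A quick numpy sketch:

```python
import numpy as np

a = np.array([0.0, 0.0])
b = np.array([1.0, 0.0])  # points that differ only in the first variable

d_orig = np.linalg.norm(a - b)   # distance is 1.0

# Duplicate the first variable, as a perfectly correlated copy would do.
a_dup = np.array([0.0, 0.0, 0.0])
b_dup = np.array([1.0, 1.0, 0.0])
d_dup = np.linalg.norm(a_dup - b_dup)

print(d_orig ** 2, d_dup ** 2)  # squared contribution doubles: 1.0 -> 2.0
```

For variables that are correlated but not identical the extra weight is less than double, which is why this is a rule of thumb rather than an exact factor.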
13,498
Do I need to drop variables that are correlated/collinear before running kmeans?
On a toy example in 2d or 3d, it shouldn't make much of a difference; it just adds some redundancy to your data: all your points lie on a (d-1)-dimensional hyperplane, and so do the cluster means. Distance within this (d-1)-dimensional hyperplane is a linear multiple of the same distance, so it doesn't change anything. If you artificially construct such data, e.g. by the map $(x,y)\mapsto(x,y,x+y)$, then you do distort space and emphasize the influence of $x$ and $y$. If you do this to all variables it does not matter, but you can easily change the weighting this way. This emphasizes the known fact that normalizing and weighting variables is essential; if you have correlations in your data, it is more important than ever. Let's look at the simplest example: duplicate variables. If you run PCA on your data set with a duplicated variable, this effectively means putting double weight on that variable. PCA is based on the assumption that variance in every direction is equally important, so you should indeed carefully weight variables (taking correlations into account, and doing any other necessary preprocessing) before doing PCA.
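The distortion caused by the map $(x,y)\mapsto(x,y,x+y)$ can be made concrete: a displacement $(dx,dy)$ becomes $(dx,dy,dx+dy)$, so a unit step along $x$ is stretched to length $\sqrt{2}$, while a step along the direction where $x+y=0$ keeps its length. A small numpy sketch of this anisotropy:

```python
import numpy as np

def lift(p):
    """The map (x, y) -> (x, y, x + y) from the answer."""
    x, y = p
    return np.array([x, y, x + y])

o = np.array([0.0, 0.0])
ex = np.array([1.0, 0.0])      # unit step along x
diag = np.array([1.0, -1.0])   # step along the direction where x + y = 0

# In 2-d the steps have their usual lengths: 1.0 and sqrt(2).
print(np.linalg.norm(ex - o), np.linalg.norm(diag - o))

# After the lift, the x step is stretched to sqrt(2),
# while the x + y = 0 direction keeps its length sqrt(2).
print(np.linalg.norm(lift(ex) - lift(o)),
      np.linalg.norm(lift(diag) - lift(o)))
```

So distances are rescaled differently depending on direction, which is exactly the reweighting effect described above.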
13,499
Is building a multiclass classifier better than several binary ones?
First of all, you must ask yourself if your problem is multilabel (i.e. a single URL can belong to several classes) or not (i.e. a single URL can belong to only one class). If you are in the former situation, go with a battery of binary classifiers, because this is the default way of doing multilabel problems. If the latter, the answer depends on a combination of the shape of your data, the aim of your analysis and the method you are using -- probably you should just try both and select the better one. Only note that some methods (like SVM) can't actually do multiclass classification directly because of how they are defined, and thus internally use a battery of binary classifiers anyway.
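The "battery of binary classifiers" approach for the multilabel case can be sketched with scikit-learn's one-vs-rest wrapper. This is a hedged toy example: the synthetic data stands in for whatever features you would actually extract from the URLs (e.g. tf-idf vectors), and logistic regression is just one possible base classifier.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Synthetic stand-in for URL features; Y has one 0/1 column per class.
X, Y = make_multilabel_classification(n_samples=200, n_features=20,
                                      n_classes=4, random_state=0)

# One binary logistic classifier per class; each predicts membership
# independently, so a sample may receive several labels at once.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
pred = clf.predict(X[:5])
print(pred.shape)  # (5, 4): one 0/1 column per class
```

For the single-label multiclass case you would instead fit one multiclass estimator directly (or let the library handle the internal one-vs-one/one-vs-rest decomposition, as SVM implementations typically do).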
13,500
Is building a multiclass classifier better than several binary ones?
This will depend on how your data is dispersed. There is a beautiful example that was given recently in answer to a similar question, where the OP wanted to know whether a single linear discriminant function for deciding population A vs. B or C would be a better classifier than one based on multiple linear discriminant functions separating A, B and C. Someone gave a very nice coloured scatterplot showing how using two discriminants would be better than one in that case. I will try to link to it.