What is the "horseshoe effect" and/or the "arch effect" in PCA / correspondence analysis?
Q1. Ecologists talk of gradients all the time. There are lots of kinds of gradients, but it may be best to think of them as some combination of whatever variable(s) you want or that are important for the response. So a gradient could be time, or space, or soil acidity, or nutrients, or something more complex, such as a linear combination of a range of variables required by the response in some way. We talk about gradients because we observe species in space or time and a whole host of things vary with that space or time.

Q2. I have come to the conclusion that in many cases the horseshoe in PCA is not a serious problem if you understand how it arises and don't do silly things like take only PC1 when the "gradient" is actually represented by PC1 and PC2 together (it is also split into higher PCs, but hopefully a 2-d representation is OK). In CA I think much the same (having now been forced to think a bit about it). The solution can form an arch when there is no strong second dimension in the data, such that a folded version of the first axis, which satisfies the orthogonality requirement of the CA axes, explains more "inertia" than any other direction in the data. This may be more serious, as it is made-up structure, whereas in PCA the horseshoe is just a way to represent species abundances at sites along a single dominant gradient. I've never quite understood why people worry so much about the wrong ordering along PC1 with a strong horseshoe. I would counter that you shouldn't take just PC1 in such cases, and then the problem goes away; the pairs of coordinates on PC1 and PC2 together get rid of the reversals on either of those two axes.

Q3. If I saw the horseshoe in a PCA biplot, I would interpret the data as having a single dominant gradient or direction of variation. If I saw the arch, I would probably conclude the same, but I would be very wary of trying to explain CA axis 2 at all.
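To see concretely how such a horseshoe/arch arises, here is a minimal simulation. It is my own sketch, in Python/numpy rather than the R used elsewhere in this answer, and none of the names in it come from the original post: species are given unimodal Gaussian responses along a single gradient, PCA is run on the site-by-species matrix, and the site scores trace an arch, with PC2 roughly quadratic in gradient position.

```python
# My own illustrative sketch (not from the original answer): a single gradient
# plus unimodal species responses produces the arch/horseshoe in a PCA.
import numpy as np

n_sites, n_species = 50, 30
gradient = np.linspace(0, 10, n_sites)      # true positions along the gradient
optima = np.linspace(0, 10, n_species)      # species optima spread along it

# Gaussian response curves: abundance peaks where a site is near the optimum
Y = np.exp(-0.5 * (gradient[:, None] - optima[None, :]) ** 2)

# PCA via SVD of the column-centred data; site scores are U * s
Yc = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
pc1, pc2 = U[:, 0] * s[0], U[:, 1] * s[1]

# The arch: the two ends of the gradient share a sign on PC2, while the
# middle of the gradient has the opposite sign
print(np.sign(pc2[0]) == np.sign(pc2[-1]))
print(np.sign(pc2[0]) != np.sign(pc2[n_sites // 2]))
```

Plotting `pc1` against `pc2` shows the familiar curved band of sites, even though the data were generated from a single one-dimensional gradient.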
I would not apply DCA. It just twists the arch away (in the best circumstances) such that you don't see the oddities in 2-d plots, but in many cases it produces other spurious structures, such as diamonds or trumpet shapes, in the arrangement of samples in the DCA space. For example:

library("vegan")
data(BCI)
plot(decorana(BCI), display = "sites", type = "p") ## does DCA

We see a typical fanning out of sample points towards the left of the plot.

Q4. I would suggest that the answer to this question depends on the aims of your analysis. If the arch/horseshoe was due to a single dominant gradient, then rather than having to represent this as $m$ PCA axes, it would be beneficial if we could estimate a single variable that represents the positions of sites/samples along the gradient. This suggests finding a nonlinear direction in the high-dimensional space of the data. One such method is the principal curve of Hastie & Stuetzle, but other nonlinear manifold methods are available that might suffice. For example, for some pathological data we see a strong horseshoe. The principal curve tries to recover this underlying gradient, or arrangement/ordering of samples, via a smooth curve in the $m$ dimensions of the data. The figure (omitted here) shows how the iterative algorithm converges on something approximating the underlying gradient. (I think it wanders away from the data at the top of the plot so as to be closer to the data in higher dimensions, and partly because of the self-consistency criterion for a curve to be declared a principal curve.) I have more details, including code, in the blog post from which I took those images. But the main point here is that the principal curve easily recovers the known ordering of samples, whereas PC1 or PC2 on its own does not. In the PCA case, it is common to apply transformations in ecology.
Popular transformations are those that can be thought of as returning some non-Euclidean distance when the Euclidean distance is computed on the transformed data. For example, the Hellinger distance is

$$D_{\mathrm{Hellinger}}(y_1, y_2) = \sqrt{\sum_{j=1}^p \left [ \sqrt{\frac{y_{1j}}{y_{1+}}} - \sqrt{\frac{y_{2j}}{y_{2+}}} \right ]^2}$$

where $y_{ij}$ is the abundance of the $j$th species in sample $i$ and $y_{i+}$ is the sum of the abundances of all species in the $i$th sample. If we convert the data to proportions and apply a square-root transformation, then the Euclidean-distance-preserving PCA will represent the Hellinger distances in the original data.

The horseshoe has been known and studied for a long time in ecology; some of the early literature (plus a more modern look) is:

Goodall D.W. (1954) Objective methods for the classification of vegetation. III. An essay in the use of factor analysis. Australian Journal of Botany 2, 304–324.
Noy-Meir I. & Austin M.P. (1970) Principal Component Ordination and Simulated Vegetational Data. Ecology 51, 551–552.
Podani J. & Miklós I. (2002) Resemblance Coefficients and the Horseshoe Effect in Principal Coordinates Analysis. Ecology 83, 3331–3343.
Swan J.M.A. (1970) An Examination of Some Ordination Problems By Use of Simulated Vegetational Data. Ecology 51, 89–102.

The main principal curve references are:

De’ath G. (1999) Principal Curves: a new technique for indirect and direct gradient analysis. Ecology 80, 2237–2253.
Hastie T. & Stuetzle W. (1989) Principal Curves. Journal of the American Statistical Association 84, 502–516.

The former is a very ecological presentation.
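The Hellinger transformation described above is easy to verify numerically. Here is a small sketch of my own (Python/numpy rather than R; the function name `hellinger_distance` is mine): converting each sample to proportions and square-rooting makes the plain Euclidean distance on the transformed data equal the Hellinger distance on the raw abundances.

```python
# Illustrative check (my own sketch, not from the original answer) that
# Euclidean distance on sqrt(proportions) equals the Hellinger distance.
import numpy as np

def hellinger_distance(y1, y2):
    """Hellinger distance between two samples of raw abundances."""
    p1 = np.sqrt(y1 / y1.sum())
    p2 = np.sqrt(y2 / y2.sum())
    return np.sqrt(((p1 - p2) ** 2).sum())

Y = np.array([[10.0, 0.0, 5.0],
              [ 2.0, 8.0, 1.0]])            # 2 samples x 3 species

# Hellinger transformation of the whole matrix: row proportions, then sqrt
Yt = np.sqrt(Y / Y.sum(axis=1, keepdims=True))

euclid = np.linalg.norm(Yt[0] - Yt[1])
print(np.isclose(euclid, hellinger_distance(Y[0], Y[1])))  # True
```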
Why do we divide by the standard deviation and not some other standardizing factor before doing PCA?
This is in partial answer to "it is not clear to me why dividing by the standard deviation would achieve such a goal", in particular why it puts the transformed (standardized) data on the "same scale". The question hints at deeper issues (what else might have "worked", which is linked to what "worked" might even mean, mathematically?), but it seemed sensible to at least address the more straightforward aspects of why this procedure "works", that is, achieves the claims made for it in the text.

The entry in row $i$ and column $j$ of a covariance matrix is the covariance between the $i^{th}$ and $j^{th}$ variables. Note that on the diagonal, row $i$ and column $i$, this becomes the covariance between the $i^{th}$ variable and itself, which is just the variance of the $i^{th}$ variable.

Let's call the $i^{th}$ variable $X_i$ and the $j^{th}$ variable $X_j$; I'll assume these are already centered so that they have mean zero. Recall that $$Cov(X_i, X_j) =\sigma_{X_i} \, \sigma_{X_j} \, Cor(X_i, X_j)$$ We can standardize the variables so that they have variance one, simply by dividing by their standard deviations. When standardizing we would generally subtract the mean first, but I already assumed they are centered, so we can skip that step. Let $Z_i = \frac{X_i}{\sigma_{X_i}}$; to see why the variance is one, note that $$Var(Z_i) = Var\left(\frac{X_i}{\sigma_{X_i}}\right) = \frac{1}{\sigma_{X_i}^2}Var(X_i) = \frac{1}{\sigma_{X_i}^2} \sigma_{X_i}^2 = 1$$ Similarly for $Z_j$. If we take the entry in row $i$ and column $j$ of the covariance matrix for the standardized variables, note that since they are standardized: $$Cov(Z_i, Z_j) =\sigma_{Z_i} \, \sigma_{Z_j} \, Cor(Z_i, Z_j) = Cor(Z_i, Z_j)$$ Moreover, when we rescale variables in this way, addition (equivalently, subtraction) does not change the correlation, while multiplication (equivalently, division) will simply reverse the sign of the correlation if the factor (divisor) is negative.
In other words, correlation is unchanged by translation or scaling but is reversed by reflection. (Here's a derivation of those correlation properties, as part of an otherwise unrelated answer.) Since we divided by standard deviations, which are positive, we see that $Cor(Z_i, Z_j)$ must equal $Cor(X_i, X_j)$, i.e. the correlation between the original data. Along the diagonal of the new covariance matrix, note that we get $Cov(Z_i, Z_i) = Var(Z_i) = 1$, so the entire diagonal is filled with ones, as we would expect.

It's in this sense that the data are now "on the same scale": their marginal distributions should look very similar, at least if they were roughly normally distributed to start with, with mean zero and with variance (and standard deviation) one. It is no longer the case that one variable's variability swamps the others.

You could have divided by a different measure of spread, of course. The variance would have been a particularly bad choice due to dimensional inconsistency (think about what would have happened if you'd changed the units one of your variables was in, e.g. from metres to kilometres). Something like the median absolute deviation (or an appropriate multiple of the MAD, if you are trying to use it as a kind of robust estimator of the standard deviation) may have been more appropriate. But it still won't turn that diagonal into a diagonal of ones.

The upshot is that a method that works on the covariance matrix of standardized data is essentially using the correlation matrix of the original data. For which of these you'd prefer to use for PCA, see PCA on correlation or covariance?
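The algebra above can be checked numerically. A short sketch of my own (Python/numpy; the variable names are mine, not from the answer): the covariance matrix of the standardized data equals the correlation matrix of the original data, with ones on the diagonal.

```python
# Illustrative check (my own sketch): Cov of standardized data = Cor of raw data.
import numpy as np

rng = np.random.default_rng(0)
# Mix independent columns to get correlated variables
X = rng.normal(size=(200, 3)) @ np.array([[1.0, 0.5, 0.0],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 1.0]])

# Standardize: centre, then divide by the standard deviation
Z = (X - X.mean(axis=0)) / X.std(axis=0)

cov_Z = np.cov(Z, rowvar=False, ddof=0)    # covariance matrix of Z
cor_X = np.corrcoef(X, rowvar=False)       # correlation matrix of X

print(np.allclose(cov_Z, cor_X))           # True
print(np.allclose(np.diag(cov_Z), 1.0))    # True: diagonal of ones
```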
Why do we divide by the standard deviation and not some other standardizing factor before doing PCA?
"Why do we divide by the standard deviation? What's wrong with dividing by the variance?" As @Silverfish already pointed out in a comment, the standard deviation has the same unit as the measurements. Thus, by dividing by the standard deviation, as opposed to the variance, you end up with a plain number that tells you where your case sits relative to the average and spread, as measured by the mean and standard deviation. This is very close to the idea of $z$-values and the standard normal distribution: if the data are normally distributed, standardization will transform them to a standard normal distribution. So: standardization (mean centering + scaling by standard deviation) makes sense if you consider the standard normal distribution sensible for your data.

"Why not some other quantity? Like... the sum of absolute values? Or some other norm?" Other quantities are used to scale data, but the procedure is called standardization only if it uses mean centering and division by the standard deviation. Scaling is the generic term. E.g. I work with spectroscopic data and know that my detector has a wavelength-dependent sensitivity and an (electronic) bias. Thus I calibrate by subtracting the offset (blank) signal and multiplying (dividing) by a calibration factor. Also, I may be centering not to the mean but instead to some other baseline value, such as the mean of a control group instead of the grand mean. (Personally, I almost never standardize, as my variates already have the same physical unit and are in the same order of magnitude.)

See also: Variables are often adjusted (e.g. standardised) before making a model - when is this a good idea, and when is it a bad one?
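The units point can be made concrete with a tiny sketch of my own (Python/numpy; the data and names are invented for illustration): $z$-scores are unchanged when the measurement unit changes, whereas dividing by the variance is not, because the variance carries squared units.

```python
# My own illustration: sd-scaling is unit-free, variance-scaling is not.
import numpy as np

heights_m = np.array([1.60, 1.75, 1.82, 1.68])
heights_km = heights_m / 1000.0              # same data, different unit

def scale_by_sd(x):
    return (x - x.mean()) / x.std()          # standardization (z-scores)

def scale_by_var(x):
    return (x - x.mean()) / x.var()          # units: 1/metres (or 1/km!)

print(np.allclose(scale_by_sd(heights_m), scale_by_sd(heights_km)))   # True
print(np.allclose(scale_by_var(heights_m), scale_by_var(heights_km))) # False
```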
Why do we divide by the standard deviation and not some other standardizing factor before doing PCA?
This link answers your question clearly, I guess: http://sebastianraschka.com/Articles/2014_about_feature_scaling.html I quote a small piece:

"Z-score standardization or Min-Max scaling?" - There is no obvious answer to this question: it really depends on the application. For example, in clustering analyses, standardization may be especially crucial in order to compare similarities between features based on certain distance measures. Another prominent example is Principal Component Analysis, where we usually prefer standardization over Min-Max scaling, since we are interested in the components that maximize the variance (depending on the question, and on whether the PCA computes the components via the correlation matrix instead of the covariance matrix; but more about PCA in my previous article). However, this doesn't mean that Min-Max scaling is not useful at all! A popular application is image processing, where pixel intensities have to be normalized to fit within a certain range (i.e., 0 to 255 for the RGB color range). Also, typical neural network algorithms require data on a 0-1 scale.
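For what it's worth, the two scalings from the quote can be contrasted in a few lines. This is my own sketch (Python/numpy, with invented data), not from the linked article:

```python
# My own illustration of z-score standardization versus min-max scaling.
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0, 100.0])    # note the outlier

z = (x - x.mean()) / x.std()                  # z-scores: mean 0, sd 1
mm = (x - x.min()) / (x.max() - x.min())      # min-max: squeezed into [0, 1]

print(np.isclose(z.mean(), 0.0), np.isclose(z.std(), 1.0))  # True True
print(mm.min() == 0.0 and mm.max() == 1.0)                  # True
```

Note how the outlier squeezes the min-max-scaled bulk of the data near 0, while the z-scores keep a fixed mean and spread; which behaviour you want depends on the application, as the quote says.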
What is the best book about generalized linear models for novices?
For a new practitioner, I like Gelman and Hill, Data Analysis Using Regression and Multilevel/Hierarchical Models. Ostensibly the book is about hierarchical generalized linear models, a more advanced topic than GLMs; the first section, though, is a wonderful practitioner's guide to GLMs. The book is light on theory, heavy on disciplined statistical practice, overflowing with case studies and practical R code, all told in a pleasant, friendly voice.
What is the best book about generalized linear models for novices?
I am a big fan of Agresti's Categorical Data Analysis. I have read Agresti's intro book but found it missing key explanations of how a generalized linear model is built and how it works. For example, you may not need to know how the binomial distribution and logit link work if you only want to fit a logistic regression; however, it is annoying when you have read the chapter, start to wonder about it, and can't find it in the book. The McCullagh and Nelder GLM book is hard to read: it contains everything you need to know but lacks the derivations of the key results. Luckily, Agresti's Categorical Data Analysis strikes a good balance.
What is the best book about generalized linear models for novices?
As a complete beginner myself, I found Foundations of Linear and Generalized Linear Models, by Alan Agresti, the celebrated author of Categorical Data Analysis, to be helpful. The language is fluid, though some exposure to linear algebra is assumed.
What is the best book about generalized linear models for novices?
I really liked Mixed Effects Models and Extensions in Ecology with R by Zuur et al. It's a follow-up to their older book Analysing Ecological Data (2007). They do a good job of motivating the models, along with plenty of visual examples to explain what GLMs look like. They also strike a good balance between theory, application, and discussion. Plus, they have all the code and datasets on their website, so you can immediately apply what you've learned.
12,209
Repeated measures ANOVA with lme/lmer in R for two within-subject factors
What you're fitting with aov is called a strip plot, and it's tricky to fit with lme because the subject:A and subject:B random effects are crossed. Your first attempt is equivalent to aov(Y ~ A*B + Error(subject), data=d), which doesn't include all the random effects; your second attempt is the right idea, but the syntax for crossed random effects using lme is very tricky. Using lme from the nlme package, the code would be lme(Y ~ A*B, random=list(subject=pdBlocked(list(~1, pdIdent(~A-1), pdIdent(~B-1)))), data=d) Using lmer from the lme4 package, the code would be something like lmer(Y ~ A*B + (1|subject) + (1|A:subject) + (1|B:subject), data=d) These threads from R-help may be helpful (and to give credit, that's where I got the nlme code from). http://www.biostat.wustl.edu/archives/html/s-news/2005-01/msg00091.html http://permalink.gmane.org/gmane.comp.lang.r.lme4.devel/3328 http://www.mail-archive.com/r-help@stat.math.ethz.ch/msg10843.html This last link refers to p.165 of Pinheiro/Bates; that may be helpful too. EDIT: Also note that in the data set you have, some of the variance components are negative, which is not allowed using random effects with lme, so the results differ. A data set with all positive variance components can be created using a seed of 8. The results then agree. See this answer for details. Also note that lme from nlme does not compute the denominator degrees of freedom correctly, so the F-statistics agree but not the p-values, and lmer from lme4 doesn't try to because it's very tricky in the presence of unbalanced crossed random effects, and may not even be a sensible thing to do. But that's more than I want to get into here.
12,210
Repeated measures ANOVA with lme/lmer in R for two within-subject factors
Your first attempt is the correct answer if that's all you're trying to do. lme() works out the between and within components; you don't need to specify them. The problem you're running into isn't because you don't know how to specify the model, it's because repeated measures ANOVA and mixed effects are not the same thing. Sometimes the results from the ANOVA and mixed effects model will match. This is especially the case when you aggregate your data like you would for an ANOVA and calculate both from that. But generally, when done correctly, while the conclusions may be similar the results are almost never the same. Your example data aren't like real repeated measures where you often have replications of each measure within S. When you do an ANOVA you typically aggregate across those replications to get an estimate of the effect for each subject. In mixed effects modelling you do no such thing. You work with the raw data. When you do that you'll find that the results are never the same between ANOVA and lme(). [as an aside, using lmer() (from the lme4 package) instead of lme() gives me SS and MS values that exactly match the ANOVA for effects in your example, it's just that the F's are different]
12,211
Relationship between Gram and covariance matrices
A Singular Value Decomposition (SVD) of $X$ expresses it as $$X = U D V^\prime$$ where $U$ is an $n\times r$ matrix whose columns are mutually orthonormal, $V$ is a $p\times r$ matrix whose columns are mutually orthonormal, and $D$ is an $r\times r$ diagonal matrix with positive values (the "singular values" of $X$) on the diagonal. Necessarily $r$--which is the rank of $X$--can be no greater than either $n$ or $p$. Using this we compute $$X^\prime X = (U D V^\prime)^\prime U D V^\prime = V D^\prime U^\prime U D V^\prime = V D^2 V^\prime$$ and $$ X X^\prime= U D V^\prime (U D V^\prime)^\prime= U D V^\prime V D^\prime U^\prime= U D^2 U^\prime.$$ Although we can recover $D^2$ by diagonalizing either of $X^\prime X$ or $X X^\prime$, the former gives no information about $U$ and the latter gives no information about $V$. However, $U$ and $V$ are completely independent of each other--starting with one of them, along with $D$, you can choose the other arbitrarily (subject to the orthonormality conditions) and construct a valid matrix $X$. Therefore $D^2$ contains all the information that is common to the matrices $X^\prime X$ and $X X^\prime$. There is a nice geometric interpretation that helps make this convincing. The SVD allows us to view any linear transformation $T_X$ (as represented by the matrix $X$) from $\mathbb{R}^p$ to $\mathbb{R}^n$ in terms of three easily understood linear transformations: $V$ is the matrix of a transformation $T_V:\mathbb{R}^r \to \mathbb{R}^p$ that is one-to-one (has no kernel) and isometric. That is, it rotates $\mathbb{R}^r$ into an $r$-dimensional subspace $T_V(\mathbb{R}^r)$ of a $p$-dimensional space. $U$ similarly is the matrix of a one-to-one, isometric transformation $T_U:\mathbb{R}^r\to \mathbb{R}^n$. $D$ positively rescales the $r$ coordinate axes in $\mathbb{R}^r$, corresponding to a linear transformation $T_D$ that distorts the unit sphere (used for reference) into an ellipsoid without rotating it. The transpose of $V$, $V^\prime$, corresponds to a linear transformation $T_{V^\prime}:\mathbb{R}^p\to\mathbb{R}^r$ that kills all vectors in $\mathbb{R}^p$ that are perpendicular to $T_V(\mathbb{R}^r)$. It otherwise rotates $T_V(\mathbb{R}^r)$ into $\mathbb{R}^r$. Equivalently, you can think of $T_{V^\prime}$ as "ignoring" any perpendicular directions and establishing an orthonormal coordinate system within $T_V(\mathbb{R}^r) \subset \mathbb{R}^p$. $T_D$ acts directly on that coordinate system, expanding by various amounts (as specified by the singular values) along the coordinate axes determined by $V$. $T_U$ then maps the result into $\mathbb{R}^n$. The linear transformation associated with $X^\prime X$ in effect acts on $T_V(\mathbb{R}^r)$ through two "round trips": $T_X$ expands the coordinates in the system determined by $V$ by $T_D$ and then $T_{X^\prime}$ does it all over again. Similarly, $X X^\prime$ does exactly the same thing to the $r$-dimensional subspace of $\mathbb{R}^n$ established by the $r$ orthogonal columns of $U$. Thus, the role of $V$ is to describe a frame in a subspace of $\mathbb{R}^p$ and the role of $U$ is to describe a frame in a subspace of $\mathbb{R}^n$. The matrix $X^\prime X$ gives us information about the frame in the first space and $X X^\prime$ tells us the frame in the second space, but those two frames don't have to have any relationship at all to one another.
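The algebra above is easy to check numerically. A minimal sketch with NumPy (the data and dimensions are illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))          # n = 6 rows, p = 4 columns, rank r = 4

# Thin SVD: X = U D V'
U, d, Vt = np.linalg.svd(X, full_matrices=False)
D2 = np.diag(d ** 2)

# X'X = V D^2 V' recovers V and D^2 but says nothing about U;
# X X' = U D^2 U' recovers U and D^2 but says nothing about V.
assert np.allclose(X.T @ X, Vt.T @ D2 @ Vt)
assert np.allclose(X @ X.T, U @ D2 @ U.T)
```

Both Gram matrices share the same nonzero eigenvalues $d_i^2$, which is exactly the "common information" $D^2$ described above.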
12,212
Jensen Shannon Divergence vs Kullback-Leibler Divergence?
I found a very mature answer on Quora and just put it here for people who look for it here: The Kullback-Leibler divergence has a few nice properties, one of them being that $KL[q;p]$ kind of abhors regions where $q(x)$ has non-null mass and $p(x)$ has null mass. This might look like a bug, but it's actually a feature in certain situations. If you're trying to find approximations for a complex (intractable) distribution $p(x)$ by a (tractable) approximate distribution $q(x)$, you want to be absolutely sure that any $x$ that would be very improbable to be drawn from $p(x)$ would also be very improbable to be drawn from $q(x)$. That KL has this property is easily shown: there's a $q(x)\log[q(x)/p(x)]$ in the integrand. When $q(x)$ is small but $p(x)$ is not, that's ok. But when $p(x)$ is small, this grows very rapidly if $q(x)$ isn't also small. So, if you're choosing $q(x)$ to minimize $KL[q;p]$, it's very improbable that $q(x)$ will assign a lot of mass on regions where $p(x)$ is near zero. The Jensen-Shannon divergence doesn't have this property. It is well behaved both when $p(x)$ and $q(x)$ are small. This means that it won't penalize as much a distribution $q(x)$ from which you can sample values that are impossible in $p(x)$.
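The contrast is easy to see on a toy discrete example (the distributions below are made up for illustration):

```python
import numpy as np

def kl(a, b):
    # discrete KL divergence KL[a; b] = sum_x a(x) log(a(x)/b(x))
    return float(np.sum(a * np.log(a / b)))

def js(a, b):
    # Jensen-Shannon: symmetrized KL against the mixture m = (a+b)/2
    m = 0.5 * (a + b)
    return 0.5 * kl(a, m) + 0.5 * kl(b, m)

p = np.array([0.98, 0.01, 0.01])   # target: almost all mass on outcome 0
q = np.array([0.40, 0.30, 0.30])   # approximation putting mass where p is tiny

# KL[q; p] is punished hard where q has mass and p almost none,
# while JS is symmetric and bounded by log 2.
assert kl(q, p) > kl(p, q)
assert abs(js(p, q) - js(q, p)) < 1e-12
assert js(p, q) < np.log(2)
```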
12,213
Jensen Shannon Divergence vs Kullback-Leibler Divergence?
I recently stumbled into a similar question. To answer why an asymmetric divergence can be more favourable than a symmetric divergence, consider a scenario where you want to quantify the quality of a proposal distribution used in importance sampling (IS). If you are unfamiliar with IS, the key idea here is that to design an efficient IS scheme, your proposal distribution should have heavier tails than the target distribution. Denote two distributions $H=\text{Normal}(0, 25)$ and $L=\text{Normal}(0, 1)$. Suppose you target $H$ with IS, using $L$ as the proposal distribution. To quantify the quality of your proposal distribution, you might compute the Jensen-Shannon (JS) divergence of $L,H$, and the Kullback-Leibler (KL) divergence of $L$ from $H$ and obtain some values. Both values should give you some sense of how good your proposal distribution $L$ is. Nothing to see here yet. However, consider reversing the setup, i.e., target $L$ with IS using $H$ as the proposal distribution. Here, the JS divergence would be the same due to its symmetric property, while KL of $H$ from $L$ would be much lower. In short, we expected using $H$ to target $L$ to be OK, and $L$ to target $H$ is not OK. KL divergence aligns with our expectation; $\text{KL}(H || L) > \text{KL}(L ||H)$. JS divergence doesn't. This asymmetric property aligns with our goal in that it can correctly, loosely speaking, account for the direction of discrepancy between two distributions. Another factor to consider is that sometimes it can be significantly more computationally challenging to compute JS divergence than KL divergence.
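The inequality $\text{KL}(H \| L) > \text{KL}(L \| H)$ can be checked with the closed form for zero-mean normals, $\text{KL}\big(N(0,\sigma_1^2) \,\|\, N(0,\sigma_2^2)\big) = \log(\sigma_2/\sigma_1) + \sigma_1^2/(2\sigma_2^2) - 1/2$. A quick sketch:

```python
import math

def kl_norm(s1, s2):
    # KL( N(0, s1^2) || N(0, s2^2) ), closed form for zero-mean normals
    return math.log(s2 / s1) + s1 ** 2 / (2 * s2 ** 2) - 0.5

kl_HL = kl_norm(5.0, 1.0)  # heavy-tailed H judged against light-tailed L  (approx. 10.39)
kl_LH = kl_norm(1.0, 5.0)  # light-tailed L judged against heavy-tailed H  (approx. 1.13)

# KL flags the dangerous direction: a light-tailed proposal for a heavy target.
assert kl_HL > kl_LH
```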
12,214
Jensen Shannon Divergence vs Kullback-Leibler Divergence?
KL divergence has a clear information-theoretic interpretation and is well known; but this is the first time I have heard that the symmetrization of KL divergence is called the JS divergence. The reason that JS divergence is not used so often is probably that it is less well known and does not offer must-have properties.
12,215
Multiple imputation and model selection
There are many things you could do to select variables from multiply imputed data, but not all yield appropriate estimates. See Wood et al (2008) Stat Med for a comparison of various possibilities. I have found the following two-step procedure useful in practice. Apply your preferred variable selection method independently to each of the $m$ imputed data sets. You will end up with $m$ different models. For each variable, count the number of times it appears in the model. Select those variables that appear in at least half of the $m$ models. Use the p-value of the Wald statistic or of the likelihood ratio test as calculated from the $m$ multiply-imputed data sets as the criterion for further stepwise model selection. The pre-selection step 1 is included to reduce the amount of computation. See https://stefvanbuuren.name/fimd/sec-stepwise.html (section 5.4.2) for a code example of the two-step method in R using mice(). In Stata, you can perform Step 2 (on all variables) with mim:stepwise.
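The counting in step 1 of the two-step procedure can be sketched in a few lines (variable names and the toy models are made up for illustration; the real workflow would run inside mice() in R):

```python
from collections import Counter

def preselect(selected_per_imputation):
    """Step 1: keep variables chosen in at least half of the m fitted models."""
    m = len(selected_per_imputation)
    counts = Counter(v for model in selected_per_imputation for v in set(model))
    return sorted(v for v, c in counts.items() if c >= m / 2)

# m = 5 imputations, each yielding its own selected model (toy example)
models = [["age", "bmi"], ["age", "smoke"], ["age", "bmi"],
          ["bmi"], ["age", "bmi", "smoke"]]
print(preselect(models))  # ['age', 'bmi'] -- 'smoke' appears in only 2 of 5
```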
12,216
Multiple imputation and model selection
It is straightforward: You can apply standard MI combining rules - but effects of variables which are not supported throughout imputed datasets will be less pronounced. For example, if a variable is not selected in a specific imputed dataset its estimate (incl. variance) is zero and this has to be reflected in the estimates used when using multiple imputation. You can consider bootstrapping to construct confidence intervals to incorporate model selection uncertainty, have a look at this recent publication which addresses all questions: http://www.sciencedirect.com/science/article/pii/S016794731300073X I would avoid using pragmatic approaches such as selecting a variable if it is selected in m/2 datasets or something similar, because inference is not clear and more complicated than it looks at first glance.
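The "standard MI combining rules" mentioned here are Rubin's rules; a minimal sketch (toy numbers), where a variable dropped in one imputed dataset simply contributes estimate 0 and variance 0:

```python
import statistics as st

def pool(estimates, variances):
    """Rubin's rules: combine m per-imputation estimates and variances."""
    m = len(estimates)
    qbar = st.mean(estimates)            # pooled point estimate
    w = st.mean(variances)               # within-imputation variance
    b = st.variance(estimates)           # between-imputation variance (n-1 denom.)
    t = w + (1 + 1 / m) * b              # total variance
    return qbar, t

est, var = pool([1.1, 0.9, 1.0], [0.04, 0.05, 0.04])
```

The between-imputation term $(1 + 1/m)B$ is what inflates the variance of an effect that is unstable across imputations, which is exactly why unsupported variables come out less pronounced.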
12,217
Multiple imputation and model selection
I was having the same problem. My choice was the so-called "multiple imputation lasso". Basically it combines all imputed datasets together and adopts the concept of group lasso: every candidate variable would generate m dummy variables. Each dummy variable corresponds to an imputed dataset. Then all the m dummy variables are grouped: you would either discard a candidate variable's m dummy variables in all imputed datasets or keep them in all imputed datasets. So the lasso regression is actually fit on all imputed datasets jointly. Check the paper: Chen, Q. & Wang, S. (2013). "Variable selection for multiply-imputed data with application to dioxin exposure study," Statistics in Medicine, 32:3646-59. And a relevant R program
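My reading of the stacked design behind this idea can be sketched as follows (toy dimensions; the paper's exact construction and penalty weights may differ):

```python
import numpy as np

# Toy setup: m = 3 imputed datasets, each with n = 4 rows and p = 2 candidate variables.
m, n, p = 3, 4, 2
rng = np.random.default_rng(1)
imputed = [rng.standard_normal((n, p)) for _ in range(m)]

# Stack the datasets row-wise and give every candidate variable m columns,
# one per imputation; grouping a variable's m columns lets a group lasso
# keep or drop that variable in all imputed datasets simultaneously.
Xstack = np.zeros((m * n, p * m))
for k, Xk in enumerate(imputed):
    for j in range(p):
        Xstack[k * n:(k + 1) * n, j * m + k] = Xk[:, j]

# One index group per candidate variable, each of size m.
groups = [[j * m + k for k in range(m)] for j in range(p)]
```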
12,218
Multiple imputation and model selection
I've been facing a similar problem -- I've got a dataset in which I knew from the start that I wanted to include all variables (I was interested in the coefficients more than the prediction), but I didn't know a priori what interactions should be specified. My approach was to write out a set of candidate models, perform multiple imputations, estimate the multiple models, and simply save and average the AICs from each model. The model specification with the lowest average AIC was selected. I thought about adding a correction wherein I penalize between-imputation variance in AIC. On reflection however, this seemed pointless. The approach seemed straightforward enough to me, but I invented it myself, and I'm no celebrated statistician. Before using it, you may wish to wait until people either correct me (which would be welcome!) or upvote this answer.
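The averaging step above is trivial to mechanize (model names and AIC values below are made up; in practice each AIC comes from fitting the candidate model to one imputed dataset):

```python
import statistics as st

def select_by_mean_aic(aic_table):
    """aic_table: {model_name: [AIC from each of the m imputations]}.
    Returns the model with the lowest imputation-averaged AIC."""
    means = {name: st.mean(aics) for name, aics in aic_table.items()}
    return min(means, key=means.get), means

best, means = select_by_mean_aic({
    "main_effects":     [210.2, 208.9, 211.0],
    "with_interaction": [205.5, 206.1, 204.8],
})
print(best)  # with_interaction
```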
Multiple imputation and model selection
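The average-the-AICs procedure described in this answer can be sketched in a few lines. This is a minimal, illustrative Python sketch (not a full multiple-imputation workflow): the toy "imputation" just fills the one missing value with random draws as a stand-in for proper multiple imputation, and the candidate models and helper names are invented for the example.

```python
import math
import random

random.seed(1)

def ols_residuals(x, y):
    """Residuals of a simple least-squares fit of y on x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
          / sum((xi - xbar) ** 2 for xi in x))
    b0 = ybar - b1 * xbar
    return [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

def aic(residuals, k):
    """Gaussian AIC up to an additive constant; k = number of parameters."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    return n * math.log(rss / n) + 2 * k

# Toy data: y depends on x, and one x value is missing.
x = [i / 10 for i in range(30)]
y = [2.0 * xi + random.gauss(0, 0.5) for xi in x]
x[7] = None

# Crude stand-in for multiple imputation: several plausible random fills.
observed = [v for v in x if v is not None]
mu, sd = sum(observed) / len(observed), 1.0
imputed_datasets = [
    ([v if v is not None else random.gauss(mu, sd) for v in x], y)
    for _ in range(5)
]

# Candidate models, each returning its AIC on one imputed dataset.
candidates = {
    "intercept-only": lambda xs, ys: aic([yi - sum(ys) / len(ys) for yi in ys], 1),
    "linear":         lambda xs, ys: aic(ols_residuals(xs, ys), 2),
}

# Average the AICs over imputations and pick the lowest, as described above.
mean_aic = {
    name: sum(f(xs, ys) for xs, ys in imputed_datasets) / len(imputed_datasets)
    for name, f in candidates.items()
}
best = min(mean_aic, key=mean_aic.get)
print(best)  # the linear model should win on this data
```

The same pattern extends directly to more candidates and to real imputation output (e.g. from an MI package): only the list of imputed datasets and the candidate fitting functions change.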
12,219
Support vector regression for multivariate time series prediction
In the context of support vector regression, the fact that your data is a time series is mainly relevant from a methodological standpoint -- for example, you can't do a k-fold cross validation, and you need to take precautions when running backtests/simulations.

Basically, support vector regression is a discriminative regression technique much like any other discriminative regression technique. You give it a set of input vectors and associated responses, and it fits a model to try and predict the response given a new input vector. Kernel SVR, on the other hand, applies one of many transformations to your data set prior to the learning step. This allows it to pick up nonlinear trends in the data set, unlike e.g. linear regression. A good kernel to start with would probably be the Gaussian RBF -- it will have a hyperparameter you can tune, so try out a couple of values. And then when you get a feeling for what's going on you can try out other kernels.

With a time series, an important step is determining what your "feature vector" ${\bf x}$ will be; each $x_i$ is called a "feature" and can be calculated from present or past data, and each $y_i$, the response, will be the future change over some time period of whatever you're trying to predict. Take a stock for example. You have prices over time. Maybe your features are a.) the 200MA-30MA spread and b.) 20-day volatility, so you calculate each ${\bf x_t}$ at each point in time, along with $y_t$, the (say) following week's return on that stock. Thus, your SVR learns how to predict the following week's return based on the present MA spread and 20-day vol. (This strategy won't work, so don't get too excited ;)).

If the papers you read were too difficult, you probably don't want to try to implement an SVM yourself, as it can be complicated. IIRC there is a "kernlab" package for R that has a Kernel SVM implementation with a number of kernels included, so that would provide a quick way to get up and running.
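The feature-construction step described above (MA spread plus 20-day volatility as ${\bf x_t}$, the following week's return as $y_t$) can be sketched as follows. The price series is simulated, and the window sizes simply mirror the example rather than any recommended values; the resulting X and y could then be handed to an SVR implementation such as kernlab's.

```python
import random
import statistics

random.seed(42)

# A simulated daily price series (a random walk stands in for real data).
prices = [100.0]
for _ in range(300):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

def moving_average(series, window, t):
    """Average of the `window` values ending at index t."""
    return sum(series[t - window + 1 : t + 1]) / window

horizon = 5  # predict the return over the following week (5 trading days)
X, y = [], []
for t in range(200, len(prices) - horizon):
    # Feature a: 200-day MA minus 30-day MA spread at time t.
    ma_spread = moving_average(prices, 200, t) - moving_average(prices, 30, t)
    # Feature b: volatility of the last 20 daily returns.
    returns_20d = [prices[i] / prices[i - 1] - 1 for i in range(t - 19, t + 1)]
    vol_20d = statistics.stdev(returns_20d)
    X.append([ma_spread, vol_20d])                  # feature vector x_t
    y.append(prices[t + horizon] / prices[t] - 1)   # response y_t
```

Note that each $y_t$ looks strictly into the future relative to its features, which is the property that keeps a backtest honest.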
12,220
Support vector regression for multivariate time series prediction
My personal answer to the question as asked is "yes". You may view it as a pro or a con that there are an infinite number of choices of features to describe the past. Try to pick features that correspond to how you might concisely describe to someone what the market has just done [e.g. "the price is at 1.4" tells you nothing if it is not related to some other number]. As for the target of the SVM, the simplest are the difference in prices and the ratio of prices for two consecutive days. As these correspond directly to the fate of a hypothetical trade, they seem good choices.

I have to pedantically disagree with the first statement by Jason: you can do k-fold cross-validation in situations like that described by raconteur, and it is useful (with a proviso I will explain). The reason it is statistically valid is that the instances of the target in this case have no intrinsic relationship: they are disjoint differences or ratios. If you choose instead to use data at higher resolution than the scale of the target, there would be reason for concern that correlated instances might appear in the training set and validation set, which would compromise the cross-validation (by contrast, when applying the SVM you will have no instances available whose targets overlap the one you are interested in).

The thing that does reduce the effectiveness of cross-validation is if the behavior of the market is changing over time. There are two possible ways to deal with this. The first is to incorporate time as a feature (I've not found this very useful, perhaps because the values of this feature in the future are all new). A well-motivated alternative is to use walk-forward validation (which means training your model on a sliding window of time and testing it on the period just after that window).

If behaviour is changing over time, the saying attributed to Niels Bohr, "Prediction is very difficult, especially about the future", is especially appropriate. There is some evidence in the literature that the behaviour of financial markets does change over time, generally becoming more efficient, which typically means that successful trading systems deteriorate in performance over time. Good luck!
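The walk-forward validation idea mentioned above can be sketched as an index generator: a fixed-size training window slides forward, and each test window is the period immediately after it. The window sizes here are arbitrary placeholders.

```python
def walk_forward_splits(n, train_size, test_size, step):
    """Yield (train_indices, test_indices) for a sliding window over n samples."""
    start = 0
    while start + train_size + test_size <= n:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        yield train, test
        start += step

# 100 time steps, train on 60, test on the next 10, slide by 10.
splits = list(walk_forward_splits(n=100, train_size=60, test_size=10, step=10))
print(len(splits))  # 4 windows, each test period strictly after its training period
```

Because every test index comes after every training index in its split, no future information leaks into training, unlike a naive k-fold split of a time series.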
12,221
Support vector regression for multivariate time series prediction
There's an example up on Quantum Financier for using an SVM to forecast financial series. It could easily be converted from a classification system (Long/Short) to a regression system.
12,222
Linear regression what does the F statistic, R squared and residual standard error tell us?
The best way to understand these terms is to do a regression calculation by hand. I wrote two closely related answers (here and here); however, they may not fully help you understand your particular case. But read through them nonetheless. Maybe they will also help you conceptualize these terms better.

In a regression (or ANOVA), we build a model based on a sample dataset which enables us to predict outcomes from a population of interest. To do so, the following three components are calculated in a simple linear regression, from which the other components can be calculated, e.g. the mean squares, the F-value, the $R^2$ (also the adjusted $R^2$), and the residual standard error ($RSE$):

total sums of squares ($SS_{total}$)
residual sums of squares ($SS_{residual}$)
model sums of squares ($SS_{model}$)

Each of these assesses how well the model describes the data and is the sum of the squared distances from the data points to the fitted model (illustrated as red lines in the plot below).

The $SS_{total}$ assesses how well the mean fits the data. Why the mean? Because the mean is the simplest model we can fit and hence serves as the model to which the least-squares regression line is compared. This plot using the cars dataset illustrates that:

The $SS_{residual}$ assesses how well the regression line fits the data.

The $SS_{model}$ compares how much better the regression line is compared to the mean (i.e. the difference between the $SS_{total}$ and the $SS_{residual}$).

To answer your questions, let's first calculate the terms you want to understand, starting with the model and output as a reference:

    # The model and output as reference
    m1 <- lm(dist ~ speed, data = cars)
    summary(m1)
    summary.aov(m1) # To get the sums of squares and mean squares

The sums of squares are the squared distances of the individual data points to the model:

    # Calculate sums of squares (total, residual and model)
    y <- cars$dist
    ybar <- mean(y)
    ss.total <- sum((y-ybar)^2)
    ss.total
    ss.residual <- sum((y-m1$fitted)^2)
    ss.residual
    ss.model <- ss.total-ss.residual
    ss.model

The mean squares are the sums of squares averaged by the degrees of freedom:

    # Calculate degrees of freedom (total, residual and model)
    n <- length(cars$speed)
    k <- length(m1$coef) # k = model parameters: b0, b1
    df.total <- n-1
    df.residual <- n-k
    df.model <- k-1

    # Calculate mean squares (note that these are just variances)
    ms.residual <- ss.residual/df.residual
    ms.residual
    ms.model <- ss.model/df.model
    ms.model

My answers to your questions:

Q1: This is thus actually the average distance of the observed values from the lm line?

The residual standard error ($RSE$) is the square root of the residual mean square ($MS_{residual}$):

    # Calculate residual standard error
    res.se <- sqrt(ms.residual)
    res.se

If you remember that the $SS_{residual}$ were the squared distances between the observed data points and the model (the regression line in the second plot above), and that $MS_{residual}$ is just the averaged $SS_{residual}$, the answer to your first question is yes: the $RSE$ represents the average distance of the observed data from the model. Intuitively, this also makes perfect sense, because if the distance is smaller, your model fit is also better.

Q2: Now I'm getting confused because if RSE tells us how far our observed points deviate from the regression line a low RSE is actually telling us "your model is fitting well based on the observed data points" --> thus how good our models fits, so what is the difference between R squared and RSE?

The $R^2$ is the ratio of the $SS_{model}$ and the $SS_{total}$:

    # R squared
    r.sq <- ss.model/ss.total
    r.sq

The $R^2$ expresses how much of the total variation in the data can be explained by the model (the regression line). Remember that the total variation was the variation in the data when we fitted the simplest model to the data, i.e. the mean. Compare the $SS_{total}$ plot with the $SS_{model}$ plot.

So to answer your second question, the difference between the $RSE$ and the $R^2$ is that the $RSE$ tells you something about the inaccuracy of the model (in this case the regression line) given the observed data. The $R^2$, on the other hand, tells you how much variation is explained by the model (i.e. the regression line) relative to the variation that was explained by the mean alone (i.e. the simplest model).

Q3: Is it true that we can have a F value indicating a strong relationship that is NON LINEAR so that our RSE is high and our R squared is low

The $F$-value, in turn, is calculated as the model mean square $MS_{model}$ (the signal) divided by the $MS_{residual}$ (the noise):

    # Calculate F-value
    F <- ms.model/ms.residual
    F

    # Calculate P-value
    p.F <- 1-pf(F, df.model, df.residual)
    p.F

In other words, the $F$-value expresses how much the model has improved (compared to the mean) given the inaccuracy of the model. Your third question is a bit difficult to understand, but I agree with the quote you provided.
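For readers without R at hand, the same by-hand calculation can be sketched in Python on simulated data standing in for the cars dataset (the data-generating coefficients here are arbitrary):

```python
import math
import random

random.seed(0)

# Simulated stand-in for the cars data: dist depends linearly on speed plus noise.
speed = [4 + 0.5 * i for i in range(50)]
dist = [-17.6 + 3.9 * s + random.gauss(0, 15) for s in speed]

n, k = len(speed), 2  # k = number of model parameters (b0, b1)

# Least-squares fit by the textbook formulas.
xbar, ybar = sum(speed) / n, sum(dist) / n
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(speed, dist))
      / sum((x - xbar) ** 2 for x in speed))
b0 = ybar - b1 * xbar
fitted = [b0 + b1 * x for x in speed]

# The three sums of squares.
ss_total = sum((y - ybar) ** 2 for y in dist)
ss_residual = sum((y - f) ** 2 for y, f in zip(dist, fitted))
ss_model = ss_total - ss_residual

# Mean squares, RSE, R squared and F, exactly as in the R code above.
ms_residual = ss_residual / (n - k)
ms_model = ss_model / (k - 1)
rse = math.sqrt(ms_residual)   # residual standard error
r_sq = ss_model / ss_total     # R squared
F = ms_model / ms_residual     # F statistic
```

Running this on the real cars data instead would reproduce the numbers reported by `summary(m1)`.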
12,223
Linear regression what does the F statistic, R squared and residual standard error tell us?
(2) You are understanding it correctly; you are just having a hard time with the concept. The $R^2$ value represents how well the model accounts for all of the data. It can only take on values between 0 and 1. It is the proportion of the variation in the dataset that the model can explain. The RSE is more a descriptor of how far the original data deviate from the model. So the $R^2$ says, "the model does this well at explaining the presented data," while the RSE says, "when mapped, we expected the data to be here, but here is where it actually was." They are very similar but are used to validate in different ways.
12,224
Linear regression what does the F statistic, R squared and residual standard error tell us?
Just to complement what Chris replied above: The F-statistic is the division of the model mean square and the residual mean square. Software like Stata, after fitting a regression model, also provides the p-value associated with the F-statistic. This allows you to test the null hypothesis that all of your model's slope coefficients are jointly zero. You could think of it as the "statistical significance of the model as a whole."
12,225
Linear regression what does the F statistic, R squared and residual standard error tell us?
As I point out in this other answer, $F$, $RSS$ and $R^2$ are all interrelated. Here's the relevant excerpt: The F-statistic between two models, the null model (intercept only) $m_0$ and the alternative model $m_1$ ($m_0$ is nested within $m_1$) is: $$F = \frac{\left( \frac{RSS_0-RSS_1}{p_1-p_0} \right)} {\left( \frac{RSS_1}{n-p_1} \right)} = \left( \frac{RSS_0-RSS_1}{p_1-p_0} \right) \left( \frac{n-p_1}{RSS_1} \right)$$ $R^2$ on the other hand, is defined as: $$ R^2 = 1-\frac{RSS_1}{RSS_0} $$ Rearranging $F$ we can see that: $$F = \left( \frac{RSS_0-RSS_1}{RSS_1} \right) \left( \frac{n-p_1}{p_1-p_0} \right) = \left( \frac{RSS_0}{RSS_1}-1 \right) \left( \frac{n-p_1}{p_1-p_0} \right) = \left( \frac{R^2}{1-R^2} \right) \left( \frac{n-p_1}{p_1-p_0} \right)$$
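The rearrangement can be checked numerically. Here is a small Python sketch on simulated data (the data-generating values are arbitrary), computing $F$ both directly from the residual sums of squares and via $R^2$:

```python
import random

random.seed(2)

# Toy regression: null model m0 (intercept only, p0 = 1) nested in m1 (intercept + slope, p1 = 2).
n, p0, p1 = 40, 1, 2
x = [i / 4 for i in range(n)]
y = [1.0 + 0.8 * xi + random.gauss(0, 1) for xi in x]

# RSS of the null model: residuals about the mean.
ybar = sum(y) / n
rss0 = sum((yi - ybar) ** 2 for yi in y)

# RSS of the alternative model: residuals about the least-squares line.
xbar = sum(x) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar
rss1 = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))

# F computed two ways: directly, and through R squared.
F_direct = ((rss0 - rss1) / (p1 - p0)) / (rss1 / (n - p1))
r_sq = 1 - rss1 / rss0
F_via_r2 = (r_sq / (1 - r_sq)) * (n - p1) / (p1 - p0)

print(abs(F_direct - F_via_r2) < 1e-9)  # True: the two expressions agree
```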
12,226
What is the difference between random variable and random sample?
A random variable, $X:\Omega \rightarrow \mathbb R$, is a function from the sample space to the real line. This is a deterministic formula that can be as simple as writing down the number a die lands on in the random experiment of tossing a die. The experiment is random, in the sense that we don't control many of the physical factors determining its outcome; however, as soon as the die lands, the random variable maps the outcome in the physical world to a number.

Other examples would include measuring the height of a sample of eighth graders, perhaps to infer population parameters (including the mean and variance). Each boy or girl would be the outcome of a random experiment, much like tossing a coin. Once a subject is selected, the actual mapping to a number in inches or centimeters is not subject to randomness, despite the name "random variable."

A group of such experiments would constitute a sample: "In statistics, a simple random sample is a subset of individuals (a sample) chosen from a larger set (a population)." This definition is intuitive, but leaves the term population implicit. An attempt at fixing this gap is made in this paper, pointing out that "the term 'population' as a noun should refer to the sample space, not the random variable, as is the case in many textbooks."

A random sample is a collection of $n$ independent and identically distributed (i.i.d.) random variables $X_1, X_2, X_3, \dots, X_n$, in which $X_i$ is the function $X(\cdot)$ applied to the outcome of the $i$-th experiment: $x_i = X_i(\omega)$. Although sampling without replacement doesn't fulfill the independence requirement, this point is overlooked when sampling from a large population in favor of computational expediency.

The $n$-tuples $x_1, x_2, x_3, \dots, x_n$ are particular realizations of the random variables, which in the case proposed in the question would be drawn from identically distributed $N(\mu,\sigma^2)$ random variables $X_i$. So in the OP the process of "drawing some samples" would result in individual realizations of this collection of random variables.

Random variables are the object of mathematical laws, such as the LLN or the CLT. The distribution of the random variable will dictate the feasibility of induction from random samples. For example, any given realization will always have a mean and a standard deviation as an $n$-tuple of real numbers, yet the generating random variables may not have finite moments (e.g. Pareto), compromising statistical inference about the population characteristics.
What is the difference between random variable and random sample?
A random variable, $X:\Omega \rightarrow \mathbb R$, is a function from the sample space to the real line. This is a deterministic formula that can be as simple as writing down the number a die lands
What is the difference between random variable and random sample? A random variable, $X:\Omega \rightarrow \mathbb R$, is a function from the sample space to the real line. This is a deterministic formula that can be as simple as writing down the number a die lands on in the random experiment of tossing a die. The experiment is random, in the way that we don't control many of the physical factors determining its outcome; however, as soon as the die lands the random variable maps the outcome in the physical world to a number. Other examples would include measuring the height of a sample of eight graders, perhaps to infer the population parameters (including mean and variance). Each boy or girl would be the outcome of a random experiment, pretty much like tossing a coin. Once a subject is selected, the actual mapping to a number in inches or centimeters is not subject to randomness, despite its name of "random variable." A group of such experiments would constitute a sample: "In statistics, a simple random sample is a subset of individuals (a sample) chosen from a larger set (a population)." This definition is intuitive, but leaves the term population implicit. An attempt at fixing this gap is made in this paper, pointing out that 'the term “population” as a noun should refer to the sample space, not the random variable as is the case in many textbooks." A random sample is a collection of $n$ independent and identically distributed (i.i.d.) random variables $X_1, X_2, X_3,\dots, X_n.$ in which ${\displaystyle X_{i}}$ is the function $X(\cdot)$ applied to the outcome of the $i$-th experiment: ${\displaystyle x_{i}=X_{i}(\omega )}.$ Although sampling without replacement doesn't fulfill the independence requirement, this point is overlooked when sampling from a large population in favor of computational expediency. 
The $n$-tuple $x_1,x_2,x_3,\dots,x_n$ is a particular realization of the random variables, which in the case proposed in the question would be drawn from identically distributed random variables $X_i \sim N(\mu,\sigma^2)$. So in the OP the process of "drawing some samples" would result in individual realizations of this collection of random variables. Random variables are the object of mathematical laws, such as the LLN or the CLT. The distribution of the random variable will dictate the feasibility of induction from random samples. For example, any given realization will always have a mean and a standard deviation as an $n$-tuple of real numbers, yet the generating random variables may not have finite moments, e.g. Pareto, compromising statistical inference about the population characteristics.
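As a quick illustration of the distinction, the sketch below (the parameter values and sample size are made up) draws one realization of a random sample of i.i.d. $N(\mu,\sigma^2)$ variables; the sample mean and standard deviation it prints are themselves realizations of random variables computed from the sample:

```python
import random
import statistics

# Hypothetical population parameters for N(mu, sigma^2).
mu, sigma, n = 10.0, 2.0, 1000

random.seed(42)  # fix the seed so the realization is reproducible

# Each call to random.gauss is one realization x_i of X_i ~ N(mu, sigma^2);
# the list is one realization of the random sample (X_1, ..., X_n).
sample = [random.gauss(mu, sigma) for _ in range(n)]

# The sample mean and standard deviation are realizations of statistics
# (functions of the random sample), close to mu and sigma for large n.
print(statistics.mean(sample))
print(statistics.stdev(sample))
```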
12,227
What is the difference between random variable and random sample?
In the OP's example, each element $X_i$ of the random sample is an observation of the same random variable $X$. So a random sample is a collection of observations of a random variable, and a random variable is a function that maps the sample space to the real numbers.
12,228
Linear regression, conditional expectations and expected values
In the probability model underlying linear regression, X and Y are random variables. if so, as an example, if Y = obesity and X = age, if we take the conditional expectation E(Y|X=35), meaning, what's the expected value of being obese if the individual is 35 across the sample, would we just take the average (arithmetic mean) of y for those observations where X=35? That's right. In general, you cannot expect to have enough data at each specific value of X, and it may even be impossible if X can take a continuous range of values. But conceptually, this is correct. yet doesn't the expected value entail that we must multiply this by the probability of occurring? This is the difference between the unconditional expectation $E[Y]$ and the conditional expectation $E[Y \mid X = x]$. The relationship between them is $$ E[Y] = \sum_x E[Y \mid X = x] Pr[X = x] $$ which is the law of total expectation. but how in that sense do we find the probability of the X-value occurring if it represents something like age? Generally you don't in linear regression. Since we are attempting to determine $E[Y \mid X]$, we don't need to know $Pr[X = x]$. If we don't assume the independent variables are themselves random variables, since we don't observe the probability, what do we assume they are? just fixed values or something? We do assume that Y is a random variable. One way to think about linear regression is as a probability model for $Y$: $$ Y \sim X \beta + N(0, \sigma) $$ which says that, once you know the value of X, the random variation in Y is confined to the summand $N(0, \sigma)$.
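Both points above, estimating $E[Y \mid X = x]$ by the group average and recovering $E[Y]$ via the law of total expectation, can be checked on a toy data set (all the numbers below are invented for illustration):

```python
from collections import defaultdict

# Toy (age, obese) pairs; obese is 0/1, so E[Y | X = x] is the
# fraction of obese individuals at each age.
data = [(35, 1), (35, 0), (35, 1), (40, 0), (40, 1), (50, 0)]

# E[Y | X = x]: arithmetic mean of y over observations with that x.
groups = defaultdict(list)
for x, y in data:
    groups[x].append(y)
cond_mean = {x: sum(ys) / len(ys) for x, ys in groups.items()}

# Law of total expectation: E[Y] = sum_x E[Y | X = x] * Pr[X = x],
# with Pr[X = x] estimated by the empirical frequency of x.
n = len(data)
total = sum(cond_mean[x] * len(groups[x]) / n for x in groups)
overall = sum(y for _, y in data) / n  # plain average of y

print(cond_mean[35], total, overall)  # total equals overall
```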
12,229
Linear regression, conditional expectations and expected values
There will be a LOT of answers to this question, but I still want to add one since you made some interesting points. For simplicity I only consider the simple linear model. It is my understanding that the linear regression model is predicted via a conditional expectation E(Y|X)=b+Xb+e The fundamental equation of a simple linear regression analysis is: $$\mathbb E(Y\,|\,X) = \beta_0 +\beta_1X,$$ The meaning of this equation is that the average value of $Y$ is linear in the values of $X$. One can also notice that the expected value is linear in the parameters $\beta_0$ and $\beta_1$, which is why the model is called linear. This fundamental equation can be rewritten as: $$Y = \beta_0+\beta_1X+\epsilon,$$ where $\epsilon$ is a random variable with mean zero: $\mathbb E(\epsilon) = 0$. Do we assume that both X and Y are Random variables with some unknown probability distribution? ... If we don't assume the independent variables are themselves random The independent variable $X$ can be random or fixed. The dependent variable $Y$ is ALWAYS random. Usually one assumes that $\{X_1,...,X_n\}$ are fixed numbers. This is because regression analysis was developed and is widely applied in the context of designed experiments, where the $X$ values are fixed in advance. The formulas for the least-squares estimates of $\beta_0$ and $\beta_1$ are the same even if the $X$'s are assumed random, but the distribution of these estimates will generally not be the same as in the situation with fixed $X$'s. if we take the conditional expectation E(Y|X=35) ... would we just take the average (arithmetic mean) of y for those observations where X=35? 
In the simple linear model you can build an estimate $\hat\varphi(x)$ of $\mathbb E(Y|X = x)$ based on the estimates $\hat \beta_0$ and $\hat \beta_1$, namely: $$\hat\varphi(x) = \hat\beta_0+\hat\beta_1x$$ The conditional-mean least-squares estimator has an expression equal to the one you described if your model treats the different values of $X$ as levels of a single factor. Such models are also known as one-way ANOVA, which is a particular case of a (not simple) linear model.
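A minimal sketch of the closed-form least-squares estimates and the fitted conditional mean $\hat\varphi(x)$, using made-up data:

```python
# Made-up (x, y) observations for a simple linear model.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

# Standard closed-form least-squares estimates:
# b1 = Sxy / Sxx,  b0 = ybar - b1 * xbar
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
      / sum((x - xbar) ** 2 for x in xs))
b0 = ybar - b1 * xbar

def phi_hat(x):
    """Estimate of E(Y | X = x) from the fitted line."""
    return b0 + b1 * x

# The fitted line passes through (xbar, ybar), so phi_hat(xbar) == ybar.
print(b1, b0, phi_hat(3.0))
```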
12,230
How many times must I roll a die to confidently assess its fairness?
TL;DR: if $p$ = 1/6 and you want to know how large $n$ needs to be to be 98% sure the die is fair (to within 2%), $n$ needs to be at least $n$ ≥ 766. Let $n$ be the number of rolls and $X$ the number of rolls that land on some specified side. Then $X$ follows a Binomial$(n,p)$ distribution, where $p$ is the probability of getting that specified side. By the central limit theorem, we know that $$\sqrt{n} (X/n - p) \to N(0,p(1-p)),$$ since $X/n$ is the sample mean of $n$ Bernoulli$(p)$ random variables. Hence for large $n$, confidence intervals for $p$ can be constructed as $$\frac{X}{n} \pm Z \sqrt{\frac{p(1-p)}{n}}$$ Since $p$ is unknown, we can replace it with the sample average $\hat{p} = X/n$, and by various convergence theorems, we know the resulting confidence interval will be asymptotically valid. So we get confidence intervals of the form $$\hat{p} \pm Z \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$$ with $\hat{p} = X/n$. I'm going to assume you know what $Z$-scores are. For example, if you want a 95% confidence interval, you take $Z=1.96$. So for a given confidence level $\alpha$ we have $$\hat{p} \pm Z_\alpha \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$$ Now let's say you want this confidence interval to have length less than $C_\alpha$, and want to know how big a sample is needed to make this the case. Well, this is equivalent to asking what $n_\alpha$ satisfies $$Z_\alpha \sqrt{\frac{\hat{p}(1-\hat{p})}{n_\alpha}} \leq \frac{C_\alpha}{2},$$ which is solved to obtain $$n_\alpha \geq \left(\frac{2 Z_\alpha}{C_\alpha}\right)^2 \hat{p}(1-\hat{p})$$ So plug in your values for $Z_\alpha$, $C_\alpha$, and the estimated $\hat{p}$ to obtain an estimate for $n_\alpha$. Note that since $p$ is unknown this is only an estimate, but asymptotically (as $n$ gets larger) it should be accurate.
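The final formula can be wrapped in a small helper; the function name and the example inputs below (95% confidence, so $Z = 1.96$, an interval of total length 0.05, and $\hat p = 1/6$ for a nominally fair die) are my own choices for illustration, not taken from the answer:

```python
import math

def sample_size(z, ci_length, p_hat):
    """Smallest n with  n >= (2*z / C)^2 * p_hat * (1 - p_hat),
    i.e. the sample size giving a CI of total length <= ci_length."""
    return math.ceil((2 * z / ci_length) ** 2 * p_hat * (1 - p_hat))

# 95% confidence, interval length 0.05, p_hat = 1/6.
print(sample_size(1.96, 0.05, 1 / 6))
```

Tightening the interval or raising the confidence level (larger $Z_\alpha$) both increase the required $n$ quadratically in $2Z_\alpha/C_\alpha$.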
12,231
Advanced regression modeling examples
Regression Modeling Strategies and ISLR, which have already been mentioned by others, are two very good suggestions. I have a few others that you might want to consider. Applied Predictive Modeling by Kuhn and Johnson contains a number of good case studies and is pretty hands-on. Practical Data Science with R treats practical (regression) modeling in the context of its applications, mostly as predictive models in a business situation. Generalized Additive Models: An Introduction with R by Simon Wood is a good treatment of generalized additive models and how you fit them using his mgcv package for R. It does contain some nontrivial practical examples. The use of GAMs is an alternative to figuring out the "correct" transformation, as this is done in a data-adaptive way via a spline expansion and penalized maximum-likelihood estimation. However, there are still other choices that need to be made, e.g. the choice of link function. The mboost package for R also fits GAMs, but using a different approach via boosting. I recommend the tutorial for the package (one of the vignettes). I will also mention Empirical Model Discovery and Theory Evaluation by Hendry and Doornik, though I have not yet read this book myself. It had been recommended to me.
12,232
Advanced regression modeling examples
Some of the best course material that you can find on advanced, multiple, complex (including nonlinear) regression is based on the book Regression Modeling Strategies by Frank E. Harrell Jr. The book is being discussed in the comments, but not this material, which is itself a great resource.
12,233
Advanced regression modeling examples
I would recommend the book Mostly Harmless Econometrics by Joshua D. Angrist and Jörn-Steffen Pischke. This is the most real-world, salt-of-the-earth text I own, and it is super cheap, around $26.00 new. The book is written for the graduate statistician/economist, so it is plenty advanced. Now this book is not exactly what you're asking for, in the sense that it doesn't focus on "complex, multiple non-linear relationships" as much as core fundamentals like endogeneity, interpretation, and clever regression design. But I am offering this book to try to make a point, which is that when it comes to real-world application of regression analysis, the most challenging issues generally do not have to do with the fact that our models aren't complex enough...believe me, we are plenty good at drumming up very complex models! Rather, the biggest issues are things like:
- Endogeneity
- Not having all the data we need
- Having too much data...and it's all a mess!
- Too many people cannot interpret their own models correctly (a problem that becomes more prevalent as we make models more complex)
A firm understanding of GMM, non-linear filters and non-parametric regression pretty much covers all the topics you have listed and can be learned as you go along. However, with real-world data, these frameworks have the potential to be needlessly complex, often harmfully so. All too often it's the ability to be cleverly simple, rather than completely generalized and highly sophisticated, that benefits you most with real-world analysis. This book will help you with the former.
12,234
Advanced regression modeling examples
You can refer to Introduction to Statistical Learning with R (ISLR); the book discusses splines and polynomial regression in detail, with cases.
12,235
Advanced regression modeling examples
I'm not sure what the objective of your question is. I can recommend Greene's Econometric Analysis text. It has a ton of references to papers inside; pretty much every example in the book references a published paper. To give you a flavor, look at Example 7.6, "Interaction Effects in a Loglinear Model for Income", on p. 195. It refers to a paper and the data set: Regina T. Riphahn, Achim Wambach, and Andreas Million, "Incentive Effects in the Demand for Health Care: A Bivariate Panel Count Data Estimation", Journal of Applied Econometrics, Vol. 18, No. 4, 2003, pp. 387-405. The example is about the use of loglinear models and interaction effects. You can read the whole paper, or this textbook's description of it. This is not a made-up use case; it's real published research. This is how people actually use statistical methods in economics research. As I wrote, the book is peppered with use cases like this on the usage of advanced statistical methods.
12,236
Advanced regression modeling examples
Have you looked into some of the Financial Time Series Analysis courses/books that Ruey Tsay (UChicago) writes? http://faculty.chicagobooth.edu/ruey.tsay/teaching/ Ruey Tsay's classes and textbook provide multiple real-world examples in finance of complex regressions of the type created for use in financial markets. Chapter 1 begins with multifactor regression models and expands to seasonal autoregressive time series models by chapter 5 or 6.
12,237
Overall rank from multiple ranked lists
I am not sure why you were looking at correlations and similar measures. There doesn't seem to be anything to correlate. Instead, there are a number of options, none really better than the others, but depending on what you want:
- Take the average rank and then rank the averages (but this treats the data as interval).
- Take the median rank and then rank the medians (but this may result in ties).
- Take the number of 1st-place votes each item got, and rank the items based on this.
- Take the number of last-place votes and rank them (inversely, obviously) based on that.
- Create some weighted combination of ranks, depending on what you think reasonable.
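The first and third options can be sketched on toy data (the judges, items, and ranks below are all invented):

```python
import statistics

# Three hypothetical judges each rank the same four items (1 = best).
rankings = {
    "judge1": {"a": 1, "b": 2, "c": 3, "d": 4},
    "judge2": {"a": 2, "b": 1, "c": 4, "d": 3},
    "judge3": {"a": 1, "b": 3, "c": 2, "d": 4},
}
items = ["a", "b", "c", "d"]

# Option 1: average the ranks, then rank the averages.
avg = {i: statistics.mean(r[i] for r in rankings.values()) for i in items}
by_avg = sorted(items, key=lambda i: avg[i])

# Option 3: count first-place votes and rank by that count (descending).
firsts = {i: sum(1 for r in rankings.values() if r[i] == 1) for i in items}
by_firsts = sorted(items, key=lambda i: -firsts[i])

print(by_avg, by_firsts)
```

Note the two options can disagree in general: an item with many first-place votes may still have a poor average rank if other judges rank it last.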
12,238
Overall rank from multiple ranked lists
As others have pointed out, there are a lot of options you might pursue. The method I recommend is based on average ranks, i.e., the first proposal of Peter. In this case, the statistical significance of the final ranking can be examined by a two-step statistical test. This is a non-parametric procedure consisting of the Friedman test with a corresponding post-hoc test, the Nemenyi test. Both of them are based on average ranks. The purpose of the Friedman test is to reject the null hypothesis and conclude that there are some differences between the items. If so, we proceed with the Nemenyi test to find out which items actually differ. (We don't start directly with the post-hoc test, in order to avoid significance found by chance.) More details, such as the critical values for both of these tests, can be found in the paper by Demšar.
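The Friedman statistic itself is easy to compute from the rank table. The sketch below assumes complete rankings with no ties (the data are invented); in practice you would compare the statistic to a $\chi^2$ critical value with $k-1$ degrees of freedom, or use a library routine such as SciPy's friedmanchisquare:

```python
def friedman_statistic(ranks):
    """Friedman chi-square from a table of ranks: ranks[block][item],
    where each row is a permutation of 1..k (complete rankings, no ties)."""
    n = len(ranks)        # number of blocks (ranked lists)
    k = len(ranks[0])     # number of items being ranked
    # Average rank of each item across the blocks.
    avg = [sum(row[j] for row in ranks) / n for j in range(k)]
    # chi2 = 12N / (k(k+1)) * ( sum_j r_j^2 - k(k+1)^2 / 4 )
    return 12 * n / (k * (k + 1)) * (
        sum(r * r for r in avg) - k * (k + 1) ** 2 / 4
    )

# Three hypothetical rankers, four items; rows are ranks per ranker.
ranks = [
    [1, 2, 3, 4],
    [1, 3, 2, 4],
    [2, 1, 3, 4],
]
print(friedman_statistic(ranks))
```

With full agreement between rankers the statistic reaches its maximum; values above the $\chi^2_{k-1}$ critical value (7.815 for $k=4$ at the 5% level) would lead to rejecting the null and moving on to the Nemenyi post-hoc test.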
12,239
Overall rank from multiple ranked lists
I (well, Google) found a paper that benchmarks methods for combining ranked lists: Li, X., Wang, X. and Xiao, G., 2019. A comparative study of rank aggregation methods for partial and top ranked lists in genomic applications. Briefings in bioinformatics, 20(1), pp.178-189. https://doi.org/10.1093/bib/bbx101 They use two R packages: TopKLists: https://cran.r-project.org/web/packages/TopKLists/index.html RobustRankAggreg: https://cran.r-project.org/web/packages/RobustRankAggreg/index.html
12,240
Overall rank from multiple ranked lists
Use Tau-x (where the "x" refers to "eXtended" Tau-b). Tau-x is the correlation equivalent of the Kemeny-Snell distance metric -- proven to be the unique distance metric between lists of ranked items that satisfies all the requirements of a distance metric. See chapter 2 of "Mathematical Models in the Social Sciences" by Kemeny and Snell, and also "A New Rank Correlation Coefficient with Application to the Consensus Ranking Problem," Edward Emond, David Mason, Journal of Multi-Criteria Decision Analysis, 11:17-28 (2002).
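A small sketch of tau-x, following my recollection of Emond and Mason's score-matrix formulation (verify against the paper before relying on it); the example rankings are made up:

```python
import numpy as np

def score_matrix(ranking):
    """Score matrix a la Emond & Mason: a[i, j] = 1 if item i is ranked
    ahead of or tied with item j, -1 otherwise, 0 on the diagonal.
    ranking[i] is the rank of item i (1 = best; ties share a rank)."""
    r = np.asarray(ranking)
    a = np.where(r[:, None] <= r[None, :], 1, -1)
    np.fill_diagonal(a, 0)
    return a

def tau_x(r1, r2):
    """Extended rank correlation (tau-x) between two rankings."""
    n = len(r1)
    return (score_matrix(r1) * score_matrix(r2)).sum() / (n * (n - 1))

# Two hypothetical rankings of 4 items (1 = best):
print(tau_x([1, 2, 3, 4], [1, 2, 3, 4]))  # identical -> 1.0
print(tau_x([1, 2, 3, 4], [4, 3, 2, 1]))  # reversed  -> -1.0
```

For complete rankings without ties this agrees with Kendall's tau; the extension matters when rankings contain ties.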
12,241
As a reviewer, can I justify requesting data and code be made available even if the journal does not?
As far as getting data as a reviewer goes, you're entitled to it if you need it to complete your review properly. More reviewers should be asking for data and assessing it. Lots of journals have policies that they may require the data and analysis code for review purposes. Availability at the time of publication isn't clear to me. It seems that you're saying that you want to force the issue that the data be made publicly available as a condition of publication. That's a bad idea if it's not journal policy already. You're making publication an unfair moving target: they submitted expecting that not to be a requirement, and neither you nor the editor ought to be changing the game.

Unbeknownst to many publicly funded researchers, they are required to make their data publicly available. For example, most NIH grants have clauses where the researcher must be forthcoming with their data. Most government granting agencies have data sharing clauses that force the researcher to share what they find (perhaps "force" is a bit strong, given that it's very hard to lose a grant over that... perhaps lose renewal, though). The public paid for the data, therefore the public is entitled to it---in the case of human research, entitled to it anonymized.

Some of the most expensive and sensitive data to collect, human fMRI data, is also some of the most commonly made publicly available. Not just PLoS, but major journals of the field require the submission of the data and maintain a publicly available data bank. I think this says a lot to people who object for reasons of cost (it's very expensive) and privacy (it's human data from small studies, sometimes from unique clinical populations, that could be very sensitive). Those are reasons that make that data more valuable to the public. Researchers who withhold such data are doing a disservice to the people who bought it (everyone), and need a lesson in what their responsibilities are outside of their little lab and publication competition.
If the research was privately funded, genuinely privately funded, then best of luck.
12,242
As a reviewer, can I justify requesting data and code be made available even if the journal does not?
Addressing the two situations separately:

As a reviewer: Yes, I think you'd have grounds to ask to see the data or the code. But if I were you, I'd prepare to see things like pared-down code, or a subsample of the data. People implement future research not being reported in this paper in their code all the time, and you've no entitlement to said code. Since I do mostly biomedical research, I'd also be prepared to have to deal with some fairly restrictive data use agreements.

In the journal itself: No. If a researcher wants to reproduce my results, they can approach me themselves to ask for code - that's why we have corresponding authors. For data, absolutely not, under no circumstances. My data is governed by IRB and confidentiality agreements - it's not just going to be made public. If I want a public-ish data set, I might simulate a dataset with similar properties (e.g. the "Faux-Mesa" network data available in one of the network packages for R), but as a reviewer, you've got no call to force that. If it's a journal-wide requirement, then the authors knew their data/code would be public when submitting it, but if it's not, then no. Your role is to evaluate the quality of the paper itself (hence my being alright with it for the purposes of the review), not to use your ability to contribute to the acceptance/rejection of the paper to push what is essentially a philosophical/political point outside the scope of the journal. At best, I'd put "I would strongly urge the authors to make their code and data available, where possible" in your comments, but I wouldn't phrase it any stronger than that, and I wouldn't put it in the formal list of "Things I think need fixing before this sees the light of day".
12,243
As a reviewer, can I justify requesting data and code be made available even if the journal does not?
As John says, availability of data to reviewers should be a no-brainer; careful review should include replicating the analysis and as such necessitates access to the data. With regards to public availability of the data following publication, I'd say that battle should be fought with the journal generally rather than with regards to a specific submission. On a more general note, funding agencies and IRBs are becoming increasingly aware that data sharing is both a scientifically and an ethically necessary component of research. By increasing the availability of the data for re-analysis that could yield new results or correct erroneous reports, data sharing increases the potential benefits of research, thereby modifying the cost/benefit tradeoff to the advantage of the participants of the research. Certainly it is necessary to inform participants of the possibility that their data will be shared, and it is also necessary to set up safeguards to prevent increased risk of identification of participants, but these can be achieved in most circumstances. In my own research, I assure participants (and my IRB) that (1) data will be stored in a strong encrypted format (updated as decryption technology advances), (2) data will be shared with qualified researchers upon request, but only if they agree (3) to similarly store the data in a strong encrypted format (updated as decryption technology advances), (4) to refrain from sharing the data (instead referring requests to me), and (5) to refrain from connecting the data with data from any other sources unless (6) the data connection is explicitly permitted by an IRB, who would determine whether the connection would unacceptably (relative to the potential benefits of the project) increase the risk of identifiability.
12,244
As a reviewer, can I justify requesting data and code be made available even if the journal does not?
I don't have any experience with this, but it seems to me that you might be able to insist on #1 as a part of your own due diligence in reviewing their results. I don't see how you can insist on #2, though.
12,245
What is the essential difference between a neural network and nonlinear regression?
In theory, yes. In practice, things are more subtle. First of all, let's clear the field from a doubt raised in the comments: neural networks can handle multiple outputs in a seamless fashion, so it doesn't really matter whether we consider multiple regression or not (see The Elements of Statistical Learning, paragraph 11.4). Having said that, a neural network of fixed architecture and loss function would indeed just be a parametric nonlinear regression model. It would then be even less flexible than nonparametric models such as Gaussian Processes. To be precise, a single hidden layer neural network with a sigmoid or tanh activation function would be less flexible than a Gaussian Process: see http://mlss.tuebingen.mpg.de/2015/slides/ghahramani/gp-neural-nets15.pdf. For deep networks this is not true, but it becomes true again when you consider Deep Gaussian Processes.

So, why are Deep Neural Networks such a big deal? For very good reasons:

- They allow fitting models of a complexity that you wouldn't even begin to dream of when you fit Nonlinear Least Squares models with the Levenberg-Marquardt algorithm. See for example https://arxiv.org/pdf/1611.05431.pdf, https://arxiv.org/pdf/1706.02677.pdf and https://arxiv.org/pdf/1805.00932.pdf, where the number of parameters $p$ goes from 25 to 829 million. Of course DNNs are overparametrized, non-identifiable, etc., so the number of parameters is very different from the "degrees of freedom" of the model (see https://arxiv.org/abs/1804.08838 for some intuition). Still, it's undeniably amazing that models with $N \ll p$ ($N=$ sample size) are able to generalize so well.
- They scale to huge data sets. A vanilla Gaussian Process is a very flexible model, but inference has an $O(N^3)$ cost, which is completely unacceptable for data sets as big as ImageNet or bigger, such as Open Images V4. There are approximations to GP inference which scale as well as NNs, but I don't know why they don't enjoy the same fame (well, I have my ideas about that, but let's not digress).
- For some tasks, they're impressively accurate, much better than many other statistical learning models. Try to match ResNeXt accuracy on ImageNet with a 65536-input kernel SVM, or with a random forest for classification. Good luck with that.

However, the real difference between theory (all neural networks are parametric nonlinear regression or classification models) and practice, in my opinion, is that in practice nothing about a deep neural network is really fixed in advance, so you end up fitting a model from a much bigger class than you would expect. In real-world applications, none of these aspects are really fixed:

- architecture (suppose I do sequence modeling: shall I use an RNN? A dilated CNN? An attention-based model?)
- details of the architecture (how many layers? how many units in layer 1, how many in layer 2, which activation function(s), etc.)
- how I preprocess the data (standardization? min-max normalization? RobustScaler?)
- kind of regularization ($l_1$? $l_2$? batch norm? Before or after the ReLU? Dropout? Between which layers?)
- optimizer (SGD? Path-SGD? Entropy-SGD? Adam? etc.)
- other hyperparameters such as the learning rate, early stopping, etc.
- even the loss function is often not fixed in advance! We use NNs for mostly two applications (regression and classification), but people use a swath of different loss functions.

Look how many choices are made even in a relatively simple case where there is a strong seasonal signal and the number of features is small, as far as DNNs go: https://stackoverflow.com/questions/48929272/non-linear-multivariate-time-series-response-prediction-using-rnn Thus in practice, even though ideally fitting a DNN would mean just fitting a model of the type $y=f(\mathbf{x}\vert\boldsymbol{\theta})+\epsilon$, where $f$ has a certain hierarchical structure, in practice very little (if anything at all) about the function and the fitting method is defined in advance, and thus the model is much more flexible than a "classic" parametric nonlinear model.
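To make the "fixed architecture = parametric nonlinear regression" point concrete, here's a sketch that fits a one-hidden-layer tanh network as an ordinary nonlinear least squares problem (the data, hidden width, and initialization are all made up for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)  # toy regression target

H = 5  # hidden units; the "architecture" is fixed in advance

def unpack(theta):
    w1, b1 = theta[:H], theta[H:2 * H]
    w2, b2 = theta[2 * H:3 * H], theta[3 * H]
    return w1, b1, w2, b2

def residuals(theta):
    # f(x | theta) = w2 . tanh(w1 * x + b1) + b2 -- a parametric nonlinear model
    w1, b1, w2, b2 = unpack(theta)
    hidden = np.tanh(np.outer(x, w1) + b1)  # shape (200, H)
    return hidden @ w2 + b2 - y

theta0 = 0.5 * rng.standard_normal(3 * H + 1)
fit = least_squares(residuals, theta0, method="lm")  # Levenberg-Marquardt
```

Once the architecture and loss are frozen like this, nothing distinguishes the network from any other $y=f(\mathbf{x}\vert\boldsymbol{\theta})+\epsilon$ model; the flexibility discussed above comes from everything that is *not* frozen in real deep learning practice.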
12,246
How are weights updated in the batch learning method in neural networks?
Using average or sum are equivalent, in the sense that there exist pairs of learning rates for which they produce the same update. To confirm this, first recall the update rule: $$\Delta w_{ij} = -\alpha \frac{\partial E}{\partial w_{ij}}$$ Then, let $\mu_E$ be the average error for a dataset of size $n$ over an epoch. The sum of error is then $n\mu_E$, and because $n$ doesn't depend on $w$, this holds: $$\Delta w_{ij} = -\alpha \frac{\partial (n\mu_E)}{\partial w_{ij}}= -\alpha n\frac{\partial \mu_E}{\partial w_{ij}}$$ To your second question, the phrase "accumulating the delta weights" would imply that one of these methods retains weight updates. That isn't the case: Batch learning accumulates error. There's only one, single $\Delta w$ vector in a given epoch. (Your pseudocode omits the step of updating the weights, after which one can discard $\Delta w$.)
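The equivalence is easy to check numerically. A sketch with a made-up linear least-squares batch (the data and $\alpha$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((32, 3))                      # one batch of 32 samples
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(32)
w = np.zeros(3)
n = len(y)

def grad_sum(w):
    # gradient of the *summed* squared error over the batch
    return 2 * X.T @ (X @ w - y)

alpha = 0.01
update_from_sum = -alpha * grad_sum(w)                # sum with learning rate alpha
update_from_avg = -(alpha * n) * (grad_sum(w) / n)    # average with learning rate alpha * n
assert np.allclose(update_from_sum, update_from_avg)  # identical updates
```

So "sum" versus "average" only rescales the effective learning rate; the single $\Delta w$ per epoch is the same once $\alpha$ is adjusted.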
12,247
How are weights updated in the batch learning method in neural networks?
The two answers are equivalent. I personally would think of it as average error instead of the sum. But remember that gradient descent has a parameter called the learning rate, and that only a portion of the gradient of the error is subtracted. So whether the error is defined as the total or the average can be compensated for by changing the learning rate.
12,248
How are weights updated in the batch learning method in neural networks?
Someone explained it like this: the batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters. Think of a batch as a for-loop iterating over one or more samples and making predictions. At the end of the batch, the predictions are compared to the expected output variables and an error is calculated. From this error, the update algorithm is used to improve the model, e.g. move down along the error gradient. A training dataset can be divided into one or more batches.

- When all training samples are used to create one batch, the learning algorithm is called batch gradient descent.
- When the batch is the size of one sample, the learning algorithm is called stochastic gradient descent.
- When the batch size is more than one sample and less than the size of the training dataset, the learning algorithm is called mini-batch gradient descent.

You can read more in "Difference Between a Batch and an Epoch in a Neural Network".
12,249
How to cluster time series?
A) Spend a lot of time on preprocessing the data. Preprocessing is 90% of your job. B) Choose an appropriate similarity measure for the time series. For example, threshold crossing distance may be a good choice here. You probably won't want dynamic time warping distance, unless you have different time zones. Threshold crossing may be more appropriate to detect temporal patterns while not paying attention to the actual magnitude (which will likely be very different from company to company). C) Cluster the resulting dissimilarity matrix using methods such as hierarchical clustering or DBSCAN that can work with arbitrary distance functions.
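A minimal sketch of steps B and C on made-up data. The `threshold_pattern` dissimilarity here is a crude stand-in for a real threshold-crossing distance: each series is binarized at its own median (so absolute magnitude is ignored) and patterns are compared by Hamming distance; the precomputed matrix then feeds scipy's hierarchical clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def threshold_pattern(series):
    # Binarize at the series' own median: magnitude is discarded,
    # only the temporal pattern of threshold crossings remains.
    return (series > np.median(series)).astype(int)

rng = np.random.default_rng(1)
t = np.arange(100)
series = np.vstack(
    [np.sin(t / 5) + 0.1 * rng.normal(size=100) for _ in range(5)]    # fast pattern
    + [np.sin(t / 20) + 0.1 * rng.normal(size=100) for _ in range(5)]  # slow pattern
)

patterns = np.array([threshold_pattern(s) for s in series])
n = len(patterns)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = np.mean(patterns[i] != patterns[j])  # Hamming distance

# Step C: hierarchical clustering on the precomputed dissimilarity matrix.
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

DBSCAN from e.g. scikit-learn with `metric="precomputed"` could be swapped in for the linkage step the same way.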
12,250
How to cluster time series?
You might want to look at Forecasting hourly time series with daily, weekly & annual periodicity for a discussion of hourly data involving daily data and holidays/regressors. You have 5 years of data while the other discussion involved 883 daily values. What I would suggest is that you could build an hourly forecast incorporating regressors such as day-of-the-week, week-of-the-year and holidays, using daily totals as an additional predictor. In this way you would have 24 models for each of the 3,000 companies. Now what you want to do is, by hour, estimate the 3,000 models using a common ARIMAX structure accounting for the pattern of response around each of the regressors, the day-of-the-week, changes in the day-of-the-week parameters and weekly indicators while isolating outliers. Then you could estimate the parameters globally using all 3,000 companies. Perform a Chow Test (http://en.wikipedia.org/wiki/Chow_test) for constancy of parameters and, upon rejection, cluster the companies into homogeneous groups. I have referred to this as single-dimension cluster analysis. Since SPSS has very limited capabilities in time series you might want to look elsewhere for software.
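The Chow test mentioned above is simple to compute directly. Here is a generic sketch on simulated data (the regression design and coefficients are made up): fit the model on two groups separately and pooled, then compare residual sums of squares with an F statistic.

```python
import numpy as np

def chow_test(X1, y1, X2, y2):
    # F = ((RSS_pooled - RSS_split) / k) / (RSS_split / (n - 2k)),
    # where RSS_split is the sum of RSS from the two separate fits.
    def rss(X, y):
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ b
        return r @ r
    k = X1.shape[1]
    rss_pooled = rss(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    rss_split = rss(X1, y1) + rss(X2, y2)
    n = len(y1) + len(y2)
    return ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))

rng = np.random.default_rng(2)
X1 = np.column_stack([np.ones(50), rng.normal(size=50)])
X2 = np.column_stack([np.ones(50), rng.normal(size=50)])
y1 = X1 @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)
y2_same = X2 @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)   # same parameters
y2_diff = X2 @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=50)  # different slope

F_same = chow_test(X1, y1, X2, y2_same)   # small: no evidence against constancy
F_diff = chow_test(X1, y1, X2, y2_diff)   # large: parameters clearly differ
print(F_same, F_diff)
```

The statistic is compared against an F(k, n - 2k) critical value; rejection is the cue to split companies into separate groups, as described above.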
12,251
Why use ANOVA at all instead of jumping straight into post-hoc or planned comparisons tests?
Indeed an omnibus test is not strictly needed in that particular scenario and multiple inference procedures like Bonferroni or Bonferroni-Holm are not limited to ANOVA/mean-comparison settings. They are often presented as post-hoc tests in textbooks or associated with ANOVA in statistical software but if you look up papers on the topic (e.g. Holm, 1979), you will find out that they were originally discussed in a much broader context and you certainly can “skip the ANOVA” if you wish. One reason people still run ANOVAs is that pairwise comparisons with something like a Bonferroni adjustment have lower power (sometimes much lower). Tukey HSD and the omnibus test can have higher power and even if the pairwise comparisons do not reveal anything, the ANOVA F-test is already a result. If you work with small and haphazardly defined samples and are just looking for some publishable p-value, as many people are, this makes it attractive even if you always intended to do pairwise comparisons as well. Also, if you really care about any possible difference (as opposed to specific pairwise comparisons or knowing which means differ), then the ANOVA omnibus test is really the test you want. Similarly, multi-way ANOVA procedures conveniently provide tests of main effects and interactions that can be more directly interesting than a bunch of pairwise comparisons (planned contrasts can address the same kind of questions but are more complicated to set up). In psychology for example, omnibus tests are often thought of as the main results of an experiment, with multiple comparisons only regarded as adjuncts. Finally, many people are happy with this routine (ANOVA followed by post-hoc tests) and simply don't know that the Bonferroni inequalities are very general results that have nothing to do with ANOVA, that you can also run more focused planned comparisons or do a whole lot of things besides performing tests. 
It's certainly not easy to realize this if you are working from some of the most popular “cookbooks” in applied disciplines and that explains many common practices (even if it does not quite justify them). Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6 (2), 65–70.
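To make the contrast concrete, here is a sketch of both routes on simulated data: the omnibus F-test, and pairwise t-tests with a hand-rolled Holm (1979) step-down correction, which indeed needs no ANOVA at all. The group sizes and effect sizes are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Three groups; the third has a shifted mean.
groups = [rng.normal(loc, 1.0, size=50) for loc in (0.0, 0.0, 1.0)]

# Route 1: the omnibus ANOVA F-test ("any difference at all?").
F, p_omnibus = stats.f_oneway(*groups)

# Route 2: skip the ANOVA, go straight to pairwise t-tests + Holm.
pairs = [(0, 1), (0, 2), (1, 2)]
p_raw = [stats.ttest_ind(groups[i], groups[j]).pvalue for i, j in pairs]

# Holm step-down: the k-th smallest raw p-value is multiplied by (m - k),
# with a running maximum to keep the adjusted values monotone.
order = np.argsort(p_raw)
m = len(p_raw)
p_holm = np.empty(m)
running_max = 0.0
for rank, idx in enumerate(order):
    adj = min(1.0, (m - rank) * p_raw[idx])
    running_max = max(running_max, adj)
    p_holm[idx] = running_max

print(p_omnibus, dict(zip(pairs, np.round(p_holm, 4))))
```

Note that nothing in the Holm step refers to the ANOVA: it works on any family of p-values, which is exactly the point made above.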
12,252
When is "Nearest Neighbor" meaningful, today?
I don't have a full answer to this question, but I can give a partial answer on some of the analytical aspects. Warning: I've been working on other problems since the first paper below, so it's very likely there is other good stuff out there I'm not aware of. First I think it's worth noting that despite the title of their paper "When is `nearest neighbor' meaningful", Beyer et al actually answered a different question, namely when is NN not meaningful. We proved the converse to their theorem, under some additional mild assumptions on the size of the sample, in When Is 'Nearest Neighbor' Meaningful: A Converse Theorem and Implications. Journal of Complexity, 25(4), August 2009, pp 385-397. and showed that there are situations when (in theory) the concentration of distances will not arise (we give examples, but in essence the number of non-noise features needs to grow with the dimensionality so of course they seldom arise in practice). The references 1 and 7 cited in our paper give some examples of ways in which the distance concentration can be mitigated in practice. A paper by my supervisor, Ata Kaban, looks at whether these distance concentration issues persist despite applying dimensionality reduction techniques in On the Distance Concentration Awareness of Certain Data Reduction Techniques. Pattern Recognition. Vol. 44, Issue 2, Feb 2011, pp.265-277.. There's some nice discussion in there too. A recent paper by Radovanovic et al Hubs in Space: Popular Nearest Neighbors in High-Dimensional Data. JMLR, 11(Sep), September 2010, pp:2487−2531. discusses the issue of "hubness", that is when a small subset of points belong to the $k$ nearest neighbours of many of the labelled observations. See also the first author's PhD thesis, which is on the web.
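The distance-concentration effect these papers analyze is easy to reproduce empirically: the relative contrast (d_max - d_min) / d_min between a random query and a sample of random points shrinks as the dimension grows, which is what makes "nearest" neighbors less and less distinguishable from the rest.

```python
import numpy as np

rng = np.random.default_rng(4)

def relative_contrast(dim, n=1000):
    # Uniform random points and query in the unit hypercube; the
    # contrast measures how much closer the nearest point is than the farthest.
    X = rng.uniform(size=(n, dim))
    q = rng.uniform(size=dim)
    d = np.linalg.norm(X - q, axis=1)
    return (d.max() - d.min()) / d.min()

for dim in (2, 10, 100, 1000):
    print(dim, round(relative_contrast(dim), 3))  # shrinks as dim grows
```

As the converse theorem above notes, this only happens when most features behave like i.i.d. noise; if the number of informative features grows with the dimension, the contrast need not collapse.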
12,253
When is "Nearest Neighbor" meaningful, today?
You might as well be interested in neighbourhood components analysis by Goldberger et al. Here, a linear transformation is learned to maximize the expected correctly classified points via a stochastic nearest neighbourhood selection. As a side effect the (expected) number of neighbours is determined from the data.
12,254
Clustering (k-means, or otherwise) with a minimum cluster size constraint
Use EM Clustering In EM clustering, the algorithm iteratively refines an initial cluster model to fit the data and determines the probability that a data point exists in a cluster. The algorithm ends the process when the probabilistic model fits the data. The function used to determine the fit is the log-likelihood of the data given the model. If empty clusters are generated during the process, or if the membership of one or more of the clusters falls below a given threshold, the clusters with low populations are reseeded at new points and the EM algorithm is rerun.
12,255
Clustering (k-means, or otherwise) with a minimum cluster size constraint
This problem is addressed in this paper: Bradley, P. S., K. P. Bennett, and Ayhan Demiriz. "Constrained k-means clustering." Microsoft Research, Redmond (2000): 1-8. I have an implementation of the algorithm in python.
12,256
Clustering (k-means, or otherwise) with a minimum cluster size constraint
I think it would just be a matter of running the k-means inside a loop with a test for cluster sizes, i.e. counting n in each cluster k. Also remember that k-means will give different results for each run on the same data, so you should probably be running it in a loop anyway to extract the "best" result.
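A sketch of that loop, with a plain numpy Lloyd's-algorithm k-means, synthetic blob data, and a hypothetical `min_size` threshold: runs whose smallest cluster falls below the threshold are rejected, and the best surviving run by inertia is kept.

```python
import numpy as np

def kmeans(X, k, rng, n_iter=50):
    # Plain Lloyd's algorithm with random initial centers drawn from the data.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    inertia = ((X - centers[labels]) ** 2).sum()
    return labels, inertia

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ((0, 0), (3, 0), (0, 3))])

min_size, best = 10, None
for _ in range(10):  # re-run k-means; keep the best run meeting the size constraint
    labels, inertia = kmeans(X, k=3, rng=rng)
    if np.bincount(labels, minlength=3).min() >= min_size:
        if best is None or inertia < best[1]:
            best = (labels, inertia)

print(np.bincount(best[0]), round(best[1], 2))
```

This is only a filter, not a true constraint: if no run happens to satisfy the minimum size, the loop returns nothing, which is where the constrained formulations in the other answers come in.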
12,257
Clustering (k-means, or otherwise) with a minimum cluster size constraint
How large is your data set? Maybe you could try to run a hierarchical clustering and then decide which clusters to retain based on your dendrogram. If your data set is huge, you could also combine both clustering methods: an initial non-hierarchical clustering and then a hierarchical clustering using the groups from the non-hierarchical analysis. You can find an example of this approach in Martínez-Pastor et al (2005).
12,258
Clustering (k-means, or otherwise) with a minimum cluster size constraint
This can be achieved by modifying the cluster assignment step (E in EM) by formulating it as a Minimum Cost Flow (MCF) linear network optimisation problem. I have written a python package which uses Google's Operations Research tools' SimpleMinCostFlow, which is a fast C++ implementation. It has a standard scikit-learn API.
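A related sketch of the same assignment idea, using `scipy.optimize.linear_sum_assignment` instead of a min-cost-flow solver (this is an illustration, not the package above): replicating each centroid into n/k "slots" turns the size-constrained assignment step into a balanced assignment problem, so every cluster is forced to receive exactly n/k points.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(6)
# Two made-up blobs and their (here: known) centroids.
X = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in ((0, 0), (4, 0))])
centroids = np.array([[0.0, 0.0], [4.0, 0.0]])

n, k = len(X), len(centroids)
slots = np.repeat(np.arange(k), n // k)                       # n/k slots per cluster
cost = ((X[:, None] - centroids[slots][None]) ** 2).sum(-1)   # (n, n) point-to-slot costs
rows, cols = linear_sum_assignment(cost)                      # optimal balanced assignment
labels = slots[cols[np.argsort(rows)]]

print(np.bincount(labels))  # every cluster gets exactly n // k points by construction
```

In a full constrained k-means, this assignment step would replace the usual nearest-centroid step inside the Lloyd iteration, with the centroid update unchanged.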
12,259
RNN for irregular time intervals?
I just wrote a blog post on that topic! In short, I write about different methods for dealing with the problem of sparse / irregular sequential data. Here is a short outline of methods to try: Lomb-Scargle Periodogram This is a way of computing spectrograms on non-equidistant timestep series. Data modeling with Interpolation networks You really don't want to interpolate naively between timesteps, but training a network to interpolate for you might help! Neural Ordinary Differential Equation models Neural networks that can work with continuous time can naturally work on irregular time series. Add timing dt to the input as an additional feature (or positional encoding in Tensorflow) Methods for dealing with missing values This is only viable if you have vast amounts of data Hope this helps point you to the right direction :)
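As an illustration of the first suggestion, `scipy.signal.lombscargle` recovers the dominant frequency of an irregularly sampled signal without any resampling (synthetic data; note that scipy's implementation works in angular frequency):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0, 100, size=300))   # irregular, unevenly spaced sample times
true_omega = 2 * np.pi * 0.2                 # a 0.2 Hz sinusoid
y = np.sin(true_omega * t) + 0.2 * rng.normal(size=300)

omegas = np.linspace(0.1, 5.0, 2000)         # angular frequencies to scan
power = lombscargle(t, y, omegas)            # no interpolation or resampling needed

print(omegas[np.argmax(power)])              # peak near true_omega
```

Unlike an FFT-based spectrogram, no equidistant grid is ever constructed, which is exactly why the method suits sparse/irregular sequences.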
12,260
RNN for irregular time intervals?
If you are feeding in some data vector $v_t$ at time $t$, the straightforward solution is to obtain a one-hot encoding of the day of week, $d_t$, and then simply feed into the network the concatenation of $v_t$ and $d_t$. The time/date encoding scheme can be more complicated if the time format is more complicated than just day of week of course. Also, depending on exactly how sparse and irregular the data is, NULL entries should be a reasonable solution. I suspect that the input gate of an LSTM would allow the LSTM to properly read off the information of a NULL entry without contaminating the data (the memory/hidden state) as you put it.
12,261
RNN for irregular time intervals?
I would try incorporating the time interval explicitly into the model. For instance, conventional time series models such as the autoregressive AR(p) model can be thought of as discretizations of continuous time models. For instance, the AR(1) model: $$y_t=c+\phi y_{t-1}+\varepsilon_t$$ can be thought of as a version of: $$y_t=c\Delta t+e^{-\gamma\Delta t}y_{t-\Delta t}+\xi_t\sigma\sqrt {\Delta t}$$ You could draw analogies to time series models from RNNs. For instance, $\phi$ in the AR(1) process can be seen as a memory weight in RNNs. Hence, you could plug the time difference between observations into your features this way. I must warn that it's just an idea, and I didn't try it myself yet.
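A quick simulation of that idea, using the continuous-time equation above with `exp(-gamma * dt)` as the per-step memory weight on an irregular grid. The grid-search recovery of gamma at the end is just a sanity check I added, not part of the original suggestion.

```python
import numpy as np

rng = np.random.default_rng(8)
gamma, sigma, c = 0.5, 1.0, 0.0
t = np.cumsum(rng.exponential(1.0, size=500))   # irregularly spaced observation times

# Simulate y_t = c*dt + exp(-gamma*dt) * y_{t-dt} + sigma*sqrt(dt) * noise.
y = np.zeros(len(t))
for i in range(1, len(t)):
    dt_i = t[i] - t[i - 1]
    y[i] = c * dt_i + np.exp(-gamma * dt_i) * y[i - 1] \
        + sigma * np.sqrt(dt_i) * rng.normal()

# Recover gamma by weighted least squares over a grid (weights 1/dt match the
# noise variance sigma^2 * dt of the model above; c assumed known to be 0).
dt = np.diff(t)
grid = np.linspace(0.1, 1.0, 91)
sse = [np.sum((y[1:] - np.exp(-g * dt) * y[:-1]) ** 2 / dt) for g in grid]
print(grid[np.argmin(sse)])  # estimate of gamma
```

The analogy to an RNN is that `exp(-gamma * dt)` plays the role of a forget/memory gate whose value depends on the gap length, which is what feeding dt into the network would let it learn.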
12,262
RNN for irregular time intervals?
I think it depends on the data. For example, if you are processing counts and you just forgot to measure them on some days, then the best strategy is to impute the missing values (e.g., via interpolation or Gaussian processes) and then process the imputed time series with an RNN. By imputing, you would be embedding knowledge. If the missingness is meaningful (it was too hot to measure counts on some days), then it's perhaps best to impute and also append an indicator vector that is 1 if the value was missing and 0 otherwise.
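A sketch of the impute-and-flag scheme on made-up daily counts: linearly interpolate the missing days, then stack a missingness indicator next to the imputed values so the RNN can tell real measurements from imputed ones.

```python
import numpy as np

rng = np.random.default_rng(9)
days = np.arange(30)
counts = 50 + 10 * np.sin(days / 5) + rng.normal(size=30)
missing = rng.random(30) < 0.3            # ~30% of days were never measured
observed = np.where(~missing)[0]

# Impute by linear interpolation over the observed days...
imputed = np.interp(days, observed, counts[observed])
# ...and append the indicator channel (1 = value was missing).
features = np.column_stack([imputed, missing.astype(float)])

print(features.shape)  # (30, 2): value channel + missingness indicator
```

Gaussian-process imputation would slot in the same way, replacing `np.interp` and optionally adding a third channel for the imputation uncertainty.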
12,263
High Recall - Low Precision for unbalanced dataset
does anyone have a clue why I’m getting way more false positives than false negatives (positive is the minority class)? Thanks in advance for your help! Because positive is the minority class. There are a lot of negative examples that could become false positives. Conversely, there are fewer positive examples that could become false negatives. Recall that Recall = Sensitivity $=\dfrac{TP}{(TP+FN)}$ Sensitivity (True Positive Rate) is related to False Positive Rate (1-specificity) as visualized by an ROC curve. At one extreme, you call every example positive and have a 100% sensitivity with 100% FPR. At another, you call no example positive and have a 0% sensitivity with a 0% FPR. When the positive class is the minority, even a relatively small FPR (which you may have because you have a high recall=sensitivity=TPR) will end up causing a high number of FPs (because there are so many negative examples). Since Precision $=\dfrac{TP}{(TP+FP)}$ Even at a relatively low FPR, the FP will overwhelm the TP if the number of negative examples is much larger. Alternatively, Positive classifier: $C^+$ Positive example: $O^+$ Precision = $P(O^+|C^+)=\dfrac{P(C^+|O^+)P(O^+)}{P(C^+)}$ P(O+) is low when the positive class is small. Does anyone of you have some advice what I could do to improve my precision without hurting my recall? As mentioned by @rinspy, GBC works well in my experience. It will however be slower than SVC with a linear kernel, but you can make very shallow trees to speed it up. Also, more features or more observations might help (for example, there might be some currently un-analyzed feature that is always set to some value in all of your current FP). It might also be worth plotting ROC curves and calibration curves. It might be the case that even though the classifier has low precision, it could lead to a very useful probability estimate. 
For example, just knowing that a hard drive might have a 500-fold increased probability of failing, even though the absolute probability is fairly small, might be important information. Also, low precision essentially means that the classifier returns a lot of false positives. This, however, might not be so bad if a false positive is cheap.
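To make the arithmetic above concrete, here is a small sketch (all counts are made up for illustration) of how a classifier with high recall and a modest false positive rate still ends up with low precision when negatives dominate:

```python
# Hypothetical imbalanced dataset: 10,000 negatives, 100 positives (~1% positive).
n_neg, n_pos = 10_000, 100

recall = 0.90   # TPR: fraction of positives correctly flagged
fpr = 0.05      # fraction of negatives incorrectly flagged

tp = recall * n_pos          # 90 true positives
fp = fpr * n_neg             # 500 false positives -- dwarfs the TPs
fn = (1 - recall) * n_pos    # 10 false negatives

precision = tp / (tp + fp)   # 90 / 590, about 0.15

print(f"TP={tp:.0f}, FP={fp:.0f}, FN={fn:.0f}")
print(f"recall={recall:.2f}, precision={precision:.3f}")
```

Even though only 5% of negatives are misclassified, they outnumber the true positives more than five to one, which is exactly the high-recall/low-precision pattern described above.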
12,264
High Recall - Low Precision for unbalanced dataset
Methods to try out: Undersampling: I suggest using undersampling techniques and then training your classifier. Imbalanced Learn provides a scikit-learn-style API for imbalanced datasets and should be a good starting point for sampling and algorithms to try out. Library: https://imbalanced-learn.readthedocs.io/en/stable/ Rank-based SVM: This has been shown to improve recall in high-precision systems and is used by Google for detecting bad advertisements. I recommend trying it out. Reference paper for the SVM: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37195.pdf
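As a sketch of what random undersampling does, here is a minimal hand-rolled stand-in (imbalanced-learn's `RandomUnderSampler` does this and much more; the data below are made up):

```python
import random

def undersample(X, y, seed=0):
    """Randomly drop majority-class examples until both classes are
    the same size -- a minimal sketch of random undersampling."""
    rng = random.Random(seed)
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    kept = minority + rng.sample(majority, len(minority))
    rng.shuffle(kept)
    return [X[i] for i in kept], [y[i] for i in kept]

# 95 negatives, 5 positives -> balanced 5/5 after undersampling
X = [[float(i)] for i in range(100)]
y = [1] * 5 + [0] * 95
Xs, ys = undersample(X, y)
print(sum(ys), len(ys) - sum(ys))  # prints "5 5"
```

The obvious cost is throwing away data; the library linked above also offers less wasteful variants (e.g. cleaning-based undersampling and oversampling such as SMOTE).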
12,265
High Recall - Low Precision for unbalanced dataset
The standard approach would be to weight your error based on class frequency. For example, if you were doing it in Python with sklearn: from sklearn.svm import SVC; model = SVC(C=1.0, kernel='linear', class_weight='balanced'); model.fit(X, y)
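For reference, sklearn's `class_weight='balanced'` mode computes per-class weights as `n_samples / (n_classes * n_samples_in_class)`; a quick sketch of that formula (the 9:1 labels below are made up):

```python
from collections import Counter

def balanced_class_weights(y):
    """Reproduce sklearn's class_weight='balanced' heuristic:
    w_c = n_samples / (n_classes * count_of_class_c)."""
    counts = Counter(y)
    n, k = len(y), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

y = [0] * 90 + [1] * 10          # 9:1 imbalance
w = balanced_class_weights(y)
print(w)  # weight on class 1 is 9x the weight on class 0
```

So with a 9:1 imbalance, each minority-class error costs nine times as much as a majority-class error, pushing the fitted boundary toward the majority class.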
12,266
number of feature maps in convolutional neural networks
1) C1 in layer 1 has 6 feature maps; does that mean there are six convolutional kernels? Yes: there are 6 convolutional kernels, and each is used to generate one feature map from the input. Another way to say this is that there are 6 filters, or 3D sets of weights, which I will just call weights. What this image doesn't show, but probably should to make it clearer, is that typically images have 3 channels, say red, green, and blue. So the weights that map you from the input to C1 are of shape/dimension 3x5x5, not just 5x5. The same 3-dimensional weights, or kernel, are applied across the entire 3x32x32 image to generate a 2-dimensional feature map in C1. There are 6 kernels (each 3x5x5) in this example, so that makes 6 feature maps (each 28x28, since the stride is 1 and the padding is zero), each of which is the result of applying a 3x5x5 kernel across the input. 2) S1 in layer 1 has 6 feature maps, and C2 in layer 2 has 16 feature maps. What does the process look like to get these 16 feature maps based on the 6 feature maps in S1? Now do the same thing we did in layer 1, but for layer 2, except this time the number of channels is not 3 (RGB) but 6: one for each feature map/filter in S1. There are now 16 unique kernels, each of shape/dimension 6x5x5. Each layer-2 kernel is applied across all of S1 to generate a 2D feature map in C2. This is done once for each of the 16 unique kernels in layer 2, generating the 16 feature maps of layer 2 (each 10x10, since the stride is 1 and the padding is zero). source: http://cs231n.github.io/convolutional-networks/
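The shape arithmetic above can be checked in a few lines; note that the per-kernel bias term (+1) is my addition for the parameter counts and is not mentioned in the answer:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution: (W - K + 2P) / S + 1."""
    return (size - kernel + 2 * pad) // stride + 1

# Layer 1: 3x32x32 input, six 3x5x5 kernels -> 6 feature maps of 28x28
h1 = conv_out(32, 5)
params1 = 6 * (3 * 5 * 5 + 1)    # 6 kernels, each 3x5x5 weights + 1 bias

# Layer 2: 6x14x14 input (S1, after 2x2 subsampling of 28x28),
# sixteen 6x5x5 kernels -> 16 feature maps of 10x10
h2 = conv_out(14, 5)
params2 = 16 * (6 * 5 * 5 + 1)   # 16 kernels, each 6x5x5 weights + 1 bias

print(h1, h2, params1, params2)  # 28 10 456 2416
```

The key point the arithmetic makes explicit: a kernel always spans *all* input channels (3 for the image, 6 for S1), so only the spatial dimensions slide.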
12,267
Bayesian Survival Analysis: please, write me a prior for Kaplan Meier!
Note that because your likelihood function is a product of $\alpha_i$ terms, the data are telling you that there is no evidence for correlation between them. Note also that the $d_i$ variables already scale to account for time: a longer time period means more chance for events, generally meaning a larger $d_i$. The most basic way to "go Bayesian" here is to use independent uniform priors $p(\alpha_i)=1$. Note that $0<\alpha_i<1$, so this is a proper prior; hence the posterior is also proper. The posterior is a set of independent beta distributions, $\alpha_i \sim \mathrm{Beta}(n_i-d_i+1,\, d_i+1)$. This can easily be simulated to generate the posterior distribution of the survival curve, using the rbeta() function in R for example. I think this gets at your main question about a "simpler" method. Below is just the beginnings of an idea for a better model that retains the flexible KM form for the survival function. I think the main problem with the KM curve is in the survival function, though, and not in the prior. For example, why should the $t_i$ values correspond to time points that were observed? Wouldn't it make more sense to place them at points corresponding to meaningful event times based on the actual process? If the observed time points are too far apart, the KM curve will be "too smooth". If they are too close, the KM curve will be "too rough", and potentially exhibit abrupt changes. One way to deal with the "too rough" problem is to place a correlated prior on $\alpha$ such that $\alpha_i\approx \alpha_{i+1}$. The effect of this prior is to shrink nearby parameters closer together. You could work in the "log-odds" space $\eta_i=\log\left(\frac{\alpha_i}{1-\alpha_i}\right)$ and use a kth-order random walk prior on $\eta$. For a first-order random walk this introduces penalties of the form $-\tau(\eta_i-\eta_{i-1})^2$ into the log-likelihood. The BayesX software has some very good documentation of this kind of smoothing. 
Basically, choosing the order k is like doing a kth-order local polynomial. If you like splines, choose k=3. Of course, by using a "fine" time grid you will have time points with no observations. However, this complicates your likelihood function, as the $n_i, d_i$ are missing for some $i$. For example, if $(t_0,t_1)$ was split into 3 "finer" intervals $(t_{00}, t_{01}, t_{02}, t_{10})$, then you don't know $n_{02}, n_{10}, d_{01}, d_{02}, d_{10}$ but only $n_1=n_{01}$ and $d_1=d_{01}+d_{02}+d_{10}$. So you would probably need to add this "missing data" and use an EM algorithm, or perhaps VB (provided you're not going down the MCMC path). Hope this gives you a start.
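The basic simulation described above (independent beta posteriors, with the survival curve as a running product over intervals) can be sketched in a few lines; the Python below mirrors what rbeta() would do in R, and the risk-set counts are made up:

```python
import random

def posterior_survival_draws(n, d, ndraws=2000, seed=1):
    """Draw survival curves from independent posteriors
    alpha_i ~ Beta(n_i - d_i + 1, d_i + 1) (uniform priors on alpha_i),
    with S(t_k) = prod_{i <= k} alpha_i."""
    rng = random.Random(seed)
    curves = []
    for _ in range(ndraws):
        s, curve = 1.0, []
        for ni, di in zip(n, d):
            s *= rng.betavariate(ni - di + 1, di + 1)
            curve.append(s)
        curves.append(curve)
    return curves

# made-up numbers at risk / death counts at 4 event times
n = [100, 80, 55, 30]
d = [5, 8, 4, 6]
curves = posterior_survival_draws(n, d)
mean_curve = [sum(c[i] for c in curves) / len(curves) for i in range(len(n))]
print([round(s, 3) for s in mean_curve])
```

Pointwise quantiles of `curves` give credible bands for the survival function, which is the posterior analogue of the KM curve with Greenwood intervals.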
12,268
Bayesian Survival Analysis: please, write me a prior for Kaplan Meier!
For readers facing the problem of going Bayesian for estimating survival functions under right censoring, I would recommend the nonparametric Bayesian approach developed by F. Mangili, A. Benavoli, et al. The only prior specification is a (precision or strength) parameter; it avoids the need to fully specify the Dirichlet process in case of a lack of prior information. The authors propose: (1) a robust estimator of the survival curves and its credible intervals for the probability of survival; (2) a test for the difference in survival of individuals from 2 independent populations, which presents various benefits over the classical log-rank test and other nonparametric tests. See the R package IDPsurvival and this reference: Reliable survival analysis based on the Dirichlet process. F. Mangili et al. Biometrical Journal, 2014.
12,269
Topic stability in topic models
For my own curiosity, I applied a clustering algorithm that I've been working on to this dataset. I've temporarily put up the results here (choose the essays dataset). It seems like the problem is not the starting points or the algorithm, but the data. You can 'reasonably' (subjectively, in my limited experience) get good clusters even with 147 instances, as long as there are some hidden topics/concepts/themes/clusters (whatever you would like to call them). If the data does not have well-separated topics, then no matter which algorithm you use, you might not get good answers.
12,270
Topic stability in topic models
The notion of "topics" in so-called "topic models" is misleading. The model does not know, and is not designed to know, semantically coherent "topics" at all. The "topics" are just distributions over tokens (words). In other words, the model just captures high-order co-occurrence of terms. Whether these structures mean something or not is not the purpose of the model. The "LDA" model has two parts (as do essentially all graphical models): a) the model definition and b) an implementation of an inference algorithm to infer/estimate the model parameters. The thing you mentioned may or may not be a problem of the "LDA" model itself, but could be some bug/error/misconfiguration of the specific implementation you used (the R package). Almost all implementations of "LDA" require some randomization. And by the nature of the inference algorithms (e.g., MCMC or variational inference), you'll get local-optimum solutions or a distribution over many solutions. So, in short, what you observed is somewhat expected. Practical suggestions: Try different R packages: for example, this package is done by David Blei's former graduate student. Or even try another environment, such as this one. If you get similar results from all these stable packages, you at least reduce the problem a bit. Try playing a bit with not removing stop-words. The rationale is that these stop-words play an important role in connecting semantic meanings in such a small corpus (e.g., 100 or so articles). Also, try not filtering things. Try playing a bit with hyper-parameters, like different numbers of topics. Papers about topic coherence: http://www.aclweb.org/anthology-new/D/D12/D12-1087.pdf http://people.cs.umass.edu/~wallach/publications/mimno11optimizing.pdf
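One simple, hand-rolled way to quantify the run-to-run instability discussed above (not taken from any package) is to match topics across two runs by their top words and average the best Jaccard overlaps:

```python
def topic_jaccard(topics_a, topics_b):
    """Average best-match Jaccard overlap between two runs' topics,
    each topic represented as the set of its top words."""
    scores = []
    for ta in topics_a:
        best = max(len(ta & tb) / len(ta | tb) for tb in topics_b)
        scores.append(best)
    return sum(scores) / len(scores)

# two hypothetical LDA runs on the same corpus, 2 topics each
run1 = [{"gene", "dna", "cell"}, {"stock", "market", "price"}]
run2 = [{"market", "price", "trade"}, {"dna", "gene", "protein"}]
stability = topic_jaccard(run1, run2)
print(round(stability, 2))  # 0.5: each topic finds a partner sharing 2 of 4 words
```

A score near 1 across many random restarts suggests the fitted "topics" are stable structure in the data; a score near 0 suggests the kind of seed-dependence the question describes.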
12,271
Should sampling for logistic regression reflect the real ratio of 1's and 0's?
If the goal of such a model is prediction, then you cannot use unweighted logistic regression to predict outcomes: you will overpredict risk. The strength of logistic models is that the odds ratio (OR)--the "slope" which measures association between a risk factor and a binary outcome in a logistic model--is invariant to outcome dependent sampling. So if cases are sampled in a 10:1, 5:1, 1:1, 5:1, 10:1 ratio to controls, it simply doesn't matter: the OR remains unchanged in every scenario so long as sampling is unconditional on the exposure (which would introduce Berkson's bias). Indeed, outcome dependent sampling is a cost-saving endeavor when complete simple random sampling is just not gonna happen. Why are risk predictions biased from outcome dependent sampling using logistic models? Outcome dependent sampling impacts the intercept in a logistic model. This causes the S-shaped curve of association to "slide up the x-axis" by the difference between the log-odds of sampling a case in a simple random sample of the population and the log-odds of sampling a case in the pseudo-population of your experimental design. (So if you have 1:1 cases to controls, there is a 50% chance of sampling a case in this pseudo-population.) For rare outcomes, this is quite a big difference, a factor of 2 or 3. When you speak of such models being "wrong", then, you must focus on whether the objective is inference (right) or prediction (wrong). This also addresses the ratio of outcomes to cases. The language you tend to see around this topic is that of calling such a study a "case control" study, which has been written about extensively. Perhaps my favorite publication on the topic is Breslow and Day, which as a landmark study characterized risk factors for rare causes of cancer (previously infeasible due to the rarity of the events). 
Case-control studies spark some controversy surrounding the frequent misinterpretation of findings: particularly conflating the OR with the RR (which exaggerates findings), and also treating the "study base" as an intermediary between the sample and the population, which enhances findings. Miettinen provides an excellent criticism of them. No critique, however, has claimed case-control studies are inherently invalid; I mean, how could you? They've advanced public health in innumerable avenues. Miettinen's article is good at pointing out that you can even use relative risk models or other models in outcome dependent sampling and describe the discrepancies between the results and population-level findings in most cases: it's not really worse, since the OR is typically a hard parameter to interpret. Probably the best and easiest way to overcome the oversampling bias in risk predictions is by using weighted likelihood. Scott and Wild discuss weighting and show it corrects the intercept term and the model's risk predictions. This is the best approach when there is a priori knowledge about the proportion of cases in the population. If the prevalence of the outcome is actually 1:100 and you sample cases to controls in a 1:1 fashion, you simply weight controls by a factor of 100 to obtain population-consistent parameters and unbiased risk predictions. The downside to this method is that it doesn't account for uncertainty in the population prevalence if it has been estimated with error elsewhere. This is a huge area of open research; Lumley and Breslow came very far with some theory about two-phase sampling and the doubly robust estimator. I think it's tremendously interesting stuff. Zelig's program seems to simply be an implementation of the weight feature (which seems a bit redundant, as R's glm function allows for weights).
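Besides weighting, a closely related fix (sometimes called "prior correction") shifts the intercept directly by the log-odds difference described above, assuming the population prevalence is known; a sketch with made-up prevalence and coefficients:

```python
import math

def corrected_intercept(b0_sample, p_pop, p_sample):
    """Shift the intercept of a logistic model fit on outcome-dependent
    sampled data back to the population scale:
    b0_pop = b0_sample - logit(p_sample) + logit(p_pop)."""
    logit = lambda p: math.log(p / (1 - p))
    return b0_sample - logit(p_sample) + logit(p_pop)

def risk(b0, bx, x):
    """Predicted risk from a simple one-covariate logistic model."""
    return 1 / (1 + math.exp(-(b0 + bx * x)))

# 1:1 case-control sample (p_sample = 0.5) from a population
# with 1% prevalence; slope is unaffected, only the intercept moves.
b0 = corrected_intercept(0.0, p_pop=0.01, p_sample=0.5)
print(round(risk(b0, 0.7, 1.0), 4))  # far below the naive risk(0.0, 0.7, 1.0)
```

The slope (and hence the OR) is untouched, which is exactly the invariance property described above; only absolute risk predictions need the correction.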
12,272
Comparing two histograms using Chi-Square distance
@Silverfish asked for an expansion of the answer by PolatAlemdar, which was not given, so I will try to expand on it here. Why the name chisquare distance? The chisquare test for contingency tables is based on $$ \chi^2 = \sum_{\text{cells}} \frac{(O_i-E_i)^2}{E_i} $$ so the idea is to keep this form and use it as a distance measure. This gives the third formula of the OP, with $x_i$ interpreted as observation and $y_i$ as expectation, which explains PolatAlemdar's comment "It is used in discrete probability distributions", as for instance in goodness-of-fit testing. This third form is not a distance function, as it is asymmetric in the variables $x$ and $y$. For histogram comparison, we will want a distance function which is symmetric in $x$ and $y$, and the first two forms give this. The difference between them is only a constant factor $\frac12$, which is unimportant as long as you choose one form consistently (though the version with the extra factor $\frac12$ is better if you want to compare with the asymmetric form). Note the similarity of these formulas to squared euclidean distance; that is no coincidence: chisquare distance is a kind of weighted euclidean distance. For that reason, the formulas in the OP are usually put under a root sign to get distances. In the following we follow this. Chisquare distance is also used in correspondence analysis. To see the relationship to the form used there, let $x_{ij}$ be the cells of a contingency table with $R$ rows and $C$ columns. Denote the column totals by $x_{+j}=\sum_i x_{ij}$ and the row totals by $x_{i+}=\sum_j x_{ij}$. Then the chisquare distance between rows $l,k$ is given by $$ \chi^2(l,k) = \sqrt{\sum_j \frac1{x_{+j}}\left(\frac{x_{lj}}{x_{l+}}-\frac{x_{kj}}{x_{k+}} \right)^2 } $$ For the case with only two rows (the two histograms) this recovers the OP's first formula (modulo the root sign). 
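For concreteness, the symmetric form (with the $\frac12$ factor) and the asymmetric goodness-of-fit form can be sketched as follows. This is a language-agnostic illustration written in Python, with made-up bin counts:

```python
import math

def chisq_sym(x, y):
    # Symmetric histogram distance with the 1/2 factor:
    # sqrt( (1/2) * sum (x_i - y_i)^2 / (x_i + y_i) ),
    # skipping bins that are empty in both histograms.
    return math.sqrt(0.5 * sum((a - b) ** 2 / (a + b)
                               for a, b in zip(x, y) if a + b > 0))

def chisq_asym(obs, exp):
    # The asymmetric (goodness-of-fit) form: sum (O_i - E_i)^2 / E_i
    return sum((o - e) ** 2 / e for o, e in zip(obs, exp))
```

Note that in general `chisq_asym(x, y) != chisq_asym(y, x)`, which is exactly why the third form is not a distance, while `chisq_sym` is symmetric by construction.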
EDIT Answering the question in the comments below: A book with long discussions of the chisquare distance is "Correspondence Analysis in Practice (Second Edition)" by Michael Greenacre (Chapman & Hall). It is a well-established name, coming from its similarity to the chisquare as used with contingency tables. What distribution does it have? I have never studied that, but probably (under some conditions ...) it would have some chisquare distribution, approximately. Proofs should be similar to what is done with contingency tables; most literature about correspondence analysis does not go into distribution theory. A paper having some, maybe relevant, such theory is "Alternative methods to multiple correspondence analysis in reconstructing the relevant information in a Burt's table". See also other related posts on this site.
12,273
Comparing two histograms using Chi-Square distance
I found this link to be quite useful: http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html I am not quite sure why, but OpenCV uses the 3rd formula you list for Chi-Square histogram comparison. In terms of meaning, I am not sure any measurement algorithm is going to give you a bounded range, like 0% to 100%. In other words, you can tell for sure that two images are the same: a correlation value of 1.0 or a chi-square value of 0.0; but it's tough to set a limit on how different are two images: imagine comparing a completely white image vs a completely black image, the numerical value would be either Infinity or maybe Not-a-Number.
12,274
Comparing two histograms using Chi-Square distance
In fact you can use whatever you believe is correct for your case. The last one is different: it is used for discrete probability distributions, and it will not be symmetric if you swap $x$ and $y$. The other two are used in calculating histogram similarities.
12,275
Comparing two histograms using Chi-Square distance
As OP requested, the value in percentage (for equation 1): $p = \frac{\chi \cdot S \cdot 100}{N}$ where $p$ is the percentage of difference (0..100), $\chi$ is the result of equation 1, $N$ is the number of bins in the histogram, and $S$ is the maximum possible value in a bin. Complemented as requested: calculating this equation, one can obtain the percentage of difference for a full histogram. Calculating this for both histograms and then subtracting one from the other gives the difference in percentage.
12,276
Has the reported state-of-the-art performance of using paragraph vectors for sentiment analysis been replicated?
Footnote at http://arxiv.org/abs/1412.5335 (one of the authors is Tomas Mikolov) says In our experiments, to match the results from (Le & Mikolov, 2014), we followed the suggestion by Quoc Le to use hierarchical softmax instead of negative sampling. However, this produces the 92.6% accuracy result only when the training and test data are not shuffled. Thus, we consider this result to be invalid.
12,277
How to control the cost of misclassification in Random Forests?
Not directly, short of manually building an RF clone that bags rpart models. One option comes from the fact that the output of RF is actually a continuous score rather than a crisp decision, i.e. the fraction of trees that voted for some class. It can be extracted with predict(rf_model, type="prob") and used to make, for instance, a ROC curve, which will reveal a better threshold than .5 (which can later be incorporated in RF training with the cutoff parameter). The classwt approach also seems valid, but it does not work very well in practice -- the transition between balanced prediction and trivially casting the same class regardless of attributes tends to be too sharp to be usable.
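The threshold-tuning idea is independent of R's randomForest API. A minimal sketch (in Python, with made-up scores and a hypothetical 10:1 false-negative cost) of scanning candidate cutoffs on held-out predictions and keeping the one with minimal expected misclassification cost:

```python
def best_cutoff(probs, labels, cost_fn=10.0, cost_fp=1.0):
    """Scan candidate cutoffs and return the one with minimal total cost.

    probs  -- predicted probabilities of class 1 (e.g. the fraction of
              trees voting for class 1)
    labels -- true 0/1 labels on a held-out set
    """
    def total_cost(t):
        fn = sum(1 for p, y in zip(probs, labels) if y == 1 and p < t)
        fp = sum(1 for p, y in zip(probs, labels) if y == 0 and p >= t)
        return cost_fn * fn + cost_fp * fp
    return min(sorted(set(probs)), key=total_cost)
```

In R one would do the analogous scan over `predict(rf_model, type="prob")` scores, or use a ROC package; the point is simply that the cost ratio, not 0.5, determines the operating point.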
12,278
How to control the cost of misclassification in Random Forests?
There are a number of ways of including costs.
(1) Over/under-sampling for each bagged tree (stratified sampling) is the most common method of introducing costs: you intentionally imbalance the dataset.
(2) Weighting. Never works. I think this is emphasized in the documentation. Some claim you just need to weight at all stages, including Gini splitting and the final voting. If it is going to work, it is going to be a tricky implementation.
(3) The MetaCost function in Weka.
(4) Treating a random forest as a probabilistic classifier and changing the threshold. I like this option the least. Likely due to my lack of knowledge, but even though the algorithm can output probabilities, it doesn't make sense to me to treat them as if this were a probabilistic model.
But I'm sure there are additional approaches.
12,279
How to control the cost of misclassification in Random Forests?
It's recommended that if the variable you are trying to predict is not split 50% class 1 and 50% class 2 (as in most cases), you adjust the cutoff parameter to reflect the real class proportions seen in the OOB summary. For example, randomForest(data=my_data, formula, ntree = 501, cutoff=c(.96,.04)) In this case the cutoff for class 1 is .96 while the cutoff for class 2 is .04. Otherwise random forests use a threshold of 0.5.
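For what the cutoff does mechanically: per the randomForest documentation, the winning class is the one with the largest ratio of vote fraction to its cutoff, rather than a plain majority vote. A small language-agnostic sketch (in Python, with illustrative numbers):

```python
def classify_with_cutoff(vote_fracs, cutoffs):
    """Return the index of the winning class under randomForest's rule:
    the class maximizing (fraction of votes) / (its cutoff)."""
    return max(range(len(vote_fracs)),
               key=lambda i: vote_fracs[i] / cutoffs[i])

# With cutoff = c(.96, .04), even 90% of votes for class 0 loses,
# because 0.10/0.04 = 2.5 beats 0.90/0.96 ~ 0.94:
classify_with_cutoff([0.90, 0.10], [0.96, 0.04])
```

So a very lopsided cutoff like c(.96, .04) effectively makes the rare class win unless the vote for the common class is overwhelming.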
12,280
How to control the cost of misclassification in Random Forests?
One can incorporate a costMatrix in randomForest explicitly via the parms parameter:

library(randomForest)
costMatrix <- matrix(c(0, 10, 1, 0), nrow = 2)
mod_rf <- randomForest(outcome ~ ., data = train, ntree = 1000,
                       parms = list(loss = costMatrix))
12,281
How to control the cost of misclassification in Random Forests?
You can incorporate cost sensitivity using the sampsize argument in the randomForest package:

model1 <- randomForest(DependentVariable ~ ., data = my_data, sampsize = c(100, 20))

Vary the figures (100, 20) based on the data you have and the assumptions/business rules you are working with. It takes a bit of a trial-and-error approach to get a confusion matrix that reflects the costs of classification error. Have a look at Richard Berk's Criminal Justice Forecasts of Risk: A Machine Learning Approach, p. 82.
How to control the cost of misclassification in Random Forests?
You can incorporate cost sensitivity using the sampsize function in the randomForest package. model1=randomForest(DependentVariable~., data=my_data, sampsize=c(100,20)) Vary the figures (100,20) base
How to control the cost of misclassification in Random Forests? You can incorporate cost sensitivity using the sampsize function in the randomForest package. model1=randomForest(DependentVariable~., data=my_data, sampsize=c(100,20)) Vary the figures (100,20) based on the data you have and the assumptions/business rules you are working with. It takes a bit of a trial and error approach to get a confusion matrix that reflects the costs of classification error. Have a look at Richard Berk's Criminal Forecasts of Risk: A Machine Learning Approach, p. 82.
How to control the cost of misclassification in Random Forests? You can incorporate cost sensitivity using the sampsize function in the randomForest package. model1=randomForest(DependentVariable~., data=my_data, sampsize=c(100,20)) Vary the figures (100,20) base
12,282
Difficulty of testing linearity in regression
I created a simulation that would answer to Breiman's description and found only the obvious: the result depends on the context and on what is meant by "extreme." An awful lot could be said, but let me limit it to just one example conducted by means of easily modified R code for interested readers to use in their own investigations. This code begins by setting up a design matrix consisting of approximately uniformly distributed independent values that are approximately orthogonal (so that we don't get into multicollinearity problems). It computes a single quadratic (i.e., nonlinear) interaction between the first two variables: this is only one of many kinds of "nonlinearities" that could be studied, but at least it is a common, well-understood one. Then it standardizes everything so that the coefficients will be comparable:

set.seed(41)
p <- 7   # Dimensions
n <- 2^p # Observations
x <- as.matrix(do.call(expand.grid, lapply(as.list(1:p), function(i) c(-1,1))))
x <- x + runif(n*p, min=-1, max=1)
x <- cbind(x, x.12 = x[,1]*x[,2])                 # The nonlinear part
x <- apply(x, 2, function(y) (y - mean(y))/sd(y)) # Standardization

For the base OLS model (without nonlinearity) we must specify some coefficients and the standard deviation of the residual error. Here is a set of unit coefficients and a comparable SD:

beta <- rep(c(1,-1), p)[1:p]
sd <- 1

To illustrate the situation, here is one hard-coded iteration of the simulation. It generates the dependent variable, summarizes its values, displays the full correlation matrix of all the variables (including the interaction), and displays a scatterplot matrix. Then it performs the OLS regression. 
In the following, the interaction coefficient of $1/4$ is substantially smaller than any of the other coefficients (all equal to $1$ or $-1$), so it would be difficult to call it "extreme":

gamma = 1/4 # The standardized interaction term
df <- data.frame(x)
df$y <- x %*% c(beta, gamma) + rnorm(n, sd=sd)
summary(df)
cor(df)*100
plot(df, lower.panel=function(x,y) lines(lowess(y~x)),
     upper.panel=function(x,y) points(x,y, pch=".", cex=4))
summary(lm(df$y ~ x))

Rather than wade through all the output here, let's look at these data using the output of the plot command: The lowess traces on the lower triangle show essentially no linear relationship between the interaction (x.12) and the dependent variable (y) and modest linear relationships between the other variables and y. The OLS results confirm that; the interaction is scarcely significant:

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.0263     0.0828    0.32    0.751
xVar1         0.9947     0.0833   11.94   <2e-16 ***
xVar2        -0.8713     0.0842  -10.35   <2e-16 ***
xVar3         1.0709     0.0836   12.81   <2e-16 ***
xVar4        -1.0007     0.0840  -11.92   <2e-16 ***
xVar5         1.0233     0.0836   12.24   <2e-16 ***
xVar6        -0.9514     0.0835  -11.40   <2e-16 ***
xVar7         1.0482     0.0835   12.56   <2e-16 ***
xx.12         0.1902     0.0836    2.27    0.025 *

I will take the p-value of the interaction term as a test of nonlinearity: when this p-value is sufficiently low (you can choose just how low), we will have detected the nonlinearity. (There's a subtlety here about what exactly we're looking for. In practice we might need to examine all 7*6/2 = 21 possible such quadratic interactions, as well as perhaps 7 more quadratic terms, rather than focusing on a single term as is done here. We would want to make a correction for these 28 inter-related tests. I do not explicitly make this correction here, because instead I display the simulated distribution of the p-values. You can read the detection rates directly from the histograms at the end based on your thresholds of significance.) 
But let's not do this analysis just once; let's do it lots of times, generating new values of y in each iteration according to the same model and the same design matrix. To accomplish this, we use a function to carry out one iteration and return the p-value of the interaction term:

test <- function(gamma, sd=1) {
  y <- x %*% c(beta, gamma) + rnorm(n, sd=sd)
  fit <- summary(lm(y ~ x))
  m <- coef(fit)
  n <- dim(m)[1]
  m[n, 4]
}

I choose to present the simulation results as histograms of the p-values, varying the standardized coefficient gamma of the interaction term. First, the histograms:

h <- function(g, n.trials=1000) {
  hist(replicate(n.trials, test(g, sd)), xlim=c(0,1),
       main=toString(g), xlab="x1:x2 p-value")
}
par(mfrow=c(2,2)) # Draw a 2 by 2 panel of results

Now to do the work. It takes a few seconds for 1000 trials per simulation (and four independent simulations, starting with the given value of the interaction term and successively halving it each time):

temp <- sapply(2^(-3:0) * gamma, h)

The results: Reading backwards from the lower right, these plots show that for this design matrix x, for this standard deviation of errors sd, and for these standardized coefficients beta, OLS can detect a standardized interaction of $1/4$ (just one-quarter the size of the other coefficients) reliably, over 80% of the time (using a 5% threshold for the p-value--recall the brief discussion about correcting for multiple comparisons, which I am now ignoring); it can often detect an interaction size of $1/8$ (about 20% of the time); it will sometimes detect an interaction of size $1/16$, and really cannot identify any smaller interactions. Not shown here is a histogram for gamma equal to $1/2$, which shows that even when correcting for multiple comparisons, a quadratic interaction this large is almost surely detected. 
Whether you take these interactions, which range in size from $1/32$ to $1/4$, to be "extreme" or not will depend on your perspective, on the regression situation (as expressed by x, sd, and beta), on how many independent tests of nonlinearity you imagine conducting, and, pace Breiman, whom I respect greatly, perhaps on whether you have an axe to grind. You certainly can make it difficult for OLS to detect the nonlinearity: just inflate sd so it swamps the nonlinearity and simultaneously conduct many different tests for goodness of fit. In short, a simulation like this can prove whatever you like if you just set it up and interpret it the right way. That suggests the individual statistician should conduct their own explorations, suitable to the particular problems they face, in order to come to a personal and deep understanding of the capabilities and weaknesses of the procedures they are using.
Difficulty of testing linearity in regression
I created a simulation that would answer to Breiman's description and found only the obvious: the result depends on the context and on what is meant by "extreme." An awful lot could be said, but let m
Difficulty of testing linearity in regression I created a simulation that would answer to Breiman's description and found only the obvious: the result depends on the context and on what is meant by "extreme." An awful lot could be said, but let me limit it to just one example conducted by means of easily modified R code for interested readers to use in their own investigations. This code begins by setting up a design matrix consisting of approximately uniformly distributed independent values that are approximately orthogonal (so that we don't get into multicollinearity problems). It computes a single quadratic (i.e., nonlinear) interaction between the first two variables: this is only one of many kinds of "nonlinearities" that could be studied, but at least it is a common, well-understood one. Then it standardizes everything so that the coefficients will be comparable: set.seed(41) p <- 7 # Dimensions n <- 2^p # Observations x <- as.matrix(do.call(expand.grid, lapply(as.list(1:p), function(i) c(-1,1)))) x <- x + runif(n*p, min=-1, max=1) x <- cbind(x, x.12 = x[,1]*x[,2]) # The nonlinear part x <- apply(x, 2, function(y) (y - mean(y))/sd(y)) # Standardization For the base OLS model (without nonlinearity) we must specify some coefficients and the standard deviation of the residual error. Here is a set of unit coefficients and a comparable SD: beta <- rep(c(1,-1), p)[1:p] sd <- 1 To illustrate the situation, here is one hard-coded iteration of the simulation. It generates the dependent variable, summarizes its values, displays the full correlation matrix of all the variables (including the interaction), and displays a scatterplot matrix. Then it performs the OLS regression. 
In the following, the interaction coefficient of $1/4$ is substantially smaller than any of the other coefficients (all equal to $1$ or $-1$), so it would be difficult to call it "extreme": gamma = 1/4 # The standardized interaction term df <- data.frame(x) df$y <- x %*% c(beta, gamma) + rnorm(n, sd=sd) summary(df) cor(df)*100 plot(df, lower.panel=function(x,y) lines(lowess(y~x)), upper.panel=function(x,y) points(x,y, pch=".", cex=4)) summary(lm(df$y ~ x)) Rather than wade through all the output here, let's look at these data using the output of the plot command: The lowess traces on the lower triangle show essentially no linear relationship between the interaction (x.12) and the dependent variable (y) and modest linear relationships between the other variables and y. The OLS results confirm that; the interaction is scarcely significant: Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.0263 0.0828 0.32 0.751 xVar1 0.9947 0.0833 11.94 <2e-16 *** xVar2 -0.8713 0.0842 -10.35 <2e-16 *** xVar3 1.0709 0.0836 12.81 <2e-16 *** xVar4 -1.0007 0.0840 -11.92 <2e-16 *** xVar5 1.0233 0.0836 12.24 <2e-16 *** xVar6 -0.9514 0.0835 -11.40 <2e-16 *** xVar7 1.0482 0.0835 12.56 <2e-16 *** xx.12 0.1902 0.0836 2.27 0.025 * I will take the p-value of the interaction term as a test of nonlinearity: when this p-value is sufficiently low (you can choose just how low), we will have detected the nonlinearity. (There's a subtlety here about what exactly we're looking for. In practice we might need to examine all 7*6/2 = 21 possible such quadratic interactions, as well as perhaps 7 more quadratic terms, rather than focusing on a single term as is done here. We would want to make a correction for these 28 inter-related tests. I do not explicitly make this correction here, because instead I display the simulated distribution of the p-values. You can read the detection rates directly from the histograms at the end based on your thresholds of significance.) 
But let's not do this analysis just once; let's do it lots of times, generating new values of y in each iteration according to the same model and the same design matrix. To accomplish this, we use a function to carry out one iteration and return the p-value of the interaction term: test <- function(gamma, sd=1) { y <- x %*% c(beta, gamma) + rnorm(n, sd=sd) fit <- summary(lm(y ~ x)) m <- coef(fit) n <- dim(m)[1] m[n, 4] } I choose to present the simulation results as histograms of the p-values, varying the standardized coefficient gamma of the interaction term. First, the histograms: h <- function(g, n.trials=1000) { hist(replicate(n.trials, test(g, sd)), xlim=c(0,1), main=toString(g), xlab="x1:x2 p-value") } par(mfrow=c(2,2)) # Draw a 2 by 2 panel of results Now to do the work. It takes a few seconds for 1000 trials per simulation (and four independent simulations, starting with the given value of the interaction term and successively halving it each time): temp <- sapply(2^(-3:0) * gamma, h) The results: Reading backwards from the lower right, these plots show that for this design matrix x, for this standard deviation of errors sd, and for these standardized coefficients beta, OLS can detect a standardized interaction of $1/4$ (just one-quarter the size of the other coefficients) reliably, over 80% of the time (using a 5% threshold for the p-value--recall the brief discussion about correcting for multiple comparisons, which I am now ignoring); it can often detect an interaction size of $1/8$ (about 20% of the time); it will sometimes detect an interaction of size $1/16$, and really cannot identify any smaller interactions. Not shown here is a histogram for gamma equal to $1/2$, which shows that even when correcting for multiple comparisons, a quadratic interaction this large is almost surely detected. 
Whether you take these interactions, which range in size from $1/32$ to $1/4$, to be "extreme" or not will depend on your perspective, on the regression situation (as expressed by x, sd, and beta), on how many independent tests of nonlinearity you imagine conducting, and, pace Breiman, whom I respect greatly, perhaps on whether you have an axe to grind. You certainly can make it difficult for OLS to detect the nonlinearity: just inflate sd so it swamps the nonlinearity and simultaneously conduct many different tests for goodness of fit. In short, a simulation like this can prove whatever you like if you just set it up and interpret it the right way. That suggests the individual statistician should conduct their own explorations, suitable to the particular problems they face, in order to come to a personal and deep understanding of the capabilities and weaknesses of the procedures they are using.
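For readers who prefer Python, here is a rough sketch of the same power simulation (my own translation, not the original R code; it uses a normal approximation to the t reference distribution for the p-values, which is fine at 91 degrees of freedom, and a fixed seed, so the exact detection rates will differ slightly from the histograms described above):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(17)
n, p = 100, 7
X = rng.standard_normal((n, p))           # fixed design, as in the R simulation
x12 = X[:, 0] * X[:, 1]
x12 = (x12 - x12.mean()) / x12.std()      # standardized interaction term
design = np.column_stack([np.ones(n), X, x12])
beta = np.array([0.0, 1, -1, 1, -1, 1, -1, 1])  # intercept 0, then +/-1 slopes

def interaction_pvalue(gamma, sd=1.0):
    """One trial: simulate y, fit OLS, return an approximate p-value
    of the interaction coefficient."""
    y = design[:, :-1] @ beta + gamma * x12 + rng.normal(0, sd, n)
    xtx_inv = np.linalg.inv(design.T @ design)
    coef = xtx_inv @ design.T @ y
    resid = y - design @ coef
    s2 = resid @ resid / (n - design.shape[1])     # residual variance
    se = np.sqrt(s2 * xtx_inv[-1, -1])             # SE of interaction coef
    # Normal approximation to the t distribution (91 df here)
    return 2 * (1 - NormalDist().cdf(abs(coef[-1] / se)))

rates = {g: np.mean([interaction_pvalue(g) < 0.05 for _ in range(500)])
         for g in (1/4, 1/8, 1/16)}
for g, r in rates.items():
    print(f"gamma = {g}: detected in {r:.0%} of trials")
```

The qualitative pattern matches the account above: an interaction of $1/4$ is detected most of the time, $1/8$ sometimes, $1/16$ rarely.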
12,283
Difficulty of testing linearity in regression
Not sure it gives a final answer to the question, but I would give a look at this. Especially point 2. See also the discussion in appendix A2 of the paper.
12,284
Residual diagnostics in MCMC-based regression models
I think the use of the term residual is not consistent with Bayesian regression. Remember, in frequentist probability models, it's the parameters which are considered fixed estimable quantities and the data generating mechanism has some random probability model associated with observed data. For Bayesians, the parameters of probability models are considered to be variable and the fixed data update our belief about what those parameters are. Therefore, if you were calculating the variance of the observed minus fitted values in a regression model, the observed component would have 0 variance whereas the fitted component would vary as a function of the posterior probability density for the model parameters. This is the opposite of what you would derive from the frequentist regression model. I think if one were interested in checking the probabilistic assumptions of their Bayesian regression model, a simple Q-Q plot of the posterior density of parameter estimates (estimated from our MCMC sampling) versus a normal distribution would have diagnostic power analogous to analyzing residuals (or Pearson residuals for non-linear link functions).
12,285
Has anyone solved PTLOS exercise 4.1?
For the record, here is a somewhat more extensive proof. It also contains some background information. Maybe this is helpful for others studying the topic. The main idea of the proof is to show that Jaynes' conditions 1 and 2 imply that $$P(D_{m_k}|H_iX)=P(D_{m_k}|X),$$ for all but one data set $m_k=1,\ldots,m$. It then shows that for all these data sets, we also have $$P(D_{m_k}|\overline H_iX)=P(D_{m_k}|X).$$ Thus we have for all but one data set, $$\frac{P(D_{m_k}|H_iX)}{P(D_{m_k}|\overline H_iX)} = \frac{P(D_{m_k}|X)}{P(D_{m_k}|X)} = 1.$$ The reason that I wanted to include the proof here is that some of the steps involved are not at all obvious, and one needs to take care not to use anything else than conditions 1 and 2 and the product rule (as many of the other proofs implicitly do). The link above includes all these steps in detail. It is on my Google Drive and I will make sure it stays accessible.
12,286
Has anyone solved PTLOS exercise 4.1?
The reason we accepted eq. 4.28 (in the book, your condition 1) was that we assumed the probability of the data given a certain hypothesis $H_a$ and background information $X$ is independent, in other words for any $D_i$ and $D_j$ with $i\neq{j}$: \begin{equation}P(D_i|D_jH_aX)=P(D_i|H_aX)\quad\quad{\rm (1)}\end{equation} Nonextensibility beyond the binary case can therefore be discussed like this: If we assume eq.1 to be true, is eq.2 also true? \begin{equation}P(D_i|D_j\overline{H_a}X)\stackrel{?}{=}P(D_i|\overline{H_a}X)\quad\quad{\rm (2)}\end{equation} First let's look at the left side of eq.2, using the multiplication rule: \begin{equation}P(D_i|D_j\overline{H_a}X)=\frac{P(D_iD_j\overline{H_a}|X)}{P(D_j\overline{H_a}|X)}\quad\quad{\rm (3)}\end{equation} Since the $n$ hypotheses $\{H_1\dots{H_n}\}$ are assumed mutually exclusive and exhaustive, we can write: $$\overline{H_a}=\sum_{b\neq{a}}H_b$$So eq.3 becomes: $$P(D_i|D_j\overline{H_a}X)=\frac{\sum_{b\neq{a}}P(D_i|D_jH_bX)P(D_jH_b|X)}{\sum_{b\neq{a}}P(D_jH_b|X)}=\frac{\sum_{b\neq{a}}P(D_i|H_bX)P(D_jH_b|X)}{\sum_{b\neq{a}}P(D_jH_b|X)}$$For the case that we have only two hypotheses, the summations are removed (since there is only one $b\neq{a}$), the equal terms in the numerator and denominator, $P(D_jH_b|X)$, cancel out, and eq.2 is proved correct, since $H_b=\overline{H_a}$. Therefore equation 4.29 can be derived from equation 4.28 in the book. But when we have more than two hypotheses, this doesn't happen. For example, if we have three hypotheses $\{H_1, H_2, H_3\}$, the equation above becomes:$$P(D_i|D_j\overline{H_1}X)=\frac{P(D_i|H_2X)P(D_jH_2|X)+P(D_i|H_3X)P(D_jH_3|X)}{P(D_jH_2|X)+P(D_jH_3|X)}$$In other words: $$P(D_i|D_j\overline{H_1}X)=\frac{P(D_i|H_2X)}{1+\frac{P(D_jH_3|X)}{P(D_jH_2|X)}}+\frac{P(D_i|H_3X)}{1+\frac{P(D_jH_2|X)}{P(D_jH_3|X)}}$$The only way this equation can yield eq.2 is for both denominators to equal 1, i.e. both fractions in the denominators must equal zero. But that is impossible.
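A quick numeric check of this conclusion (my own sketch; the priors and likelihoods are arbitrary made-up numbers): with two hypotheses the complement of $H_a$ is a single hypothesis, so the independence of eq.1 carries over to eq.2 exactly, while with three hypotheses the complement is a mixture and independence fails.

```python
from fractions import Fraction as F

def complement_indep_gap(h, d1, d2, a=0):
    """With D_i, D_j conditionally independent given each hypothesis
    (condition 1 built in), return
    P(D_i D_j | not-H_a) - P(D_i | not-H_a) * P(D_j | not-H_a),
    which is zero exactly when eq.2 holds.
    h[k] = P(H_k), d1[k] = P(D_i|H_k), d2[k] = P(D_j|H_k)."""
    ks = [k for k in range(len(h)) if k != a]
    z = sum(h[k] for k in ks)
    p_both = sum(h[k] * d1[k] * d2[k] for k in ks) / z
    p1 = sum(h[k] * d1[k] for k in ks) / z
    p2 = sum(h[k] * d2[k] for k in ks) / z
    return p_both - p1 * p2

# Two hypotheses: the complement of H_0 is just H_1, so the gap is exactly 0.
two = complement_indep_gap([F(1, 2), F(1, 2)],
                           [F(1, 3), F(2, 3)], [F(1, 4), F(3, 4)])
# Three hypotheses: the complement is a mixture, and independence fails.
three = complement_indep_gap([F(1, 3)] * 3,
                             [F(1, 3), F(2, 3), F(1, 3)],
                             [F(1, 4), F(1, 4), F(3, 4)])
print(two, three)  # -> 0 -1/24
```

Exact rational arithmetic makes the zero/non-zero distinction unambiguous.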
12,287
Has anyone solved PTLOS exercise 4.1?
Okay, so rather than go and re-derive Saunder's equation (5), I will just state it here. Conditions 1 and 2 imply the following equality: $$\prod_{j=1}^{m}\left(\sum_{k\neq i}h_{k}d_{jk}\right)=\left(\sum_{k\neq i}h_{k}\right)^{m-1}\left(\sum_{k\neq i}h_{k}\prod_{j=1}^{m}d_{jk}\right)$$ where $$d_{jk}=P(D_{j}|H_{k},I)\;\;\;\;h_{k}=P(H_{k}|I)$$ Now we can specialise to the case $m=2$ (two data sets) by taking $D_{1}^{(1)}\equiv D_{1}$ and relabeling $D_{2}^{(1)}\equiv D_{2}D_{3}\dots D_{m}$. Note that these two data sets still satisfy conditions 1 and 2, so the result above applies to them as well. Now expanding in the case $m=2$ we get: $$\left(\sum_{k\neq i}h_{k}d_{1k}\right)\left(\sum_{l\neq i}h_{l}d_{2l}\right)=\left(\sum_{k\neq i}h_{k}\right)\left(\sum_{l\neq i}h_{l}d_{1l}d_{2l}\right)$$ $$\rightarrow\sum_{k\neq i}\sum_{l\neq i}h_{k}h_{l}d_{1k}d_{2l}=\sum_{k\neq i}\sum_{l\neq i}h_{k}h_{l}d_{1l}d_{2l}$$ $$\rightarrow\sum_{k\neq i}\sum_{l\neq i}h_{k}h_{l}d_{2l}(d_{1k}-d_{1l})=0\;\;\;\;\;\;\; (i=1,\dots,n)$$ The term $(d_{1a}-d_{1b})$ occurs twice in the above double summation, once when $k=a$ and $l=b$, and once again when $k=b$ and $l=a$. This will occur as long as $a,b\neq i$. The coefficient of each term is given by $d_{2b}$ and $-d_{2a}$. Now because there is one of these equations for each $i$, we can actually remove $i$ from these equations. To illustrate, take $i=1$; this means we have all conditions except where $a=1,b=2$ and $b=1,a=2$. Now take $i=3$, and we now can have these two conditions (note this assumes at least three hypotheses). So the equation can be re-written as: $$\sum_{l>k}h_{k}h_{l}(d_{2l}-d_{2k})(d_{1k}-d_{1l})=0$$ Now each of the $h_i$ terms must be greater than zero, for otherwise we are dealing with $n_{1}<n$ hypotheses, and the answer can be reformulated in terms of $n_{1}$.
So these can be removed from the above set of conditions: $$\sum_{l>k}(d_{2l}-d_{2k})(d_{1k}-d_{1l})=0$$ Thus, there are $\frac{n(n-1)}{2}$ conditions that must be satisfied, and each condition implies one of two "sub-conditions": that $d_{jk}=d_{jl}$ for either $j=1$ or $j=2$ (but not necessarily both). Now we have a set of all of the unique pairs $(k,l)$ for $d_{jk}=d_{jl}$. If we were to take $n-1$ of these pairs for one of the $j$, then we would have all the numbers $1,\dots,n$ in the set, and $d_{j1}=d_{j2}=\dots=d_{j,n-1}=d_{j,n}$. This is because the first pair has $2$ elements, and each additional pair brings at least one additional element to the set.* But note that because there are $\frac{n(n-1)}{2}$ conditions, we must choose at least the smallest integer greater than or equal to $\frac{1}{2}\times\frac{n(n-1)}{2}=\frac{n(n-1)}{4}$ for one of $j=1$ or $j=2$. If $n>4$ then the number of terms chosen is greater than $n-1$. If $n=4$ or $n=3$ then we must choose exactly $n-1$ terms. This implies that $d_{j1}=d_{j2}=\dots=d_{j,n-1}=d_{j,n}$. Only with two hypotheses ($n=2$) does this not occur. But from the last equation in Saunder's article this equality condition implies: $$P(D_{j}|\overline{H}_{i})=\frac{\sum_{k\neq i}d_{jk}h_{k}}{\sum_{k\neq i}h_{k}}=d_{ji}\frac{\sum_{k\neq i}h_{k}}{\sum_{k\neq i}h_{k}}=d_{ji}=P(D_{j}|H_{i})$$ Thus, in the likelihood ratio we have: $$\frac{P(D_{1}^{(1)}|H_{i})}{P(D_{1}^{(1)}|\overline{H}_{i})}=\frac{P(D_{1}|H_{i})}{P(D_{1}|\overline{H}_{i})}=1 \text{ OR } \frac{P(D_{2}^{(1)}|H_{i})}{P(D_{2}^{(1)}|\overline{H}_{i})}=\frac{P(D_{2}D_{3}\dots D_{m}|H_{i})}{P(D_{2}D_{3}\dots D_{m}|\overline{H}_{i})}=1$$ To complete the proof, note that if the second condition holds, the result is already proved, and only one ratio can be different from 1. If the first condition holds, then we can repeat the above analysis by relabeling $D_{1}^{(2)}\equiv D_{2}$ and $D_{2}^{(2)}\equiv D_{3}\dots D_{m}$.
Then we would have $D_{1},D_{2}$ not contributing, or $D_{2}$ being the only contributor. We would then have a third relabeling when $D_{1}D_{2}$ not contributing holds, and so on. Thus, only one data set can contribute to the likelihood ratio when conditions 1 and 2 hold and there are more than two hypotheses. *NOTE: An additional pair might bring no new terms, but this would be offset by a pair which brought 2 new terms, e.g. take $d_{j1}=d_{j2}$ as first [+2], $d_{j1}=d_{j3}$ [+1] and $d_{j2}=d_{j3}$ [+0], but the next term must have $d_{jk}=d_{jl}$ for both $k,l\notin (1,2,3)$. This will add two terms [+2]. If $n=4$ then we don't need to choose any more, but for the "other" $j$ we must choose the 3 pairs which are not $(1,2),(2,3),(1,3)$. These are $(1,4),(2,4),(3,4)$ and thus the equality holds, because all numbers $(1,2,3,4)$ are in the set.
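As a sanity check on the conclusion (my own sketch, with made-up priors and likelihoods): if only one data set discriminates between the hypotheses, i.e. $d_{jk}$ is constant in $k$ for every other $j$, which is the situation the proof says conditions 1 and 2 force, then the stated identity holds exactly and the non-discriminating sets have unit likelihood ratios.

```python
from fractions import Fraction as F
from math import prod

# Priors over n = 3 hypotheses and m = 3 data sets. Only data set j = 0
# discriminates between the hypotheses; for the others, d_jk is constant in k.
h = [F(1, 2), F(1, 3), F(1, 6)]
d = [
    [F(1, 2), F(1, 5), F(3, 4)],   # j = 0: varies with k (informative)
    [F(1, 3), F(1, 3), F(1, 3)],   # j = 1: constant in k
    [F(2, 5), F(2, 5), F(2, 5)],   # j = 2: constant in k
]
m, n = len(d), len(h)

for i in range(n):
    ks = [k for k in range(n) if k != i]
    H = sum(h[k] for k in ks)
    lhs = prod(sum(h[k] * d[j][k] for k in ks) for j in range(m))
    rhs = H**(m - 1) * sum(h[k] * prod(d[j][k] for j in range(m)) for k in ks)
    assert lhs == rhs          # the stated identity holds exactly

# ...and the non-discriminating data sets all have unit likelihood ratios:
for j in (1, 2):
    for i in range(n):
        ks = [k for k in range(n) if k != i]
        p_bar = sum(d[j][k] * h[k] for k in ks) / sum(h[k] for k in ks)
        assert d[j][i] / p_bar == 1
print("identity and unit likelihood ratios verified")
```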
12,288
Has anyone solved PTLOS exercise 4.1?
Here's a visual example for some intuition (note: for simplification, I've omitted writing that all the probabilities below are conditioned on the background information $X$, as is done in the book, but you should assume that they are). Left column: $H_i$ Middle column: $P(D_1)$ and $P(\overline{D_1})$ Right column: $P(D_2)$ and $P(\overline{D_2})$ Green: $P(D_j)$ Red: $P(\overline{D_j})$. Here we see that condition 1 is true: all $P(D_1|H_i)$ are independent of $P(D_2|H_i)$, meaning that for each hypothesis, the probabilities in the right column for $D_2$ are the same regardless of whether we are looking at the one associated with $D_1$ or $\overline{D_1}$. Or in other words, $P(D_2|D_1 H_i)$ = $P(D_2|H_i)$. However, this does not imply condition 2, because in this example, condition 2 is false: the $P(D_1|\overline{H_i})$ are not independent of the $P(D_2|\overline{H_i})$. For example, if they were, then this would be true: $$P(D_2|D_1 \overline{H_1}) = P(D_2|\overline{H_1})$$ But if we solve for $P(D_2|D_1 \overline{H_1})$ and $P(D_2|\overline{H_1})$ (note: a simpler version of these equations, using just the heights of the boxes in the image, is shown below*): $$ \begin{align} P(D_2|D_1 \overline{H_1}) &= \frac{P(D_2|D_1 H_2)P(D_1|H_2)P(H_2) + P(D_2|D_1 H_3)P(D_1|H_3)P(H_3)}{P(D_1|H_2)P(H_2) + P(D_1|H_3)P(H_3)} \\ &= \frac{\frac{1}{3}\frac{2}{3}\frac{1}{3} + \frac{2}{3}\frac{1}{3}\frac{1}{3}}{\frac{2}{3}\frac{1}{3} + \frac{1}{3}\frac{1}{3}} = \frac{\frac{2}{27} + \frac{2}{27}}{\frac{2}{9} + \frac{1}{9}} = \frac{\frac{4}{27}}{\frac{3}{9}} = \frac{\frac{4}{27}}{\frac{1}{3}} = \frac{4}{9} \end{align} $$ And: $$ \begin{align} P(D_2|\overline{H_1}) &= \frac{P(D_2|H_2)P(H_2) + P(D_2|H_3)P(H_3)}{P(H_2) + P(H_3)} \\ &= \frac{\frac{1}{3}\frac{1}{3} + \frac{2}{3}\frac{1}{3}}{\frac{1}{3} + \frac{1}{3}} = \frac{\frac{1}{9} + \frac{2}{9}}{\frac{2}{3}} = \frac{\frac{3}{9}}{\frac{2}{3}} = \frac{\frac{1}{3}}{\frac{2}{3}} = \frac{1}{2} \end{align} $$ *Or we can express these same equations
using the heights of the boxes in the image above. I'll define $h(P(...))$ to mean the height of $P(...)$ in the image. So: $$ \begin{align} P(D_2|D_1 \overline{H_1}) &= \frac{h(P(D_2|D_1 H_2)) + h(P(D_2|D_1 H_3))}{h(P(D_1|H_2)) + h(P(D_1|H_3))} \\ &= \frac{2 + 2}{6 + 3} = \frac{4}{9} \end{align} $$ And: $$ \begin{align} P(D_2|\overline{H_1}) &= \frac{h(P(D_2|D_1 H_2)) + h(P(D_2|\overline{D_1} H_2)) + h(P(D_2|D_1 H_3)) + h(P(D_2|\overline{D_1} H_3))}{h(P(H_2)) + h(P(H_3))} \\ &= \frac{2 + 1 + 2 + 4}{9 + 9} = \frac{9}{18} = \frac{1}{2} \end{align} $$ So: $$P(D_2|D_1 \overline{H_1}) \neq P(D_2|\overline{H_1})$$ Meaning that $P(D_1|\overline{H_1})$ and $P(D_2|\overline{H_1})$ are not independent. However, we can regain independence if we adjust the example above by changing $P(D_1|H_3) = 2/3$ and $P(\overline{D_1}|H_3) = 1/3$ like so: Now all $P(D_1|H_i)$ are the same for all hypotheses. Meaning that knowing $D_1$ no longer gives us any additional information about which hypothesis is more likely, and only $D_2$ gives us any information about which hypothesis is more likely.
And we see that if we now check that same independence equation as above, equality is restored: $$ \begin{align} P(D_2|D_1 \overline{H_1}) &= \frac{P(D_2|D_1 H_2)P(D_1|H_2)P(H_2) + P(D_2|D_1 H_3)P(D_1|H_3)P(H_3)}{P(D_1|H_2)P(H_2) + P(D_1|H_3)P(H_3)} \\ &= \frac{\frac{1}{3}\frac{2}{3}\frac{1}{3} + \frac{2}{3}\frac{2}{3}\frac{1}{3}}{\frac{2}{3}\frac{1}{3} + \frac{2}{3}\frac{1}{3}} = \frac{\frac{2}{27} + \frac{4}{27}}{\frac{2}{9} + \frac{2}{9}} = \frac{\frac{6}{27}}{\frac{4}{9}} = \frac{\frac{2}{9}}{\frac{4}{9}} = \frac{2}{4} = \frac{1}{2} \end{align} $$ And: $$ \begin{align} P(D_2|\overline{H_1}) &= \frac{P(D_2|H_2)P(H_2) + P(D_2|H_3)P(H_3)}{P(H_2) + P(H_3)} \\ &= \frac{\frac{1}{3}\frac{1}{3} + \frac{2}{3}\frac{1}{3}}{\frac{1}{3} + \frac{1}{3}} = \frac{\frac{1}{9} + \frac{2}{9}}{\frac{2}{3}} = \frac{\frac{3}{9}}{\frac{2}{3}} = \frac{\frac{1}{3}}{\frac{2}{3}} = \frac{1}{2} \end{align} $$ Or by using the heights in the image: $$ \begin{align} P(D_2|D_1 \overline{H_1}) &= \frac{h(P(D_2|D_1 H_2)) + h(P(D_2|D_1 H_3))}{h(P(D_1|H_2)) + h(P(D_1|H_3))} \\ &= \frac{2 + 4}{6 + 6} = \frac{6}{12} = \frac{1}{2} \end{align} $$ And: $$ \begin{align} P(D_2|\overline{H_1}) &= \frac{h(P(D_2|D_1 H_2)) + h(P(D_2|\overline{D_1} H_2)) + h(P(D_2|D_1 H_3)) + h(P(D_2|\overline{D_1} H_3))}{h(P(H_2)) + h(P(H_3))} \\ &= \frac{2 + 1 + 4 + 2}{9 + 9} = \frac{9}{18} = \frac{1}{2} \end{align} $$ So: $$P(D_2|D_1 \overline{H_1}) = P(D_2|\overline{H_1})$$ Meaning that $P(D_1|\overline{H_1})$ and $P(D_2|\overline{H_1})$ are now independent. So while this is not a proof that covers all cases, it's a simple example that shows how when more than one of the $D_j$ values gives information about which hypothesis is more likely, then the $P(D_j|\overline{H_i})$ values can no longer be considered independent. And you can use this example to try to visualize how this would also be true for more complicated examples. P.S.
- If anyone sees any errors in this example or my way of thinking about this idea, please let me know.
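The arithmetic in this example is easy to check mechanically. A short Python sketch (mine, using exact fractions) reproducing the two comparisons above:

```python
from fractions import Fraction as F

h = F(1, 3)                       # equal prior on each of H1, H2, H3
d2 = {2: F(1, 3), 3: F(2, 3)}     # P(D2|H2), P(D2|H3), as in the image

def p_d2_given_d1_not_h1(d1):
    """P(D2 | D1, not-H1), with D1 and D2 conditionally independent
    given each hypothesis."""
    num = sum(h * d1[k] * d2[k] for k in (2, 3))
    den = sum(h * d1[k] for k in (2, 3))
    return num / den

def p_d2_given_not_h1():
    """P(D2 | not-H1)."""
    return sum(h * d2[k] for k in (2, 3)) / (h + h)

# First version of the example: P(D1|H2) = 2/3, P(D1|H3) = 1/3
first = p_d2_given_d1_not_h1({2: F(2, 3), 3: F(1, 3)})
print(first, p_d2_given_not_h1())   # -> 4/9 1/2  (not independent)

# Adjusted version: P(D1|H3) raised to 2/3, so D1 is uninformative
second = p_d2_given_d1_not_h1({2: F(2, 3), 3: F(2, 3)})
print(second, p_d2_given_not_h1())  # -> 1/2 1/2  (independence restored)
```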
Has anyone solved PTLOS exercise 4.1?
Here's a visual example for some intuition (note: for simplification, I've omitted writing that all the probabilities below are conditioned on the background information $X$, as is done in the book, b
Has anyone solved PTLOS exercise 4.1? Here's a visual example for some intuition (note: for simplification, I've omitted writing that all the probabilities below are conditioned on the background information $X$, as is done in the book, but you should assume that they are). Left column: $H_i$ Middle column: $P(D_1)$ and $P(\overline{D_1})$ Right column: $P(D_2)$ and $P(\overline{D_2})$ Green: $P(D_j)$ Red: $P(\overline{D_j})$. Here we see that condition 1 is true: all $P(D_1|H_i)$ are independent of $P(D_2|H_i)$, meaning that for each hypothesis, the probabilities in the right column for $D_2$ are the same regardless if we are looking at the one associated with $D_1$ or $\overline{D_1}$. Or in other words, $P(D_2|D_1 H_i)$ = $P(D_2|H_i)$. However, this does not imply condition 2, because in this example, condition 2 is false: all $P(D_1|\overline{H_i})$ are not independent of $P(D_2|\overline{H_1})$. For example, if they were, then this would be true: $$P(D_2|D_1 \overline{H_1}) = P(D_2|\overline{H_1})$$ But if we solve for $P(D_2|D_1 \overline{H_1})$ and $P(D_2|\overline{H_1})$ (note: a simpler version of these equation just using the heights of the boxes in the image is shown below*): $$ \begin{align} P(D_2|D_1 \overline{H_1}) &= \frac{P(D_2|D_1 H_2)P(D_1|H_2)P(H_2) + P(D_2|D_1 H_3)P(D_1|H_3)P(H_3)}{P(D_1|H_2)P(H_2) + P(D_1|H_3)P(H_3)} \\ &= \frac{\frac{1}{3}\frac{2}{3}\frac{1}{3} + \frac{2}{3}\frac{1}{3}\frac{1}{3}}{\frac{2}{3}\frac{1}{3} + \frac{1}{3}\frac{1}{3}} = \frac{\frac{2}{27} + \frac{2}{27}}{\frac{2}{9} + \frac{1}{9}} = \frac{\frac{4}{27}}{\frac{3}{9}} = \frac{\frac{4}{27}}{\frac{1}{3}} = \frac{4}{9} \end{align} $$ And: $$ \begin{align} P(D_2|\overline{H_1}) &= \frac{P(D_2|H_2)P(H_2) + P(D_2|H_3)P(H_3)}{P(H_2) + P(H_3)} \\ &= \frac{\frac{1}{3}\frac{1}{3} + \frac{2}{3}\frac{1}{3}}{\frac{1}{3} + \frac{1}{3}} = \frac{\frac{1}{9} + \frac{2}{9}}{\frac{2}{3}} = \frac{\frac{3}{9}}{\frac{2}{3}} = \frac{\frac{1}{3}}{\frac{2}{3}} = \frac{1}{2} \end{align} $$ *Or 
we can express these same equations using the heights of the boxes in the image above. I'll define $h(P(...))$ to mean the height of $P(...)$ in the image. So: $$ \begin{align} P(D_2|D_1 \overline{H_1}) &= \frac{h(P(D_2|D_1 H_2)) + h(P(D_2|D_1 H_3))}{h(P(D_1|H_3)) + h(P(D_1|H_3))} \\ &= \frac{2 + 2}{6 + 3} = \frac{4}{9} \end{align} $$ And: $$ \begin{align} P(D_2|\overline{H_1}) &= \frac{h(P(D_2|D_1 H_2)) + h(P(D_2|\overline{D_1} H_2)) + h(P(D_2|D_1 H_3)) + h(P(D_2|\overline{D_1} H_3))}{h(P(H_2)) + h(P(H_3))} \\ &= \frac{2 + 1 + 2 + 4}{9 + 9} = \frac{9}{18} = \frac{1}{2} \end{align} $$ So: $$P(D_2|D_1 \overline{H_1}) \neq P(D_2|\overline{H_1})$$ Meaning that $P(D_1|\overline{H_1})$ and $P(D_2|\overline{H_1})$ are not independent. However, we can regain independence if we adjust the example above by changing $P(D_1|H_3) = 2/3$ and $P(\overline{D_1}|H_3) = 1/3$ like so: Now all $P(D_1|H_i)$ are the same for all hypotheses. Meaning that knowing $D1$ no longer gives us any additional information about which hypothesis is more likely, and only $D2$ gives us any information about which hypothesis is more likely. 
And we see that if we now check that same independence equation as above, equality is restored: $$ \begin{align} P(D_2|D_1 \overline{H_1}) &= \frac{P(D_2|D_1 H_2)P(D_1|H_2)P(H_2) + P(D_2|D_1 H_3)P(D_1|H_3)P(H_3)}{P(D_1|H_2)P(H_2) + P(D_1|H_3)P(H_3)} \\ &= \frac{\frac{1}{3}\frac{2}{3}\frac{1}{3} + \frac{2}{3}\frac{2}{3}\frac{1}{3}}{\frac{2}{3}\frac{1}{3} + \frac{2}{3}\frac{1}{3}} = \frac{\frac{2}{27} + \frac{4}{27}}{\frac{2}{9} + \frac{2}{9}} = \frac{\frac{6}{27}}{\frac{4}{9}} = \frac{\frac{2}{9}}{\frac{4}{9}} = \frac{2}{4} = \frac{1}{2} \end{align} $$ And: $$ \begin{align} P(D_2|\overline{H_1}) &= \frac{P(D_2|H_2)P(H_2) + P(D_2|H_3)P(H_3)}{P(H_2) + P(H_3)} \\ &= \frac{\frac{1}{3}\frac{1}{3} + \frac{2}{3}\frac{1}{3}}{\frac{1}{3} + \frac{1}{3}} = \frac{\frac{1}{9} + \frac{2}{9}}{\frac{2}{3}} = \frac{\frac{3}{9}}{\frac{2}{3}} = \frac{\frac{1}{3}}{\frac{2}{3}} = \frac{1}{2} \end{align} $$ Or by using the heights in the image: $$ \begin{align} P(D_2|D_1 \overline{H_1}) &= \frac{h(P(D_2|D_1 H_2)) + h(P(D_2|D_1 H_3))}{h(P(D_1|H_3)) + h(P(D_1|H_3))} \\ &= \frac{2 + 4}{6 + 6} = \frac{6}{12} = \frac{1}{2} \end{align} $$ And: $$ \begin{align} P(D_2|\overline{H_1}) &= \frac{h(P(D_2|D_1 H_2)) + h(P(D_2|\overline{D_1} H_2)) + h(P(D_2|D_1 H_3)) + h(P(D_2|\overline{D_1} H_3))}{h(P(H_2)) + h(P(H_3))} \\ &= \frac{2 + 1 + 4 + 2}{9 + 9} = \frac{9}{18} = \frac{1}{2} \end{align} $$ So: $$P(D_2|D_1 \overline{H_1}) = P(D_2|\overline{H_1})$$ Meaning that $P(D_1|\overline{H_1})$ and $P(D_2|\overline{H_1})$ are now independent. So while this is not a proof that covers all cases, it's a simple example that shows how when more than one of the $D_j$ values gives information about which hypothesis is more likely, then the $P(D_j|\overline{H_i})$ values can no longer be considered independent. And you can use this example to try to visualize how this would also be true for more complicated examples. P.S. 
- If anyone sees any errors in this example or my way of thinking about this idea, please let me know.
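P.P.S. As a cross-check, the two quantities in the adjusted example can be recomputed directly from the listed heights (exact rational arithmetic, to avoid any floating-point doubt):

```python
from fractions import Fraction as F

# Heights from the adjusted example, where all P(D1|Hi) = 2/3
h_H2, h_H3 = 9, 9                  # h(P(H2)), h(P(H3))
h_D1_H2, h_D1_H3 = 6, 6            # h(P(D1|H2)), h(P(D1|H3))
h_D2_D1_H2, h_D2_D1_H3 = 2, 4      # h(P(D2|D1 H2)), h(P(D2|D1 H3))
h_D2_nD1_H2, h_D2_nD1_H3 = 1, 2    # h(P(D2|~D1 H2)), h(P(D2|~D1 H3))

p_D2_given_D1_notH1 = F(h_D2_D1_H2 + h_D2_D1_H3, h_D1_H2 + h_D1_H3)
p_D2_given_notH1 = F(h_D2_D1_H2 + h_D2_nD1_H2 + h_D2_D1_H3 + h_D2_nD1_H3,
                     h_H2 + h_H3)
print(p_D2_given_D1_notH1, p_D2_given_notH1)  # 1/2 1/2
```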
AIC & BIC number interpretation
$AIC$ for model $i$ of an a priori model set can be rescaled to $\mathsf{\Delta}_i=AIC_i-minAIC$ where the best model of the model set will have $\mathsf{\Delta}=0$. We can use the $\mathsf{\Delta}_i$ values to estimate the strength of evidence ($w_i$) for all models in the model set where: $$ w_i = \frac{e^{(-0.5\mathsf{\Delta}_i)}}{\sum_{r=1}^Re^{(-0.5\mathsf{\Delta}_r)}}. $$ This is often referred to as the "weight of evidence" for model $i$ given the a priori model set. As $\mathsf{\Delta}_i$ increases, $w_i$ decreases, suggesting model $i$ is less plausible. These $w_i$ values can be interpreted as the probability that model $i$ is the best model given the a priori model set. We could also calculate the relative likelihood of model $i$ versus model $j$ as $w_i/w_j$. For example, if $w_i = 0.8$ and $w_j = 0.1$ then we could say model $i$ is 8 times more likely than model $j$. Note, $w_1/w_2 = e^{0.5\Delta_2}$ when model 1 is the best model (smallest $AIC$). Burnham and Anderson (2002) term this the evidence ratio. This table shows how the evidence ratio changes with respect to the best model.

Information Loss (Delta)    Evidence Ratio
0                           1.0
2                           2.7
4                           7.4
8                           54.6
10                          148.4
12                          403.4
15                          1808.0

References
Burnham, K. P., and D. R. Anderson. 2002. Model selection and multimodel inference: a practical information-theoretic approach. Second edition. Springer, New York, USA.
Anderson, D. R. 2008. Model based inference in the life sciences: a primer on evidence. Springer, New York, USA.
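The arithmetic above is easy to sketch in code; the three AIC values here are made up purely for illustration:

```python
import math

def akaike_weights(aic_values):
    """Delta_i = AIC_i - min(AIC); w_i = exp(-0.5*Delta_i) / sum_r exp(-0.5*Delta_r)."""
    min_aic = min(aic_values)
    deltas = [a - min_aic for a in aic_values]
    terms = [math.exp(-0.5 * d) for d in deltas]
    total = sum(terms)
    return deltas, [t / total for t in terms]

# Hypothetical AIC values for a three-model a priori model set
deltas, weights = akaike_weights([102.3, 104.3, 110.3])
print([round(d, 1) for d in deltas])      # [0.0, 2.0, 8.0]
print([round(w, 3) for w in weights])     # [0.721, 0.265, 0.013]
# Evidence ratio of the best model vs model 2: e^{0.5 * Delta_2} = e^1
print(round(weights[0] / weights[1], 1))  # 2.7
```

Note that the evidence ratio of 2.7 for Delta = 2 matches the first nonzero row of the table above.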
AIC & BIC number interpretation
I don't think there is any simple interpretation of AIC or BIC like that. They are both quantities that take the log-likelihood and apply a penalty to it for the number of parameters being estimated. The specific penalty for AIC is explained by Akaike in his papers starting in 1974. BIC was introduced by Gideon Schwarz in his 1978 paper and is motivated by a Bayesian argument.
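For concreteness, the two penalized quantities are $AIC = 2k - 2\ln\hat{L}$ and $BIC = k\ln n - 2\ln\hat{L}$; here is a toy computation with an assumed maximized log-likelihood:

```python
import math

def aic(log_lik, k):
    # AIC: constant penalty of 2 per estimated parameter
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    # BIC: penalty grows with the log of the sample size
    return k * math.log(n) - 2 * log_lik

# Hypothetical fit: maximized log-likelihood -50.0, 3 parameters, 100 observations
print(aic(-50.0, 3))                 # 106.0
print(round(bic(-50.0, 3, 100), 2))  # 113.82
```

For $n \geq 8$ observations the BIC penalty exceeds the AIC penalty, which is why BIC tends to pick smaller models.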
AIC & BIC number interpretation
You probably use the BIC because it arises as an approximation to the Bayes factor; as a result you don't (more or less) need to specify a prior distribution. BIC is useful at the model selection stage when you compare models. To fully understand BIC and Bayes factors I highly recommend reading this article (sec. 4): http://www.stat.washington.edu/raftery/Research/PDF/socmeth1995.pdf and supplementing it with: http://www.stat.washington.edu/raftery/Research/PDF/kass1995.pdf
Calibrating a multi-class boosted classifier
This is a topic of practical interest to me as well, so I did a little research. Here are two papers by an author who is often listed as a reference in these matters: Transforming classifier scores into accurate multiclass probability estimates; Reducing multiclass to binary by coupling probability estimates. The gist of the technique advocated here is to reduce the multiclass problem to a binary one (e.g. one versus the rest, AKA one versus all), use a technique like Platt scaling (preferably using a test set) to calibrate the binary scores/probabilities, and then combine these using a technique discussed in the papers (one is an extension of a Hastie et al. process of "coupling"). In the first link, the best results were found by simply normalizing the binary probabilities so that they sum to 1. I would love to hear other advice, and whether any of these techniques have been implemented in R.
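The final combination step that worked best in the first paper (plain renormalization of already-calibrated one-vs-rest probabilities) is trivial to sketch; the input numbers below are made-up stand-ins for Platt-calibrated binary scores:

```python
def normalize_ovr(binary_probs):
    """Combine calibrated one-vs-rest probabilities by renormalizing
    them to sum to 1 (the simple scheme from the first paper)."""
    total = sum(binary_probs)
    return [p / total for p in binary_probs]

# Hypothetical calibrated one-vs-rest outputs for a 3-class problem
probs = normalize_ovr([0.7, 0.2, 0.4])
print([round(p, 3) for p in probs])  # [0.538, 0.154, 0.308]
```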
What causes sudden drops in training/test errors when training a neural network?
They changed the learning rate. Note the drop is at exactly 30 and 60 epochs, obviously set manually by someone.
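Such a manual schedule might look like the following sketch; the base rate, milestones and decay factor here are assumptions read off the plot, not taken from any particular paper:

```python
def step_lr(epoch, base_lr=0.1, milestones=(30, 60), factor=0.1):
    """Step-decay schedule: multiply the learning rate by `factor`
    (i.e. divide by 10) at each milestone epoch."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= factor
    return lr

print(step_lr(10))            # 0.1
print(round(step_lr(45), 3))  # 0.01
print(round(step_lr(75), 4))  # 0.001
```

Each drop in the loss curve lines up with one of these milestone epochs.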
What causes sudden drops in training/test errors when training a neural network?
If you refer to the ResNet paper (Deep Residual Learning for Image Recognition), it reads as follows: "The learning rate starts from 0.1 and is divided by 10 when the error plateaus". Hence, the reason for the drop is the update in the learning rate.
Suggestions for improving a probability and statistics cheat sheet
Tom Short's R Reference Card is excellent.
Suggestions for improving a probability and statistics cheat sheet
My favorite is the R Inferno by Patrick Burns.
Are random variables correlated if and only if their ranks are correlated?
Neither correlation being zero necessarily tells you much about the other, since they 'weight' the data - especially extreme data - quite differently. I am just going to play with samples, but similar examples could be constructed with bivariate distributions / copulas.

1. Spearman correlation 0 doesn't imply Pearson correlation 0:

As mentioned in the question, there are examples in the comments, but the basic structure is "construct a case where Spearman correlation is 0, then take an extreme point and make it more extreme without changing the Spearman correlation". The examples in comments cover that very well, but I am just going to play with a more 'random' example here. So consider this data (in R), which by construction has both Spearman and Pearson correlation 0:

x=c(0.660527211673069, 0.853446087136149, -0.00673848667511427, -0.730570343152498,
    0.0519171047989013, 0.00190761493801791, -0.72628058443299, 2.4453231076856,
    -0.918072410495674, -0.364060229489348, -0.520696233492491, 0.659907250608776)
y=c(-0.0214697990371976, 0.255615059485107, 1.10561181413232, 0.572216886959267,
    -0.929089680725018, 0.530329993414123, -0.219422799586819, -0.425186120279194,
    -0.848952532832652, 0.859700836483046, -0.00836246690850083, 1.43806947831794)
cor(x,y);cor(x,y,method="sp")
[1] 1.523681e-18
[1] 0

Now add 1000 to y[12] and subtract 0.6 from x[9]; the Spearman correlation is unchanged but the Pearson correlation is now 0.1841:

ya=y
ya[12]=ya[12]+1000
xa=x
xa[9]=xa[9]-.6
cor(xa,ya);cor(xa,ya,method="sp")
[1] 0.1841168
[1] 0

(If you want strong significance on that Pearson correlation, just replicate the entire sample several times.)

2. Pearson correlation 0 doesn't imply Spearman correlation 0:

Here are two examples with zero Pearson correlation but nonzero Spearman correlation (and again, if you want strong significance on these Spearman correlations, just replicate the entire sample several times).

Example 1:

x1=c(rep(-3.4566679074320789866,20),-2:5)
y1=x1*x1
cor(x1,y1);cor(x1,y1,method="spe")
[1] -8.007297e-17
[1] -0.3512699

Example 2:

k=16.881943016134132
x2=c(-9:9,-k,k)
y2=c(-9:9,k,-k)
cor(x2,y2);cor(x2,y2,method="spe")
[1] -9.154471e-17
[1] 0.4805195

In this last example, the Spearman correlation can be made stronger by adding more points on y=x while making the two points at the top left and bottom right more extreme to maintain the Pearson correlation at 0.
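As a cross-check outside R, Example 2 can be reproduced in plain Python; this is a sketch, and the classic $1 - 6\sum d_i^2 / (n(n^2-1))$ rank formula is valid here only because there are no ties:

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # rank-based correlation; the d^2 formula assumes no ties
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

k = 16.881943016134132  # approximately sqrt(285), chosen so the Pearson correlation vanishes
x2 = list(range(-9, 10)) + [-k, k]
y2 = list(range(-9, 10)) + [k, -k]
print(abs(pearson(x2, y2)) < 1e-12)  # True
print(round(spearman(x2, y2), 7))    # 0.4805195
```

The value of k works because $\sum_{i=-9}^{9} i^2 = 570 = 2k^2$, so the cross-product term cancels exactly.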
Logistic regression for time series
There are two methods to consider:

1. Only use the last $\mathrm{N}$ input samples. Assuming your input signal is of dimension $\mathrm{D}$, you then have $\mathrm{N} \times \mathrm{D}$ input values per ground-truth label. This way you can train using any classifier you like, including logistic regression; each output is considered independent of all other outputs.

2. Use the last $\mathrm{N}$ input samples and the last $\mathrm{N}$ outputs you have generated. The problem is then similar to Viterbi decoding. You could generate a non-binary score based on the input samples and combine the scores of multiple samples using a Viterbi decoder. This is better than method 1 if you know something about the temporal relation between the outputs.
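A sketch of the windowing in method 1, with illustrative names and data; each row of X holds the last N input samples, paired with that time step's label:

```python
def make_windows(signal, labels, n):
    """Turn a time series into (last-n-samples, label) pairs, one per
    time step that has a full history of n samples."""
    X, y = [], []
    for t in range(n - 1, len(signal)):
        X.append(signal[t - n + 1 : t + 1])  # the last n input samples
        y.append(labels[t])
    return X, y

signal = [0.1, 0.5, 0.3, 0.9, 0.2, 0.7]
labels = [0, 0, 1, 1, 0, 1]
X, y = make_windows(signal, labels, n=3)
print(X[0], y[0])  # [0.1, 0.5, 0.3] 1
print(len(X))      # 4
```

The resulting X and y can be fed to any standard classifier, logistic regression included.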
What is the difference between PCA and asymptotic PCA?
There is absolutely no difference. There is absolutely no difference between standard PCA and what C&K suggested and called "asymptotic PCA". It is quite ridiculous to give it a separate name.

Here is a short explanation of PCA. If centered data with samples in rows are stored in a data matrix $\mathbf X$, then PCA looks for eigenvectors of the covariance matrix $\frac{1}{N}\mathbf X^\top \mathbf X$, and projects the data on these eigenvectors to obtain principal components. Equivalently, one can consider the Gram matrix, $\frac{1}{N}\mathbf X \mathbf X^\top$. It is easy to see that it has exactly the same eigenvalues, and its eigenvectors are scaled PCs. (This is convenient when the number of samples is less than the number of features.)

It seems to me that what C&K suggested is to compute eigenvectors of the Gram matrix in order to compute principal components. Well, wow. This is not "equivalent" to PCA; it is PCA.

To add to the confusion, the name "asymptotic PCA" seems to refer to its relation to factor analysis (FA), not to PCA! The original C&K papers are behind a paywall, so here is a quote from Tsay, Analysis of Financial Time Series, available on Google Books:

Connor and Korajczyk (1988) showed that as $k$ [number of features] $\to \infty$ eigenvalue-eigenvector analysis of [the Gram matrix] is equivalent to the traditional statistical factor analysis.

What this really means is that when $k \to \infty$, PCA gives the same solution as FA. This is an easy-to-understand fact about PCA and FA, and it has nothing to do with whatever C&K suggested. I discussed it in the following threads:

Is there any good reason to use PCA instead of EFA? Also, can PCA be a substitute for factor analysis?
Under which conditions do PCA and FA yield similar results?

So the bottom line is: C&K decided to coin the term "asymptotic PCA" for standard PCA (which could also be called "asymptotic FA"). I would go so far as to recommend never using this term.
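The eigenvalue claim is easy to verify numerically. Below is a minimal pure-Python sketch (random centered data, power iteration for the dominant eigenvalue only) showing that $\mathbf X^\top \mathbf X$ and $\mathbf X \mathbf X^\top$ share their leading eigenvalue; the $\frac{1}{N}$ factor is dropped since it scales both sides equally:

```python
import random

random.seed(0)
N, D = 6, 4  # samples, features
X = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N)]
# center each column, as PCA requires
means = [sum(row[j] for row in X) / N for j in range(D)]
X = [[row[j] - means[j] for j in range(D)] for row in X]

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def top_eigenvalue(M, iters=1000):
    # power iteration: dominant eigenvalue of a symmetric PSD matrix
    v = [1.0] * len(M)
    for _ in range(iters):
        w = [sum(m * x for m, x in zip(row, v)) for row in M]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    Mv = [sum(m * x for m, x in zip(row, v)) for row in M]
    return sum(a * b for a, b in zip(v, Mv))  # Rayleigh quotient

cov = matmul(transpose(X), X)   # D x D, proportional to the covariance matrix
gram = matmul(X, transpose(X))  # N x N, the Gram matrix

lam_cov, lam_gram = top_eigenvalue(cov), top_eigenvalue(gram)
print(abs(lam_cov - lam_gram) < 1e-9)  # True: same leading eigenvalue
```

The same identity extends to all nonzero eigenvalues, which is exactly why working with the $N \times N$ Gram matrix is just PCA done in the other dimension.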
What is the difference between PCA and asymptotic PCA?
Typically APCA gets used when there are lots of series but very few samples. I wouldn't describe APCA as better or worse than PCA, because of the equivalence you noted. They do, however, differ in when the tools are applicable. That is the insight of the paper: you can flip the dimension if it's more convenient! So in the application you mentioned, there are a lot of assets so you would need a long time series to compute a covariance matrix, but now you can use APCA. That said, I don't think APCA gets applied very often because you could try to reduce the dimensionality using other techniques (like factor analysis).