8,801
What's the difference between a probability and a proportion?
I don't know that there is a sharp difference, but note that probabilities are not percentages: they range from 0 to 1 (multiply a probability by 100 and you get a percentage). If your question were about the difference between a probability and a percentage, that would be my answer, but that is not your question. The definition of probability assumes an infinite number of sampling experiments, so we can never truly obtain a probability, because we can never truly conduct an infinite number of sampling experiments; a proportion is what we actually observe in a finite sample.
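A quick stdlib-only sketch of the distinction being made here (function name and numbers are mine, purely illustrative): the probability is a fixed parameter of the process, while the proportion is a finite-sample estimate that converges toward it but never equals it by definition.

```python
import random

def sample_proportion(p, n, seed=None):
    """Observed proportion of successes in n Bernoulli(p) trials."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

# The probability is fixed at 0.3; observed proportions only estimate it,
# getting closer as n grows but remaining sample statistics, not parameters.
for n in (10, 1000, 100000):
    print(n, sample_proportion(0.3, n, seed=0))
```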
8,802
What is the proper name for a "river plot" visualisation [duplicate]
It is a map, and so cartographers would likely refer to it as a thematic map (as opposed to a topographical map). The fact that many statistical diagrams have unique names (e.g. a bar chart, a scatterplot, a dotplot), as opposed to just describing their contents, can sometimes be a hindrance: not everything is named (as is the case here), and the same name can refer to different types of displays (dotplot is a good example). In The Grammar of Graphics, Wilkinson describes a graph as geometric elements displayed in a particular coordinate system. Here he refers to Napoleon's March as a path element whose width represents the number of troops. In this example the path is drawn in a Cartesian coordinate system whose points represent actual locations in Europe. The points are connected as a representation of the journey Napoleon and his army took, although it likely does not exactly trace the journey (nor does the wider element at the start mean the army took up more space on the road!). There are many different software programs capable of drawing this type of diagram. Michael Friendly has a whole page of examples. Below is a slightly amended example using the ggplot2 package in R (as you requested an example in R), although it could certainly be replicated in base graphics.
mydir <- "your directory here"
setwd(mydir)
library(ggplot2)

# Data files are from the Friendly link:
# http://www.datavis.ca/gallery/minard/ggplot2/ggplot2-minard-gallery.zip
troops <- read.table("troops.txt", header = TRUE)
cities <- read.table("cities.txt", header = TRUE)
temps <- read.table("temps.txt", header = TRUE)
temps$date <- as.Date(strptime(temps$date, "%d%b%Y"))

xlim <- scale_x_continuous(limits = c(24, 39))
p <- ggplot(cities, aes(x = long, y = lat)) +
  geom_path(aes(size = survivors, colour = direction, group = group),
            data = troops, linejoin = "round", lineend = "round") +
  geom_point() +
  geom_text(aes(label = city), hjust = 0, vjust = 1, size = 4) +
  scale_size(range = c(1, 10)) +
  scale_colour_manual(values = c("grey50", "red")) +
  xlim +
  coord_fixed(ratio = 1)
p
ggsave(file = "march.png", width = 16, height = 4)

Here are a few of the things that make this different from the original:
- I did not display the temperature graph at the bottom of the plot. In ggplot2 you can make it as a separate graph, but you cannot draw lines across the separate graph windows.
- Minard's original graph shows the path diminishing in steps between cities. This graph does not interpolate the losses like that, and shows abrupt changes from city to city. (Troop sizes are taken from a diary of a physician who traveled with the army, I believe.)
- This graph shows the exact locations of the contemporary cities; Minard tended to bend space slightly to make the graph nicer. A more blatant example is the location of England in Minard's map of migration flows.
8,803
What is the proper name for a "river plot" visualisation [duplicate]
I have found it. What I was looking for is called a "Sankey diagram". Although there seems to be a tutorial on generating these graphs using rCharts, apparently there is no R-only package for this type of graph on CRAN yet.
8,804
What is the proper name for a "river plot" visualisation [duplicate]
I don't think so. It includes so many elements that I doubt it lends itself to a single canonical name. That said, you could look for ribbon plot, parallel coordinates plot, and (thanks to the comment above from user603) flow map (and searching for flow maps certainly seems the way to proceed). A web search for "replicate Charles Minard's visualization" led to these two possibly useful links: 1, 2.
8,805
Cross-validation including training, validation, and testing. Why do we need three subsets?
The training set is used to choose the optimum parameters for a given model. Note that evaluating some given, fixed set of parameters using the training set should give you an unbiased estimate of your cost function - it is the act of choosing the parameters which optimise the estimate of your cost function based on the training set that biases the estimate they provide. The parameters chosen are those which perform best on the training set; hence, the apparent performance of those parameters, as evaluated on the training set, will be overly optimistic.

Having trained using the training set, the validation set is used to choose the best model. Again, note that evaluating any given, fixed model using the validation set should give you a representative estimate of the cost function - it is the act of choosing the model which performs best on the validation set that biases the estimate it provides. The model chosen is the one which performs best on the validation set; hence, the apparent performance of that model, as evaluated on the validation set, will be overly optimistic.

Having trained each model using the training set, and chosen the best model using the validation set, the test set tells you how good your final choice of model is. It gives you an unbiased estimate of the actual performance you will get at runtime, which is important to know for a lot of reasons. You can't use the training set for this, because the parameters are biased towards it. And you can't use the validation set for this, because the model itself is biased towards it. Hence the need for a third set.
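The three-way split described above can be sketched in a few lines (the function name and the 60/20/20 fractions are illustrative choices, not a recommendation from the answer):

```python
import random

def three_way_split(data, train_frac=0.6, val_frac=0.2, seed=0):
    """Shuffle and partition data into train / validation / test subsets."""
    rng = random.Random(seed)
    data = data[:]            # copy so the caller's list is untouched
    rng.shuffle(data)
    n = len(data)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]
    return train, val, test

train, val, test = three_way_split(list(range(100)))
# train: fit parameters; val: pick among candidate models;
# test: used exactly once, for the final unbiased performance estimate.
```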
8,806
Cross-validation including training, validation, and testing. Why do we need three subsets?
"If I have already found the minimum of the cost function on the validation subset, why would I need to test it again on the test subset?"

Because of random error: usually you only have a finite number of cases. Optimizing the validation (inner test) performance means that you may be overfitting to that inner test set. The inner test set contributes to the estimation of the final model and is thus not independent of the model. This means that you need another (outer) test set that is independent of the whole modeling procedure (including all optimization and data-driven pre-processing or model selection steps) if you want to estimate the generalization properties.

I recommend that you run a simulation and compare the three different error estimates you can have:
- resubstitution (prediction of the training set): measures goodness of fit
- inner test (in your nomenclature: validation) set: the quality the optimizer thinks is achieved
- outer test set: generalization error, independent of model training

In a simulation you can also easily compare them to a proper, large, independently generated test set. If the set-up is correct, the outer test should be unbiased (with respect to the surrogate model it evaluates, not with respect to a "final" model built on the whole data set). The inner test is usually optimistically biased, and resubstitution even more so. In my field, the inner test can easily underestimate the generalization error by a factor of 2 - 5 (much more for aggressive optimization schemes).

Note: the nomenclature of the sets is not universal. In my field (analytical chemistry), validation would usually mean the proof of the performance of the final procedure - thus more what your "test" set does than what your "validation" set does. I therefore prefer to speak of the inner and outer test sets, or of the optimization test set (= inner test set); "validation set" would then mean the outer test set.
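A toy version of the recommended simulation, sketched under a pure-noise assumption of my own (every candidate "model" predicts coin flips, so its true accuracy is 0.5): picking the best of many candidates on the inner test set produces an optimistically biased inner estimate, while an independent outer test set stays near the truth.

```python
import random

def selection_bias_demo(n=300, n_models=25, seed=7):
    """Select among pure-noise classifiers on an inner test set and
    compare the winner's inner vs outer accuracy."""
    rng = random.Random(seed)
    inner_y = [rng.randint(0, 1) for _ in range(n)]
    outer_y = [rng.randint(0, 1) for _ in range(n)]
    best_inner, best_outer_preds = -1.0, None
    for _ in range(n_models):
        # Each candidate just predicts coin flips: true accuracy is 0.5.
        preds_inner = [rng.randint(0, 1) for _ in range(n)]
        preds_outer = [rng.randint(0, 1) for _ in range(n)]
        acc = sum(p == y for p, y in zip(preds_inner, inner_y)) / n
        if acc > best_inner:
            best_inner = acc
            best_outer_preds = preds_outer
    outer_acc = sum(p == y for p, y in zip(best_outer_preds, outer_y)) / n
    return best_inner, outer_acc

inner_acc, outer_acc = selection_bias_demo()
# The winner's inner accuracy is optimistically above 0.5,
# while the independent outer estimate stays near the true 0.5.
print(inner_acc, outer_acc)
```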
8,807
Cross-validation including training, validation, and testing. Why do we need three subsets?
While training the model one must select meta-parameters for the model (for example, a regularization parameter), or even choose from several models. In this case the validation subset is used to choose the parameters, while the test subset is used for the final estimate of predictive performance.
8,808
Why is softmax function used to calculate probabilities although we can divide each value by the sum of the vector?
The function you propose has a singularity whenever the sum of the elements is zero. Suppose your vector is $[-1, \frac{1}{3}, \frac{2}{3}]$. This vector has a sum of 0, so division is not defined. The function is not differentiable here. Additionally, if one or more of the elements of the vector is negative but the sum is nonzero, your result is not a probability. Suppose your vector is $[-1, 0, 2]$. This has a sum of 1, so applying your function results in $[-1, 0, 2]$, which is not a probability vector because it has negative elements, and elements exceeding 1. Taking a wider view, we can motivate the specific form of the softmax function from the perspective of extending binary logistic regression to the case of three or more categorical outcomes. Doing things like taking absolute values or squares, as suggested in comments, means that $-x$ and $x$ have the same predicted probability; this means the model is not identified. By contrast, $\exp(x)$ is monotonic and positive for all real $x$, so the softmax result is (1) a probability vector and (2) the multinomial logistic model is identified.
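To make the two failure modes concrete, here is a small stdlib-only sketch contrasting the proposed division with softmax, using the answer's own example vectors (the max-subtraction inside `softmax` is a standard numerical-stability detail, not something the argument requires):

```python
import math

def naive_normalize(z):
    """Divide each value by the sum: breaks for zero sums and negative entries."""
    s = sum(z)
    return [x / s for x in z]

def softmax(z):
    m = max(z)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in z]
    s = sum(exps)
    return [e / s for e in exps]

# The answer's second example: [-1, 0, 2] sums to 1, so naive division "works"
# but returns negative entries and an entry above 1 - not a probability vector.
print(naive_normalize([-1, 0, 2]))
# Softmax maps the same vector to strictly positive values summing to 1.
print(softmax([-1, 0, 2]))
# The first example, [-1, 1/3, 2/3], sums to (numerically near) zero,
# where the naive function is undefined or explodes.
```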
8,809
Why is softmax function used to calculate probabilities although we can divide each value by the sum of the vector?
Softmax has two components:
1. Transform each component x to e^x. This allows the neural network to work with logarithmic probabilities instead of ordinary probabilities, turning the common operation of multiplying probabilities into addition, which is far more natural for the linear-algebra-based structure of neural networks.
2. Normalize the components so they sum to 1, since that is the total probability we need.
One important consequence of this is that Bayes' theorem is very natural to such a network, since it is just multiplication of probabilities normalized by the denominator. The trivial case of a single-layer network with softmax activation is equivalent to logistic regression. The special case of a two-component softmax is equivalent to sigmoid activation, which is thus popular when there are only two classes. In multi-class classification, softmax is used if the classes are mutually exclusive, and component-wise sigmoid is used if they are independent.
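The claimed equivalence of the two-component softmax and the sigmoid is easy to check numerically: softmax over the logits [x, 0] gives e^x / (e^x + e^0) = 1 / (1 + e^-x), which is exactly the sigmoid of x. A small sketch:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax2(a, b):
    """First component of softmax over the two logits [a, b]."""
    ea, eb = math.exp(a), math.exp(b)
    return ea / (ea + eb)

# softmax([x, 0]) reduces to sigmoid(x) for any x:
for x in (-3.0, -0.5, 0.0, 1.7, 4.0):
    assert abs(softmax2(x, 0.0) - sigmoid(x)) < 1e-12
print("two-component softmax matches sigmoid")
```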
8,810
Why is softmax function used to calculate probabilities although we can divide each value by the sum of the vector?
In addition to the previous answers, the softmax function allows for an additional parameter $\beta$, often expressed as the temperature $t=1/\beta$ from statistical mechanics, which modulates how concentrated the output probability distribution is around the positions with larger input values versus smaller ones. $$ \sigma(\mathbf{z})_i = \frac{e^{\beta z_i}}{\sum_{j=1}^K e^{\beta z_j}} \text{ or } \sigma(\mathbf{z})_i = \frac{e^{-\beta z_i}}{\sum_{j=1}^K e^{-\beta z_j}} \text{ for } i = 1,\dotsc , K $$ With this formulation it is also difficult to get extremely unbalanced probabilities, e.g. $[1,0,0,\dotsc,0]$, and the system is allowed a bit of uncertainty in its estimation. To obtain these extreme probability values, very low temperatures or very high inputs are necessary. For example, in a decision system one may assume a temperature that decreases with the number of samples, avoiding high certainty with very little data. Also, softmax does not consider only the relative values of the inputs but also their absolute values. This may be important when each input is generated by aggregating data from multiple sources: overall low values in each dimension may intuitively mean that there is not much information about the situation, so the differences between the output probabilities should be small, while inputs that are all quite high may mean that more information has been aggregated over time and there is more certainty. If the absolute values are higher, then for the same proportions among the inputs softmax will generate larger differences in the output probabilities. Lower input values may arise, for example, when the input is generated by a neural network that had fewer samples similar to the current input, or samples with contrasting outputs.
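A small sketch of the temperature effect (the input vector is illustrative): a low temperature concentrates the distribution on the largest input, while a high temperature flattens it toward uniform.

```python
import math

def softmax_temp(z, t=1.0):
    """Softmax with temperature t = 1/beta: small t sharpens, large t flattens."""
    m = max(z)                              # shift by max for numerical stability
    exps = [math.exp((x - m) / t) for x in z]
    s = sum(exps)
    return [e / s for e in exps]

z = [2.0, 1.0, 0.5]
print(softmax_temp(z, t=1.0))   # moderate concentration on the largest input
print(softmax_temp(z, t=0.1))   # nearly one-hot
print(softmax_temp(z, t=10.0))  # close to uniform
```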
8,811
Is it true that Bayesian methods don't overfit?
No, it is not true. Bayesian methods will certainly overfit the data. There are a couple of things that make Bayesian methods more robust against overfitting and you can make them more fragile as well. The combinatoric nature of Bayesian hypotheses, rather than binary hypotheses allows for multiple comparisons when someone lacks the "true" model for null hypothesis methods. A Bayesian posterior effectively penalizes an increase in model structure such as adding variables while rewarding improvements in fit. The penalties and gains are not optimizations as would be the case in non-Bayesian methods, but shifts in probabilities from new information. While this generally gives a more robust methodology, there is an important constraint and that is using proper prior distributions. While there is a tendency to want to mimic Frequentist methods by using flat priors, this does not assure a proper solution. There are articles on overfitting in Bayesian methods and it appears to me that the sin seems to be in trying to be "fair" to non-Bayesian methods by starting with strictly flat priors. The difficulty is that the prior is important in normalizing the likelihood. Bayesian models are intrinsically optimal models in Wald's admissibility sense of the word, but there is a hidden bogeyman in there. Wald is assuming the prior is your true prior and not some prior you are using so that editors won't ding you for putting too much information in it. They are not optimal in the same sense that Frequentist models are. Frequentist methods begin with the optimization of minimizing the variance while remaining unbiased. This is a costly optimization in that it discards information and is not intrinsically admissible in the Wald sense, though it frequently is admissible. So Frequentist models provide an optimal fit to the data, given unbiasedness. Bayesian models are neither unbiased nor optimal fits to the data. This is the trade you are making to minimize overfitting. 
Bayesian estimators are intrinsically biased estimators, unless special steps are taken to make them unbiased, that are usually a worse fit to the data. Their virtue is that they never use less information than an alternative method to find the "true model" and this additional information makes Bayesian estimators never more risky than alternative methods, particularly when working out of sample. That said, there will always exist a sample that could have been randomly drawn that would systematically "deceive" the Bayesian method. As to the second part of your question, if you were to analyze a single sample, the posterior would be forever altered in all its parts and would not revert to the prior unless there was a second sample that exactly cancelled out all the information in the first sample. At least theoretically this is true. In practice, if the prior is sufficiently informative and the observation sufficiently uninformative, then the impact could be so small that a computer could not measure the differences because of the limitation on the number of significant digits. It is possible for an effect to be too small for a computer to process a change in the posterior. So the answer is "yes" you can overfit a sample using a Bayesian method, particularly if you have a small sample size and improper priors. The second answer is "no" Bayes theorem never forgets the impact of prior data, though the effect could be so small you miss it computationally.
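To make the small-sample point concrete, here is a toy beta-binomial sketch of my own (the numbers are invented, not from the answer above): with a flat prior the posterior mean chases an unlucky sample, while a moderately informative prior shrinks it back toward the prior mean.

```python
# Toy beta-binomial illustration: a flat Beta(1, 1) prior lets the
# posterior chase a small, unlucky sample, while an informative
# Beta(10, 10) prior shrinks the estimate toward 0.5.

def posterior_mean(successes, trials, a, b):
    """Posterior mean of a Beta(a, b) prior after a binomial sample."""
    return (a + successes) / (a + b + trials)

# An unlucky small sample: 5 heads in 5 tosses of a fair coin.
flat = posterior_mean(5, 5, a=1, b=1)           # 6/7, pulled hard by the data
informative = posterior_mean(5, 5, a=10, b=10)  # 0.6, a modest shift from 0.5

print(flat, informative)
```

The flat prior drives the estimate most of the way to the sample proportion; the informative prior pays a worse fit to this sample in exchange for lower out-of-sample risk, which is the trade-off described above.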
8,812
Is it true that Bayesian methods don't overfit?
Something to be aware of is that like practically everywhere else, a significant problem in Bayesian methods can be model misspecification. This is an obvious point, but I thought I'd still share a story. A vignette from back in undergrad... A classic application of Bayesian particle filtering is to track the location of a robot as it moves around a room. Movement expands uncertainty while sensor readings reduce uncertainty. I remember coding up some routines to do this. I wrote out a sensible, theoretically motivated model for the likelihood of observing various sonar readings given the true values. Everything was precisely derived and coded beautifully. Then I go to test it... What happened? Total failure! Why? My particle filter rapidly thought that the sensor readings had eliminated almost all uncertainty. My point cloud collapsed to a point, but my robot wasn't necessarily at that point! Basically, my likelihood function was bad; my sensor readings weren't as informative as I thought they were. I was overfitting. A solution? I mixed in a ton more Gaussian noise (in a rather ad-hoc fashion), the point cloud ceased to collapse, and then the filtering worked rather beautifully. Moral? As Box famously said, "all models are wrong, but some are useful." Almost certainly, you won't have the true likelihood function, and if it's sufficiently off, your Bayesian method may go horribly awry and overfit. Adding a prior doesn't magically solve problems stemming from assuming observations are IID when they aren't, assuming the likelihood has more curvature than it does etc...
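A deterministic 1-D sketch of this failure mode (the numbers are invented, not the original robot code): weight evenly spaced particles with a Gaussian sensor likelihood, and compare an overconfident noise scale with an inflated one. The effective sample size collapsing toward 1 is exactly the "point cloud collapsed to a point" symptom.

```python
import math

# Particles spread over a 10 m room, one sonar reading, Gaussian likelihood.
# An overconfident sigma piles all the weight on a handful of particles;
# inflating the noise keeps the cloud alive.

def normalized_weights(particles, reading, sigma):
    """Normalized Gaussian likelihood weights for one sensor reading."""
    w = [math.exp(-0.5 * ((reading - p) / sigma) ** 2) for p in particles]
    total = sum(w)
    return [x / total for x in w]

def effective_sample_size(w):
    """1 / sum(w_i^2): near len(w) is healthy, near 1 means collapse."""
    return 1.0 / sum(x * x for x in w)

particles = [i * 0.01 for i in range(1000)]  # evenly spaced over [0, 10)
reading = 4.2                                # sonar says the robot is near 4.2 m

collapsed = effective_sample_size(normalized_weights(particles, reading, sigma=0.01))
healthy = effective_sample_size(normalized_weights(particles, reading, sigma=1.0))
print(collapsed, healthy)
```

The ad-hoc fix in the story corresponds to moving from the first sigma to the second: the likelihood claims less than it knows, and the filter stops overfitting to single readings.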
8,813
How to increase longer term reproducibility of research (particularly using R and Sweave)
At some level, this becomes impossible. Consider the case of the famous Pentium floating point bug: you not only need to conserve your models, your data, your parameters, your packages, all external packages, the host system or language (say, R) as well as the OS ... plus potentially the hardware it all ran on. Now consider that some results may be simulation based and required a particular cluster of machines... That's just a bit much to be practical. With that said, I think the more pragmatic solution of versioning your code (and maybe also your data) in revision control, storing versions of all relevant software, and making it possible to reproduce the results by running a single top-level script may be a "good enough" compromise. Your mileage may vary. This also differs across disciplines or industry. But remember the old saw about the impossibility of foolproof systems: you merely create smarter fools.
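One possible shape for that "single top-level script", sketched here in Python with invented file names: the driver records the environment it ran in alongside the results, then would launch the analysis itself.

```python
import platform
import sys

# Hypothetical top-level driver: log the environment next to the results
# so a future reader at least knows what the run depended on.

def log_environment(path="environment.txt"):
    """Write the platform and interpreter version to a flat file."""
    with open(path, "w") as f:
        f.write(platform.platform() + "\n")
        f.write("python " + sys.version.split()[0] + "\n")
    return path

log_environment()
# For an R/Sweave project the analogous step is capturing sessionInfo()
# output, then sourcing the top-level analysis script.
```

This does not freeze the environment the way a VM would, but it records enough that a later reader can tell when the environment has drifted.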
8,814
How to increase longer term reproducibility of research (particularly using R and Sweave)
The first step in reproducibility is making sure the data are in a format that is easy for future researchers to read. Flat files are the clear choice here (Fairbairn in press). To make the code useful over the long term, perhaps the best thing to do is write clear documentation that explains both what the code does and also how it works, so that if your tool chain disappears, your analysis can be reimplemented in some future system. Fairbairn (in press) The advent of mandatory data archiving. Evolution DOI: 10.1111/j.1558-5646.2010.01182.x
8,815
How to increase longer term reproducibility of research (particularly using R and Sweave)
One strategy involves using the cacher package. Peng RD, Eckel SP (2009). "Distributed reproducible research using cached computations," IEEE Computing in Science and Engineering, 11 (1), 28–34. (PDF online) also see more articles on Roger Peng's website Further discussion and examples can be found in the book: Statistical Methods for Environmental Epidemiology with R However, I don't have first hand experience of its effectiveness in ensuring ongoing reproducibility.
8,816
How to increase longer term reproducibility of research (particularly using R and Sweave)
If you are interested in the virtual machine route, I think it would be doable via a small Linux distribution with the specific version of R and packages installed. Data and scripts are included, and the whole thing is packaged as a VirtualBox image. This does not get around hardware problems mentioned earlier, such as the Intel CPU bug.
8,817
How to increase longer term reproducibility of research (particularly using R and Sweave)
I would recommend two things in addition to the excellent answers already present: At key points in your code, dump out the current data as a flat file, suitably named and described in comments, thus highlighting, if one package has produced differing results, where the differences were introduced. These data files, as well as the original input and the resulting output, should be included in your 'reproducible research set'. Include some testing of the packages concerned within your code, for instance using something like testthat. The hard part is making small, reproducible tests that are likely to highlight any changes in what a package does that relate to your analysis. This would at least highlight to another person that there is some difference in the environments.
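Both suggestions can be sketched in a language-neutral way (testthat itself is an R package; the file names and the particular pinned behaviour below are made up):

```python
import csv

# Suggestion 1: dump intermediate data to a plain flat file at a key point.
def checkpoint(rows, path, header):
    """Write the current intermediate data as a CSV flat file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)

# Suggestion 2: a tiny, fast test pinned to behaviour the analysis relies
# on; if a dependency upgrade changes that behaviour, this fails loudly.
def smoke_test():
    assert round(2.675, 2) == 2.67  # float-representation quirk we depend on
    assert sorted([3, 1, 2]) == [1, 2, 3]

checkpoint([(1, 0.5), (2, 0.25)], "step1_weights.csv", ["id", "weight"])
smoke_test()
```

In an R workflow the checkpoint would be `write.csv` and the smoke test a testthat expectation, but the idea is the same: make environment differences visible as data differences.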
8,818
How to increase longer term reproducibility of research (particularly using R and Sweave)
Good suggestions, I've got plenty of things to look into now. Remember, one extremely important consideration is making sure that the work is "correct" in the first place. This is the role that tools like Sweave play, by increasing the chances that what you did, and what you said you did, are the same thing.
8,819
What does "normalization" mean and how to verify that a sample or a distribution is normalized?
Unfortunately, terms are used differently in different fields, by different people within the same field, etc., so I'm not sure how well this can be answered for you here. You should make sure you know the definition that your instructor / the textbook is using for "normalized". However, here are some common definitions: Centered: $$ X-{\rm mean} $$ Standardized: $$ \frac{X-\text{mean}}{\text{sd}} $$ Normalized: $$ \frac{X-\min(X)}{\max(X)-\min(X)} $$ Normalizing in this sense rescales your data to the unit interval. Standardizing turns your data into $z$-scores, as @Jeff notes. And centering just makes the mean of your data equal to $0$. It is worth recognizing here that all three of these are linear transformations; as such, they do not change the shape of your distribution. That is, sometimes people call the $z$-score transformation "normalizing" and believe, because of $z$-scores' association with the normal distribution, that this has made their data normally distributed. This is not so (as @Jeff also notes, and as you could tell by plotting your data before and after). Should you be interested, you could change the shape of your data using the Box-Cox family of transformations, for example. With respect to how you could verify these transformations, it depends on what exactly is meant by that. If they mean simply to check that the code ran properly, you could check means, SDs, minimums, and maximums.
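The three definitions above are easy to state as code; here is a minimal pure-Python sketch (written with loops rather than vectorized operations for clarity):

```python
# Centering, standardizing (sample sd, n - 1), and min-max normalizing.
# All three are linear transformations, so none changes the distribution's shape.

def center(xs):
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

def standardize(xs):
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    return [(x - m) / sd for x in xs]

def normalize(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(center(data))       # mean becomes 0
print(standardize(data))  # mean 0, sample sd 1 (z-scores)
print(normalize(data))    # rescaled onto [0, 1]
```

Checking the code ran properly amounts to exactly the checks named at the end of the answer: mean, sd, minimum, and maximum of the transformed data.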
8,820
What does "normalization" mean and how to verify that a sample or a distribution is normalized?
By using the formula you provided on each score in your sample, you are converting them all to z-scores. To verify that you computed all the z-scores correctly, find the new mean and standard deviation of your sample. If the mean is $0$ and the standard deviation is $1$, you've done everything correctly. The purpose of doing this is to put everything in units relative to the standard deviation of your sample. This may be useful for a variety of purposes, such as comparing two different data sets that were scored using different units (centimeters and inches, perhaps). It is important not to get this confused with asking whether a distribution is normal, i.e. whether it approximates a Gaussian distribution.
8,821
What does "normalization" mean and how to verify that a sample or a distribution is normalized?
After consulting the TA, what the question was asking was whether $$ \int_{-\infty}^{\infty}f(x)\,dx=1, $$ where $f(x)$ in this case is the density of the uniform$(a,b)$.
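For the uniform$(a,b)$ density, $f(x)=\frac{1}{b-a}$ on $[a,b]$ and $0$ elsewhere, so the check works out directly: $$ \int_{-\infty}^{\infty}f(x)\,dx=\int_{a}^{b}\frac{1}{b-a}\,dx=\frac{b-a}{b-a}=1. $$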
8,822
Test for finite variance?
No, this is not possible, because a finite sample of size $n$ cannot reliably distinguish between, say, a normal population and a normal population contaminated by a $1/N$ amount of a Cauchy distribution where $N \gg n$. (Of course the former has finite variance and the latter has infinite variance.) Thus any fully nonparametric test will have arbitrarily low power against such alternatives.
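A back-of-envelope calculation shows why (the particular $n$ and $N$ below are invented for illustration): with mixing weight $1/N$, the chance a sample of size $n$ contains no contaminated draw at all is $(1-1/N)^n$.

```python
# If the Cauchy contamination has weight 1/N with N >> n, a sample of
# size n will usually contain zero contaminated observations, so no test
# can tell the two populations apart from that sample.

def prob_no_contaminated_draw(n, N):
    """P(all n observations come from the normal component)."""
    return (1 - 1 / N) ** n

p = prob_no_contaminated_draw(n=100, N=10_000)
print(p)  # ~0.990: the infinite-variance component is invisible 99% of the time
```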
8,823
Test for finite variance?
You cannot be certain without knowing the distribution. But there are certain things you can do, such as looking at what might be called the "partial variance", i.e. if you have a sample of size $N$, you draw the variance estimated from the first $n$ terms, with $n$ running from 2 to $N$. With a finite population variance, you hope that the partial variance soon settles down close to the population variance. With an infinite population variance, you see jumps up in the partial variance followed by slow declines until the next very large value appears in the sample. This is an illustration with Normal and Cauchy random variables (and a log scale). This may not help if the shape of your distribution is such that a much larger sample size than you have is needed to identify it with sufficient confidence, i.e. where very large values are fairly (but not extremely) rare for a distribution with finite variance, or are extremely rare for a distribution with infinite variance. For a given distribution there will be sample sizes which are more likely than not to reveal its nature; conversely, for a given sample size, there are distributions which are more likely than not to disguise their natures for that size of sample.
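The partial-variance diagnostic is easy to sketch; here is a pure-Python version (the seed and sample size are arbitrary; a standard Cauchy draw is simulated as the ratio of two independent standard normals):

```python
import random

# Running sample variance of the first n observations, for a normal sample
# versus a Cauchy sample: the former settles near 1, the latter keeps jumping.

def partial_variances(xs):
    """Sample variance of xs[:n] for n = 2..len(xs), via running sums."""
    out, s, s2 = [], xs[0], xs[0] ** 2
    for n, x in enumerate(xs[1:], start=2):
        s += x
        s2 += x * x
        out.append((s2 - s * s / n) / (n - 1))
    return out

random.seed(42)
normal = [random.gauss(0, 1) for _ in range(10_000)]
cauchy = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(10_000)]

print(partial_variances(normal)[-1])  # settles near the true variance, 1
print(partial_variances(cauchy)[-1])  # dominated by the largest draws so far
```

Plotting the two sequences (on a log scale, as in the illustration described above) makes the contrast between settling and jumping obvious.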
8,824
Test for finite variance?
Here's another answer. Suppose you could parametrize the problem, something like this: $$ H_{0}:\ X \sim t(\mathtt{df}=3)\mathrm{\ versus\ } H_{1}:\ X \sim t(\mathtt{df}=1). $$ Then you could do an ordinary Neyman-Pearson likelihood ratio test of $H_{0}$ versus $H_{1}$. Note that $H_{1}$ is Cauchy (infinite variance) and $H_{0}$ is the usual Student's $t$ with 3 degrees of freedom (finite variance) which has PDF: $$ f(x|\nu) = \frac{\Gamma\left(\frac{\nu + 1}{2}\right)}{\sqrt{\nu\pi}\Gamma\left(\frac{\nu}{2}\right)}\left(1 + \frac{x^{2}}{\nu} \right)^{-\frac{\nu + 1}{2}}, $$ for $-\infty < x < \infty$. Given simple random sample data $x_{1},x_{2},\ldots,x_{n}$, the likelihood ratio test rejects $H_{0}$ when $$ \Lambda(\mathbf{x}) = \frac{\prod_{i=1}^{n}f(x_{i}|\nu = 1)}{\prod_{i=1}^{n}f(x_{i}|\nu = 3)} > k, $$ where $k \geq 0$ is chosen such that $$ P(\Lambda(\mathbf{X}) > k\,|\nu = 3) = \alpha. $$ It's a little bit of algebra to simplify $$ \Lambda(\mathbf{x}) = \left(\frac{\sqrt{3}}{2}\right)^{n}\prod_{i = 1}^{n}\frac{\left(1 + x_{i}^{2}/3 \right)^{2}}{1 + x_{i}^{2}}. $$ So, again, we get a simple random sample, calculate $\Lambda(\mathbf{x})$, and reject $H_{0}$ if $\Lambda(\mathbf{x})$ is too big. How big? That's the fun part! It's going to be hard (impossible?) to get a closed form for the critical value, but we could approximate it as close as we like, for sure. Here's one way to do it, with R. Suppose $\alpha = 0.05$, and for laughs, let's say $n = 13$. We generate a bunch of samples under $H_{0}$, calculate $\Lambda$ for each sample, and then find the 95th quantile.

set.seed(1)
x <- matrix(rt(1000000*13, df = 3), ncol = 13)
y <- apply(x, 1, function(z) prod((1 + z^2/3)^2)/prod(1 + z^2))
quantile(y, probs = 0.95)

This turns out (after some seconds) on my machine to be $\approx 12.8842$, which after being multiplied by $(\sqrt{3}/2)^{13}$ gives $k \approx 1.9859$. Surely there are other, better, ways to approximate this, but we're just playing around.
In summary, when the problem is parametrizable you can set up a hypothesis test just like you would in other problems, and it's pretty straightforward, except in this case for some tap dancing near the end. Note that we know from our theory the test above is a most powerful test of $H_{0}$ versus $H_{1}$ (at level $\alpha$), so it doesn't get any better than this (as measured by power). Disclaimers: this is a toy example. I do not have any real-world situation in which I was curious to know whether my data came from Cauchy as opposed to Student's t with 3 df. And the original question didn't say anything about parametrized problems, it seemed to be looking for more of a nonparametric approach, which I think was addressed well by the others. The purpose of this answer is for future readers who stumble across the title of the question and are looking for the classical dusty textbook approach. P.S. it might be fun to play a little more with the test for testing $H_{1}:\nu \leq 1$, or something else, but I haven't done that. My guess is that it'd get pretty ugly pretty fast. I also thought about testing different types of stable distributions, but again, it was just a thought.
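The same Monte Carlo approximation of the critical value can be sketched in Python/numpy (an illustrative port of the R snippet above, not the original code; the seed and replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, reps = 13, 0.05, 200_000

# Draw `reps` samples of size n under H0: Student's t with 3 df.
x = rng.standard_t(df=3, size=(reps, n))

# Likelihood ratio Lambda(x) without the constant (sqrt(3)/2)^n factor.
y = np.prod((1 + x**2 / 3) ** 2 / (1 + x**2), axis=1)

# Critical value k: the (1 - alpha) quantile, rescaled by the constant.
k = np.quantile(y, 1 - alpha) * (np.sqrt(3) / 2) ** n
print(k)  # close to the k ~ 1.99 found with R above
```

A different RNG and seed mean the estimate will not match the R value exactly, but it lands in the same neighborhood.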
8,825
Test for finite variance?
In order to test such a vague hypothesis, you need to average over all densities with finite variance and all densities with infinite variance. This is likely to be impossible; you basically need to be more specific. One more specific version of this is to have two hypotheses for a sample $D\equiv Y_{1},Y_{2},\dots,Y_{N}$:

$H_{0}:Y_{i}\sim Normal(\mu,\sigma)$
$H_{A}:Y_{i}\sim Cauchy(\nu,\tau)$

One hypothesis has finite variance, one has infinite variance. Just calculate the odds: $$\frac{P(H_{0}|D,I)}{P(H_{A}|D,I)}=\frac{P(H_{0}|I)}{P(H_{A}|I)}\frac{\int P(D,\mu,\sigma|H_{0},I)d\mu d\sigma}{\int P(D,\nu,\tau|H_{A},I)d\nu d\tau} $$ Where $\frac{P(H_{0}|I)}{P(H_{A}|I)}$ is the prior odds (usually 1), $$P(D,\mu,\sigma|H_{0},I)=P(\mu,\sigma|H_{0},I)P(D|\mu,\sigma,H_{0},I)$$ and $$P(D,\nu,\tau|H_{A},I)=P(\nu,\tau|H_{A},I)P(D|\nu,\tau,H_{A},I)$$

Now you normally wouldn't be able to use improper priors here, but because both densities are of the "location-scale" type, if you specify the standard non-informative prior with the same ranges $L_{1}<\mu,\nu<U_{1}$ and $L_{2}<\sigma,\tau<U_{2}$, then we get for the numerator integral: $$\frac{\left(2\pi\right)^{-\frac{N}{2}}}{(U_1-L_1)\log\left(\frac{U_2}{L_2}\right)}\int_{L_2}^{U_2}\sigma^{-(N+1)}\int_{L_1}^{U_1} \exp\left(-\frac{N\left[s^{2}+(\overline{Y}-\mu)^2\right]}{2\sigma^{2}}\right)d\mu d\sigma$$ where $s^2=N^{-1}\sum_{i=1}^{N}(Y_i-\overline{Y})^2$ and $\overline{Y}=N^{-1}\sum_{i=1}^{N}Y_i$ (note the plus sign: $\sum_i(Y_i-\mu)^2 = N[s^2+(\overline{Y}-\mu)^2]$).
And for the denominator integral: $$\frac{\pi^{-N}}{(U_1-L_1)\log\left(\frac{U_2}{L_2}\right)}\int_{L_2}^{U_2}\tau^{-(N+1)}\int_{L_1}^{U_1} \prod_{i=1}^{N}\left(1+\left[\frac{Y_{i}-\nu}{\tau}\right]^{2}\right)^{-1}d\nu d\tau$$

And now taking the ratio we find that the important parts of the normalising constants cancel and we get: $$\frac{P(D|H_{0},I)}{P(D|H_{A},I)}=\left(\frac{\pi}{2}\right)^{\frac{N}{2}}\frac{\int_{L_2}^{U_2}\sigma^{-(N+1)}\int_{L_1}^{U_1} \exp\left(-\frac{N\left[s^{2}+(\overline{Y}-\mu)^2\right]}{2\sigma^{2}}\right)d\mu d\sigma}{\int_{L_2}^{U_2}\tau^{-(N+1)}\int_{L_1}^{U_1} \prod_{i=1}^{N}\left(1+\left[\frac{Y_{i}-\nu}{\tau}\right]^{2}\right)^{-1}d\nu d\tau}$$

And all integrals are still proper in the limit, so we can get: $$\frac{P(D|H_{0},I)}{P(D|H_{A},I)}=\left(\frac{\pi}{2}\right)^{\frac{N}{2}}\frac{\int_{0}^{\infty}\sigma^{-(N+1)}\int_{-\infty}^{\infty} \exp\left(-\frac{N\left[s^{2}+(\overline{Y}-\mu)^2\right]}{2\sigma^{2}}\right)d\mu d\sigma}{\int_{0}^{\infty}\tau^{-(N+1)}\int_{-\infty}^{\infty} \prod_{i=1}^{N}\left(1+\left[\frac{Y_{i}-\nu}{\tau}\right]^{2}\right)^{-1}d\nu d\tau}$$

The denominator integral cannot be computed analytically, but the numerator can, and we get for the numerator: $$\int_{0}^{\infty}\sigma^{-(N+1)}\int_{-\infty}^{\infty} \exp\left(-\frac{N\left[s^{2}+(\overline{Y}-\mu)^2\right]}{2\sigma^{2}}\right)d\mu d\sigma=\sqrt{\frac{2\pi}{N}}\int_{0}^{\infty}\sigma^{-N} \exp\left(-\frac{Ns^{2}}{2\sigma^{2}}\right)d\sigma$$ (the Gaussian integral over $\mu$ contributes $\sigma\sqrt{2\pi/N}$). Now make the change of variables $\lambda=\sigma^{-2}\implies d\sigma = -\frac{1}{2}\lambda^{-\frac{3}{2}}d\lambda$ and you get a gamma integral: $$\frac{1}{2}\sqrt{\frac{2\pi}{N}}\int_{0}^{\infty}\lambda^{\frac{N-1}{2}-1} \exp\left(-\lambda\frac{Ns^{2}}{2}\right)d\lambda=\frac{1}{2}\sqrt{\frac{2\pi}{N}}\left(\frac{2}{Ns^{2}}\right)^{\frac{N-1}{2}}\Gamma\left(\frac{N-1}{2}\right)$$ And we get as a final analytic form for the odds, for numerical work:
$$\frac{P(H_{0}|D,I)}{P(H_{A}|D,I)}=\frac{P(H_{0}|I)}{P(H_{A}|I)}\times\frac{\frac{1}{2}\pi^{\frac{N+1}{2}}N^{-\frac{N}{2}}s^{-(N-1)}\Gamma\left(\frac{N-1}{2}\right)}{\int_{0}^{\infty}\tau^{-(N+1)}\int_{-\infty}^{\infty} \prod_{i=1}^{N}\left(1+\left[\frac{Y_{i}-\nu}{\tau}\right]^{2}\right)^{-1}d\nu d\tau}$$ So this can be thought of as a specific test of finite versus infinite variance. We could also put a t distribution into this framework to get another test (test the hypothesis that the degrees of freedom are greater than 2).
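As a numerical sketch (not part of the original answer), the odds can be evaluated for simulated normal data: the closed-form numerator is computed on the log scale (carrying a $\frac12$ factor from the $\sigma\to\lambda$ change of variables), and the Cauchy denominator integral is approximated by a brute-force grid. The grid bounds, sample size, and seed are all arbitrary assumptions here; for data actually drawn from a normal we would expect odds well above 1.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
N = 30
Y = rng.normal(size=N)                      # data generated under H0
s2 = np.mean((Y - Y.mean()) ** 2)           # s^2 as defined above

# Log of the closed-form numerator (prior odds taken as 1).
log_num = (np.log(0.5) + (N + 1) / 2 * np.log(np.pi)
           - (N / 2) * np.log(N) - ((N - 1) / 2) * np.log(s2)
           + gammaln((N - 1) / 2))

# Denominator: crude grid over (nu, tau), evaluated in log space to
# avoid underflow in the product of N Cauchy terms.
nu = np.linspace(-3.0, 3.0, 301)
tau = np.linspace(0.05, 5.0, 300)
NU, TAU = np.meshgrid(nu, tau)              # shapes (300, 301)
log_f = (-(N + 1) * np.log(TAU)
         - np.log1p(((Y[:, None, None] - NU) / TAU) ** 2).sum(axis=0))
m = log_f.max()
cell = (nu[1] - nu[0]) * (tau[1] - tau[0])  # Riemann-sum cell area
log_den = m + np.log(np.exp(log_f - m).sum() * cell)

odds = np.exp(log_num - log_den)
print(odds)  # substantially greater than 1 for this normal sample
```

The grid must be wide enough to cover the posterior mass of $(\nu,\tau)$; for standardized data the ranges above are generous.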
8,826
Test for finite variance?
The counterexample is not relevant to the question asked. You want to test the null hypothesis that a sample of i.i.d. random variables is drawn from a distribution having finite variance, at a given significance level. I recommend a good reference text like "Statistical Inference" by Casella and Berger to understand the use and the limits of hypothesis testing. Regarding hypothesis tests on finite variance, I don't have a reference handy, but the following paper addresses a similar, but stronger, version of the problem, i.e., whether the distribution tails follow a power law: Clauset, Shalizi, and Newman, "Power-law distributions in empirical data," SIAM Review 51 (2009): 661-703.
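As a rough illustration of the kind of tail-index estimation that paper deals with, here is the classic Hill estimator (a much simpler tool than the paper's full fitting procedure, used here only as a sketch; sample sizes, seed, and the choice of k are arbitrary). A tail-index estimate at or below 2 suggests infinite variance.

```python
import numpy as np

def hill(sample, k):
    """Hill estimator of the tail index from the k largest absolute values."""
    xs = np.sort(np.abs(sample))[::-1]      # descending order statistics
    return 1.0 / np.mean(np.log(xs[:k]) - np.log(xs[k]))

rng = np.random.default_rng(0)
heavy = rng.pareto(1.5, 100_000) + 1        # Pareto with true tail index 1.5 (< 2)
light = rng.normal(size=100_000)            # normal: no power-law tail

a_heavy = hill(heavy, k=2000)   # near 1.5 -> consistent with infinite variance
a_light = hill(light, k=2000)   # much larger -> no evidence of a heavy tail
```

The choice of k (how far into the tail to look) is the usual practical difficulty; the Clauset et al. paper is largely about making that choice in a principled way.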
8,827
Test for finite variance?
One approach that had been suggested to me was via the Central Limit Theorem. This is an old question, but I want to propose a way to use the CLT to test for heavy tails.

Let $X = \{X_1,\ldots,X_n\}$ be our sample. If the sample is an i.i.d. realization from a light-tailed distribution, then the CLT holds. It follows that if $Y = \{Y_1,\ldots,Y_n\}$ is a bootstrap resample from $X$, then the distribution of $$Z = \sqrt{n}\times\frac{mean(Y) - mean(X)}{sd(Y)}$$ is also close to the N(0,1) distribution function.

Now all we have to do is perform a large number of bootstraps and compare the empirical distribution function of the observed $Z$'s with the cdf of a N(0,1). A natural way to make this comparison is the Kolmogorov-Smirnov test.

The following pictures illustrate the main idea. In both pictures each colored line is constructed from an i.i.d. realization of 1000 observations from the particular distribution, followed by 200 bootstrap resamples of size 500 for the approximation of the $Z$ ecdf. The black continuous line is the N(0,1) cdf.
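A sketch of this procedure in Python (an illustrative port, with arbitrary seed and the same sizes as in the pictures above): bootstrap the $Z$ statistic and compare its empirical distribution to N(0,1) with Kolmogorov-Smirnov.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def boot_z(x, n_boot=200, m=500):
    """Bootstrap Z statistics, as defined above, from sample x."""
    zs = np.empty(n_boot)
    for b in range(n_boot):
        y = rng.choice(x, size=m, replace=True)
        zs[b] = np.sqrt(m) * (y.mean() - x.mean()) / y.std(ddof=1)
    return zs

# Light tails: the Z's should look N(0,1). Heavy tails (Cauchy): they shouldn't.
ks_light = stats.kstest(boot_z(rng.normal(size=1000)), 'norm')
ks_heavy = stats.kstest(boot_z(rng.standard_cauchy(size=1000)), 'norm')
```

For the Cauchy sample the KS distance is much larger and the test rejects normality of the $Z$'s, which is exactly the visual difference between the two pictures.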
8,828
Difference Between ANOVA and Kruskal-Wallis test
There are differences in the assumptions and the hypotheses that are tested. The ANOVA (and t-test) is explicitly a test of equality of means of values. The Kruskal-Wallis (and Mann-Whitney) can be seen technically as a comparison of the mean ranks. Hence, in terms of the original values, the Kruskal-Wallis is more general than a comparison of means: it tests whether a random observation from one group is equally likely to be above or below a random observation from another group. The real data quantity that underlies that comparison is neither the difference in means nor the difference in medians; (in the two-sample case) it is actually the median of all pairwise differences - the between-sample Hodges-Lehmann difference.

However, if you choose to make some restrictive assumptions, then Kruskal-Wallis can be seen as a test of equality of population means, as well as of quantiles (e.g. medians), and indeed a wide variety of other measures. That is, if you assume that the group distributions under the null hypothesis are the same, and that under the alternative the only change is a distributional shift (a so-called "location-shift alternative"), then it is also a test of equality of population means (and, simultaneously, of medians, lower quartiles, etc).

[If you do make that assumption, you can obtain estimates of and intervals for the relative shifts, just as you can with ANOVA. Well, it is also possible to obtain intervals without that assumption, but they're more difficult to interpret.]

If you look at the answer here, especially toward the end, it discusses the comparison between the t-test and the Wilcoxon-Mann-Whitney, which (when doing two-tailed tests at least) are the equivalent* of ANOVA and Kruskal-Wallis applied to a comparison of only two samples; it gives a little more detail, and much of that discussion carries over to the Kruskal-Wallis vs ANOVA comparison.
* (aside from a particular issue that arises with multigroup comparisons, where you can have non-transitive pairwise differences)

It's not completely clear what you mean by a practical difference. You use them in a generally similar way. When both sets of assumptions apply they usually tend to give fairly similar sorts of results, but they can certainly give fairly different p-values in some situations.

Edit: Here's an example of the similarity of inference even at small samples -- here's the joint acceptance region for the location shifts among three groups (the second and third each compared with the first) sampled from normal distributions (with small sample sizes) for a particular data set, at the 5% level: Numerous interesting features can be discerned -- the slightly larger acceptance region for the KW in this case, with its boundary consisting of vertical, horizontal and diagonal straight line segments (it is not hard to figure out why). The two regions tell us very similar things about the parameters of interest here.
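A quick illustration of the "similar inference under a location-shift alternative" point (a sketch not from the original answer; group sizes, shifts, and seed are arbitrary): on normal groups differing only in location, one-way ANOVA and Kruskal-Wallis give comparable verdicts.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three groups with the same shape and spread, shifted in location only.
g1 = rng.normal(0.0, 1.0, 30)
g2 = rng.normal(0.8, 1.0, 30)
g3 = rng.normal(1.6, 1.0, 30)

p_anova = stats.f_oneway(g1, g2, g3).pvalue
p_kw = stats.kruskal(g1, g2, g3).pvalue
# Both tests detect the shift; the exact p-values differ somewhat.
```

With heavy tails or strong skew the two p-values can diverge much more, which is where the choice between the tests starts to matter.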
8,829
Difference Between ANOVA and Kruskal-Wallis test
Yes, there is. ANOVA is a parametric approach while kruskal.test is a nonparametric approach, so kruskal.test does not need any distributional assumption. From a practical point of view, when your data are skewed, ANOVA would not be a good approach to use. Have a look at this question for example.
8,830
Difference Between ANOVA and Kruskal-Wallis test
As far as I know (but please correct me if I'm wrong, because I'm not sure), the Kruskal-Wallis test is constructed to detect a difference between two distributions having the same shape and the same dispersion, that is, one is obtained by translating the other by a difference $\Delta$. Let's call $(*)$ this assumption. The KW test tests the null hypothesis $H_0\colon\{\Delta=0\}$ vs $H_1\colon\{\Delta \neq 0\}$.

However, the KW test is "valid" without assumption $(*)$: its significance level (the probability of rejecting $H_0$ under $H_0$) is valid because $(*)$ is obviously fulfilled under $H_0\colon\{\text{the distributions are equal}\}$. But the KW test is "inefficient" if $(*)$ does not hold: it only intends to have good power to detect $\Delta \neq 0$, and then the test statistic is not appropriate to reflect the difference between the two distributions if there's no such $\Delta$.

Consider the following example. Two samples $x$ and $y$ of size $n=1000$ are generated from two quite different distributions having the same mean. Then KW fails to reject $H_0$.

set.seed(666)
n <- 1000
x <- rnorm(n)
y <- (2*rbinom(n,1,1/2)-1)*rnorm(n,3)
plot(density(x, from=min(y), to=max(y)))
lines(density(y), col="blue")

> kruskal.test(list(x,y))

        Kruskal-Wallis rank sum test

data:  list(x, y)
Kruskal-Wallis chi-squared = 2.482, df = 1, p-value = 0.1152

As I claimed in the beginning, I'm not sure about the precise construction of KW. Maybe my answer is more correct for another nonparametric test (Mann-Whitney?..), but the approach should be similar.
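The same phenomenon can be reproduced in Python (an illustrative port of the R example; a different RNG means the exact p-value differs): two samples with the same center but very different shapes, for which a rank test has essentially nothing to detect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(666)
n = 1000
x = rng.normal(size=n)
# Symmetric mixture of N(3,1) and N(-3,1): same center as x, very different shape.
y = rng.choice([-1.0, 1.0], size=n) * rng.normal(3.0, 1.0, size=n)

p_kw = stats.kruskal(x, y).pvalue                # typically non-significant
u = stats.mannwhitneyu(x, y).statistic
p_superiority = u / (n * n)                      # estimate of P(X > Y)
# p_superiority sits near 1/2: the rank tests see the samples as "equal".
```

Since both samples are symmetric about 0, $P(X>Y)=1/2$ exactly, so the rank-based statistics concentrate at their null values even though the distributions differ drastically.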
8,831
Difference Between ANOVA and Kruskal-Wallis test
Kruskal-Wallis is rank-based, rather than value-based. This can make a big difference if there are skewed distributions or extreme values.
8,832
Explanation of Spikes in training loss vs. iterations with Adam Optimizer
The spikes are an unavoidable consequence of mini-batch gradient descent in Adam (batch_size=32). Some mini-batches have 'by chance' unlucky data for the optimization, inducing those spikes you see in your cost function using Adam. If you try stochastic gradient descent (the same as using batch_size=1) you will see that there are even more spikes in the cost function. The same doesn't happen in (full) batch GD because it uses all the training data (i.e. the batch size is equal to the cardinality of your training set) in each optimization epoch. As in your first graphic the cost decreases smoothly and monotonically, it seems the title ((i) With SGD) is wrong and you are using (full) batch gradient descent instead of SGD. In his great Deep Learning course at Coursera, Andrew Ng explains this in great detail using the image below:
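The effect is easy to reproduce on a toy problem (a sketch using plain mini-batch gradient descent on linear regression rather than Adam or a real network; sizes, learning rate, and seed are arbitrary): full-batch updates decrease the full-data loss monotonically, while batch_size=1 updates produce frequent upward spikes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lr, steps = 256, 5, 0.05, 300
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def full_loss(w):
    r = X @ w - y
    return np.mean(r ** 2)

def train(batch_size):
    w = np.zeros(d)
    losses = []
    for _ in range(steps):
        idx = rng.choice(n, size=batch_size, replace=False)
        grad = 2.0 / batch_size * X[idx].T @ (X[idx] @ w - y[idx])
        w -= lr * grad
        losses.append(full_loss(w))      # always evaluate on the whole dataset
    return np.array(losses)

spikes_sgd = np.sum(np.diff(train(1)) > 1e-12)   # many upward jumps
spikes_full = np.sum(np.diff(train(n)) > 1e-12)  # none: monotone descent
```

The smaller the batch, the noisier the gradient estimate and the more (and larger) the spikes, exactly the pattern in the figure from the course.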
8,833
Explanation of Spikes in training loss vs. iterations with Adam Optimizer
I've spent an insane amount of time debugging exploding gradients and similar behaviour. The answer depends on the loss function, data, architecture, etc. There are hundreds of reasons; I'll name a few. (1) Loss-dependent: log-likelihood losses need to be clipped; if not, they may evaluate near log(0) for bad predictions/outliers in the dataset, causing exploding gradients. Most packages (torch, tensorflow, etc.) implement clipping by default in their losses. (2) Outliers in the dataset. (3) BatchNorm with a small batch size and large epsilon $\epsilon$ (hyperparameter): with batchnorm as $y=(x-u)/(s+\epsilon)$, a small $s$ and $\epsilon$ can give high magnitudes of $y$. (4) The final batch in an epoch may be small if the dataset size is not divisible by the batch size (in the torch DataLoader there's a flag drop_last); small batch size = high variance. Now, why do you see it with Adam and not with SGD? Clearly you reached a lower loss with Adam. As noted before, if 99.9% of the dataset has its optimum at one point except for some observation, this may be that observation screaming "NO" and jumping out of the local minimum when randomly selected into a batch. If you see it every dataset_size//batch_size + 1 steps, it's probably due to the final batch being small. I bet you'll see SGD spike too if you let it reach a lower loss. Bonus: your really fast decrease with a momentum optimizer (Adam) could mean that some layer (input layer? output layer?) is initialized way out of scale (too large/small weights).
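The first point can be seen directly: an unclipped log-likelihood loss explodes on a single confident wrong prediction, while clipping bounds it (the probability and eps values here are illustrative only).

```python
import math

def log_loss(y_true, p, eps=0.0):
    """Binary cross-entropy for a single example, with optional clipping."""
    if eps > 0:
        p = min(max(p, eps), 1.0 - eps)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# A confident wrong prediction on an outlier: p ~ 0 for a positive label.
bad_p = 1e-300

unclipped = log_loss(1, bad_p)           # ~ -log(1e-300) = 690.8
clipped = log_loss(1, bad_p, eps=1e-7)   # bounded at -log(1e-7) = 16.1

print(f"unclipped loss: {unclipped:.1f}")
print(f"clipped   loss: {clipped:.1f}")
# The gradient magnitude scales with this blow-up, so a single outlier in a
# mini-batch can produce exactly the spike seen in the training curve.
```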
Explanation of Spikes in training loss vs. iterations with Adam Optimizer
I've spent insane amount of time debugging exploding gradients and similar behaviour. Your answer will be dependent on loss function, data, architecture etc. There's hundreds of reasons. I'll name a f
Explanation of Spikes in training loss vs. iterations with Adam Optimizer I've spent insane amount of time debugging exploding gradients and similar behaviour. Your answer will be dependent on loss function, data, architecture etc. There's hundreds of reasons. I'll name a few. Loss-dependent. Loglikelihood-losses needs to be clipped, if not, it may evaluate near log(0) for bad predictions/outliers in dataset, causing exploding gradients. Most packages (torch,tensorflow etc) implements clipping per default for their losses. Outliers in dataset. BatchNorm with small batchsize and large epsilon $\epsilon$ (hyperparameter). With batchnorm as $y=(x-u)/(s+\epsilon)$, then with small $s$ and $\epsilon$ you can get high magnitudes of $y$ Final batch in an epoch may be small if dataset undivisible by batchsize. In torch dataloader there's a flag drop_last. Small batchsize=high variance Now to why you see it with Adam and not with SGD? Clearly you reached lower loss with Adam. As noted before, If 99.9% of dataset has optima at one point except some observation, this may be that observation screaming "NO" and jumping out from the local minima when randomly selected to a batch. If you see it every dataset_size//batch_size+1-steps, it's probably due to final batchsize being small. I bet you'll see SGD spike too if you let it reach lower loss. Bonus: Your really fast decrease with momentum-optimizer (Adam) could mean that some layer (input layer? output layer?) is initialized way out of scale (to large/small weights).
Explanation of Spikes in training loss vs. iterations with Adam Optimizer I've spent insane amount of time debugging exploding gradients and similar behaviour. Your answer will be dependent on loss function, data, architecture etc. There's hundreds of reasons. I'll name a f
8,834
Why is xgboost overfitting in my task? Is it fine to accept this overfitting?
Is overfitting so bad that you should not pick a model that does overfit, even though its test error is smaller? No. But you should have a justification for choosing it. This behavior is not restricted to XGBoost. It is a common thread among all machine learning techniques: finding the right tradeoff between underfitting and overfitting. The formal framing is the bias-variance tradeoff (Wikipedia).

The bias-variance tradeoff

The following is a simplification of the bias-variance tradeoff, to help justify the choice of your model. We say that a model has high bias if it is not able to fully use the information in the data. It is too reliant on general information, such as the most frequent case, the mean of the response, or a few powerful features. Bias can come from wrong assumptions, for example assuming that the variables are normally distributed or that the model is linear. We say that a model has high variance if it is using too much information from the data. It relies on information that is relevant only in the training set presented to it, which does not generalize well enough. Typically, the model will change a lot if you change the training set, hence the name "high variance". Those definitions are very similar to the definitions of underfitting and overfitting. However, they are often simplified into opposites, as in: the model is underfitting if both the training and test error are high (the model is too simple); the model is overfitting if the test error is higher than the training error (the model is too complex). Those simplifications are of course helpful, as they help in choosing the right complexity for the model. But they overlook an important point: (almost) every model has both a bias and a variance component. The underfitting/overfitting description tells you that you have too much bias or too much variance, but you (almost) always have both.

If you want more information about the bias-variance tradeoff, there are a lot of helpful visualisations and good resources available through Google. Every machine learning textbook will have a section on the bias-variance tradeoff; here are a few: An Introduction to Statistical Learning and The Elements of Statistical Learning (available here); Pattern Recognition and Machine Learning, by Christopher Bishop; Machine Learning: A Probabilistic Perspective, by Kevin Murphy. Also, a nice blog post that helped me grasp it is Scott Fortmann-Roe's Understanding the Bias-Variance Tradeoff.

Application to your problem

So you have two models, $$ \begin{array}{lrrl} & \text{Train MAE} & \text{Test MAE} &\\ \text{MARS} & \sim4.0 & \sim4.0 & \text{Low variance, higher bias},\\ \text{XGBoost} & \sim0.3 & \sim2.4 & \text{Higher variance, lower bias},\\ \end{array} $$ and you need to pick one. To do so, you need to define what a better model is. The parameters that should enter your decision are the complexity and the performance of the model. How many "units" of complexity are you willing to exchange for one "unit" of performance? More complexity is associated with higher variance. If you want your model to generalize well on a dataset that is a little bit different from the one you trained on, you should aim for less complexity. If you want a model that you can understand easily, you can do so at the cost of performance by reducing the complexity of the model. If you are aiming for the best performance on a dataset that you know comes from the same generative process as your training set, you can manipulate complexity in order to optimize your test error and use this as a metric. This happens when your training set is randomly sampled from a larger set and your model will be applied on that set. This is the case in most Kaggle competitions, for example. The goal here is not to find a model that "does not overfit". It is to find the model that has the best bias-variance tradeoff. In this case, I would argue that the reduction in bias accomplished by the XGBoost model is good enough to justify the increase in variance.

What can you do

You can probably do better by tuning the hyperparameters. Increasing the number of rounds and reducing the learning rate is one possibility. Something that is "weird" about gradient boosting is that running it well past the point where the training error has hit zero seems to still improve the test error (as discussed here: Is Deeper Better Only When Shallow Is Good?). You can try training your model a little bit longer once you have set the other parameters. The depth of the trees you grow is a very good place to start. Note that for every one unit of depth, you double the number of leaves to be constructed. If you were to grow trees of depth two instead of depth 16, they would take $1/2^{14}$ of the time to build! You should try growing more, smaller trees. The reason is that the depth of the tree should represent the degree of feature interaction. This may be jargon, but if your features have a degree of interaction of 3 (roughly: a combination of 4 features is not more powerful than a combination of 3 of those features + the fourth), then growing trees deeper than 3 is detrimental. Two trees of depth three have more generalization power than one tree of depth four. This is a rather complicated concept and I will not go into it right now, but you can check this collection of papers for a start. Also, note that deep trees lead to high variance! Using subsampling, known as bagging, is great for reducing variance. If your individual trees have high variance, bagging averages the trees, and the average has less variance than the individual trees. If, after tuning the depth of your trees, you still encounter high variance, try increasing subsampling (that is, reducing the fraction of data used).
Subsampling of the feature space also achieves this goal.
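The variance-reduction claim for bagging can be illustrated with a toy simulation (independent noisy estimators standing in for trees; real bagged trees are correlated, so the reduction is smaller in practice, but the direction is the same).

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 5.0

def noisy_estimator():
    """A single high-variance estimator (stands in for one deep tree)."""
    return TRUE_VALUE + random.gauss(0.0, 2.0)

def bagged_estimator(n_trees=25):
    """Average of several independent noisy estimators (stands in for bagging)."""
    return statistics.fmean(noisy_estimator() for _ in range(n_trees))

single = [noisy_estimator() for _ in range(5_000)]
bagged = [bagged_estimator() for _ in range(5_000)]

# Both are unbiased, but the bagged average has far lower variance
# (for independent estimators, variance shrinks by a factor n_trees).
print(f"single tree variance = {statistics.variance(single):.2f}")
print(f"bagged (25) variance = {statistics.variance(bagged):.2f}")
```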
8,835
Where's the graph theory in graphical models?
There is very little true mathematical graph theory in probabilistic graphical models, where by true mathematical graph theory I mean proofs about cliques, vertex orders, max-flow min-cut theorems, and so on. Even something as fundamental as Euler's theorem and the handshaking lemma is not used, though I suppose one might invoke them to check some property of computer code used to update probabilistic estimates. Moreover, probabilistic graphical models rarely use more than a small subset of the classes of graphs (such as multigraphs). Theorems about flows in graphs are not used in probabilistic graphical models. If student A were an expert in probability but knew nothing about graph theory, and student B were an expert in graph theory but knew nothing about probability, then A would certainly learn and understand probabilistic graphical models faster than B would.
8,836
Where's the graph theory in graphical models?
In a strict sense, graph theory seems loosely connected to PGMs. However, graph algorithms come in handy. PGMs started with message-passing inference, which is a subset of the general class of message-passing algorithms on graphs (maybe that is the reason for the word "graphical" in their name). Graph-cut algorithms are widely used for Markov random field inference in computer vision; they are based on results akin to the Ford–Fulkerson theorem (max flow equals min cut); the most popular algorithms are probably Boykov–Kolmogorov and IBFS. References: [Murphy, 2012, §22.6.3] covers the use of graph cuts for MAP inference. See also [Kolmogorov and Zabih, 2004; Boykov et al., PAMI 2001], which cover optimization rather than modelling.
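As a concrete sketch of the max-flow/min-cut machinery these methods rest on, here is a minimal Edmonds–Karp implementation (not the optimised Boykov–Kolmogorov or IBFS algorithms) run on a toy two-pixel graph; the node names and capacities are made up for illustration, with the s→pixel and pixel→t edges playing the role of unary costs and the pixel↔pixel edges the smoothness term.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: BFS augmenting paths on a dict-of-dicts capacity graph."""
    # Residual capacities (copied, with reverse edges initialised to 0).
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow          # no augmenting path left: flow = min cut
        # Find the bottleneck along the path and push flow through it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Tiny two-pixel binary MRF: source = label 0, sink = label 1.
cap = {
    's': {'p1': 4, 'p2': 1},
    'p1': {'p2': 2, 't': 1},
    'p2': {'p1': 2, 't': 5},
}
print(max_flow(cap, 's', 't'))  # max flow = min cut = 4 here
```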
8,837
Where's the graph theory in graphical models?
There has been some work investigating the link between the ease of decoding of low-density parity-check (LDPC) codes (which get excellent results when you consider them as a probabilistic graph and apply loopy belief propagation) and the girth of the graph formed by the parity-check matrix. This link to girth goes right back to when LDPCs were invented [1], but there has been further work in the last decade or so [2][3] after they were separately rediscovered by MacKay et al. [4] and their properties noticed. I often see Pearl's comment on the convergence time of belief propagation depending on the diameter of the graph being cited, but I don't know of any work looking at graph diameters in non-tree graphs and what effect that has. [1] R. G. Gallager. Low Density Parity Check Codes. M.I.T. Press, 1963. [2] I. E. Bocharova, F. Hug, R. Johannesson, B. D. Kudryashov, and R. V. Satyukov. New low-density parity-check codes with large girth based on hypergraphs. In Information Theory Proceedings (ISIT), 2010 IEEE International Symposium on, pages 819–823, 2010. [3] S. C. Tatikonda. Convergence of the sum-product algorithm. In Information Theory Workshop, 2003. Proceedings. 2003 IEEE, pages 222–225, 2003. [4] David J. C. MacKay and R. M. Neal. Near Shannon limit performance of low density parity check codes. Electronics Letters, 33(6):457–458, 1997.
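To make the girth notion concrete, here is a small sketch (the parity-check matrix H is a toy example, not a real LDPC code) that builds the bipartite Tanner graph of H and computes its girth by BFS from every vertex; taking the minimum candidate over all start vertices gives the exact girth of an unweighted graph.

```python
from collections import deque

def girth(adj):
    """Length of the shortest cycle in an undirected graph (inf if acyclic)."""
    best = float('inf')
    for start in adj:
        dist, parent = {start: 0}, {start: None}
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    q.append(v)
                elif parent[u] != v:
                    # Non-tree edge closes a cycle; candidate length is never
                    # below the girth, and the true girth is hit for some start.
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# Toy parity-check matrix: rows = check nodes, columns = variable nodes.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
adj = {}
for i, row in enumerate(H):
    for j, bit in enumerate(row):
        if bit:
            adj.setdefault(('c', i), set()).add(('v', j))
            adj.setdefault(('v', j), set()).add(('c', i))

# No two rows share two columns, so there is no 4-cycle; the girth is 6.
print(girth(adj))
```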
8,838
Where's the graph theory in graphical models?
One successful application of graph algorithms to probabilistic graphical models is the Chow-Liu algorithm. It solves the problem of finding the optimal (tree) graph structure and is based on the maximum spanning tree (MST) algorithm. A joint probability over a tree graphical model can be written as: \begin{equation} p(x|T) = \prod_{t\in V} p(x_t) \prod_{(s,t) \in E} \frac{p(x_s, x_t)}{p(x_s)p(x_t)} \end{equation} We can write down a normalized log-likelihood as follows: \begin{equation} \frac{1}{N}\log P(D|\theta, T) = \sum_{t\in V}\sum_k p_{ML}(x_t=k) \log p_{ML}(x_t=k) + \sum_{(s,t)\in E} I(x_s; x_t|\theta_{st}) \end{equation} where $I(x_s;x_t|\theta_{st})$ is the mutual information between $x_s$ and $x_t$ under the empirical maximum-likelihood (ML) distribution, obtained by counting the number of times a node $x$ was in state $k$. Since the first term is independent of the topology $T$, we can ignore it and focus on maximizing the second term. The log-likelihood is maximized by computing the maximum weight spanning tree, where the edge weights are the pairwise mutual information terms $I(x_s;x_t|\theta_{st})$. The maximum weight spanning tree can be found using Prim's algorithm or Kruskal's algorithm.
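The pipeline can be sketched in a few lines of Python (toy binary data and a hand-rolled Kruskal; the variable names and the chain-structured generative process are made up for illustration): estimate pairwise mutual information from samples, then greedily take the heaviest edges that do not create a cycle.

```python
import math
import random
from itertools import combinations

random.seed(0)

# Toy data: x0 drives x1, x1 drives x2; x0 and x2 are only linked through x1,
# so the Chow-Liu tree should pick the edges (0,1) and (1,2).
samples = []
for _ in range(5_000):
    x0 = random.random() < 0.5
    x1 = x0 if random.random() < 0.9 else not x0
    x2 = x1 if random.random() < 0.9 else not x1
    samples.append((int(x0), int(x1), int(x2)))

def mutual_info(s, t):
    """Empirical mutual information I(x_s; x_t) from the samples."""
    n = len(samples)
    joint, ps, pt = {}, {}, {}
    for row in samples:
        a, b = row[s], row[t]
        joint[a, b] = joint.get((a, b), 0) + 1
        ps[a] = ps.get(a, 0) + 1
        pt[b] = pt.get(b, 0) + 1
    return sum(c / n * math.log((c / n) / (ps[a] / n * pt[b] / n))
               for (a, b), c in joint.items())

# Kruskal's algorithm with edge weights = pairwise mutual information,
# taking the heaviest edges first -> maximum weight spanning tree.
n_vars = 3
edges = sorted(((mutual_info(s, t), s, t)
                for s, t in combinations(range(n_vars), 2)), reverse=True)
parent = list(range(n_vars))
def find(x):
    while parent[x] != x:
        x = parent[x]
    return x
tree = []
for w, s, t in edges:
    rs, rt = find(s), find(t)
    if rs != rt:              # adding the edge does not create a cycle
        parent[rs] = rt
        tree.append((s, t))
print(sorted(tree))
```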
8,839
Open source tools for visualizing multi-dimensional data?
How about R with ggplot2? Other tools that I really like: Processing, Prefuse, and Protovis.
8,840
Open source tools for visualizing multi-dimensional data?
Mondrian: exploratory data analysis with a focus on large data and databases. iPlots: a package for the R statistical environment which provides high-interaction statistical graphics, written in Java.
8,841
Open source tools for visualizing multi-dimensional data?
The lattice package in R. Lattice is a powerful and elegant high-level data visualization system, with an emphasis on multivariate data, that is sufficient for typical graphics needs and is also flexible enough to handle most nonstandard requirements. Quick-R has a quick introduction.
8,842
Open source tools for visualizing multi-dimensional data?
GGobi and the R links to GGobi are really rather good for this. There are simpler visualisations (iPlots is very nice and also interactive, as mentioned), but it depends on whether you are doing something more specialised. For example, TreeView lets you visualise the kind of cluster dendrograms you get out of microarrays.
8,843
Open source tools for visualizing multi-dimensional data?
Python's matplotlib
8,844
Open source tools for visualizing multi-dimensional data?
Viewpoints is useful for multi-variate data sets.
8,845
Open source tools for visualizing multi-dimensional data?
t-SNE has many open source implementations. One of the easiest to use is probably sklearn.manifold.TSNE. sklearn.manifold also contains other manifold learning methods for projecting your data to 2D.
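A minimal usage sketch, assuming scikit-learn is installed (the parameters shown are standard TSNE arguments; the two-cluster data is made up):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Two well-separated clusters in 10 dimensions.
X = np.vstack([rng.normal(0.0, 1.0, size=(30, 10)),
               rng.normal(8.0, 1.0, size=(30, 10))])

# Project to 2D; perplexity must be smaller than the number of samples.
emb = TSNE(n_components=2, perplexity=15, init="pca",
           random_state=0).fit_transform(X)

print(emb.shape)  # one 2D point per input row, ready to scatter-plot
```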
8,846
Open source tools for visualizing multi-dimensional data?
Also look at the SCaVis data plotting library. It runs on any platform, since it is written in Java. It supports many data containers and plot styles (2D, 3D, etc.)
Best bandit algorithm?
A paper from NIPS 2011 ("An empirical evaluation of Thompson Sampling") shows, in experiments, that Thompson Sampling beats UCB. UCB is based on choosing the lever that promises the highest reward under optimistic assumptions (i.e. the variance of your estimate of the expected reward is high, therefore you pull levers that you don't know that well). Instead, Thompson Sampling is fully Bayesian: it generates a bandit configuration (i.e. a vector of expected rewards) from a posterior distribution, and then acts as if this was the true configuration (i.e. it pulls the lever with the highest expected reward). The Bayesian Control Rule ("A Minimum Relative Entropy Principle for Learning and Acting", JAIR), a generalization of Thompson Sampling, derives Thompson Sampling from information-theoretic principles and causality. In particular, it is shown that the Bayesian Control Rule is the optimum strategy when you want to minimize the KL between your strategy and the (unknown) optimum strategy and if you take into account causal constraints. The reason why this is important is because this can be viewed as an extension of Bayesian inference to actions: Bayesian inference can be shown to be the optimal prediction strategy when your performance criterion is the KL between your estimator and the (unknown) true distribution.
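As an illustration of the sampling step described above, here is a minimal Thompson Sampling sketch for a Bernoulli bandit with independent Beta(1, 1) priors (my own toy example, not the paper's code; the arm probabilities are made up):

```python
import random

def thompson_sampling(true_probs, horizon, seed=0):
    """Bernoulli Thompson Sampling with independent Beta(1, 1) priors per arm."""
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1] * k  # posterior successes + 1
    beta = [1] * k   # posterior failures + 1
    total_reward = 0
    for _ in range(horizon):
        # Draw one plausible mean reward per arm from its posterior,
        # then act as if that draw were the truth: pull the argmax.
        draws = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: draws[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward

# Hypothetical arm probabilities; the 0.8 arm should collect most pulls.
print(thompson_sampling([0.2, 0.5, 0.8], horizon=2000, seed=1))
```

The "generate a configuration, then act greedily on it" structure is exactly the scheme the answer describes: exploration comes for free from posterior uncertainty.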
Best bandit algorithm?
UCB is indeed near optimal in the stochastic case (up to a $\log T$ factor for a $T$-round game), and up to a gap in Pinsker's inequality in a more problem-dependent sense. A recent paper of Audibert and Bubeck removes this log dependence in the worst case, but has a worse bound in the favorable case when different arms have well-separated rewards. In general, UCB is one candidate from a larger family of algorithms. At any point in the game, you can look at all arms that are not "disqualified", that is, whose upper confidence bound is not smaller than the lower confidence bound of any other arm. Picking based on any distribution over such qualified arms constitutes a valid strategy and achieves a similar regret up to constants. Empirically, I do not think there has been a significant evaluation of many different strategies, but I think UCB is often quite good. Most of the more recent research has focused on extending bandit problems beyond the simple K-armed setting with stochastic rewards to very large (or infinite) action spaces, with or without side information, and under stochastic or adversarial feedback. There has also been work in scenarios where the performance criteria are different (such as identification of the best arm only).
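For concreteness, a minimal UCB1 sketch on a Bernoulli bandit (a toy illustration of mine, not from any of the cited papers; the confidence radius follows the classic UCB1 formula):

```python
import math
import random

def ucb1(true_probs, horizon, seed=0):
    """UCB1 on a Bernoulli bandit: pull the arm with the highest optimistic estimate."""
    rng = random.Random(seed)
    k = len(true_probs)
    counts = [0] * k   # pulls per arm
    sums = [0.0] * k   # total reward per arm
    total = 0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialize: pull each arm once
        else:
            # empirical mean plus confidence radius (optimism under uncertainty)
            arm = max(range(k),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1 if rng.random() < true_probs[arm] else 0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total, counts

# Hypothetical arm probabilities; counts should concentrate on the 0.8 arm.
total, counts = ucb1([0.2, 0.5, 0.8], horizon=2000, seed=1)
print(counts)
```

An arm stays attractive either because its empirical mean is high or because it has been pulled rarely (large confidence radius), which is the optimism the answer refers to.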
Best bandit algorithm?
The current state of the art could be summed up like this:

stochastic: UCB and variants (regret in $R_T = O(\frac{K \log T}{\Delta})$)
adversarial: EXP3 and variants (regret in $\tilde{R}_T = O(\sqrt{T K \log K})$)
contextual: it's complicated

where $T$ is the number of rounds, $K$ the number of arms, and $\Delta$ the true difference between the best and second-best arm (the gap).
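A minimal EXP3 sketch for the adversarial case (my own toy illustration, run here against stochastic Bernoulli arms purely to exercise the algorithm; gamma is an arbitrary mixing choice):

```python
import math
import random

def exp3(true_probs, horizon, gamma=0.1, seed=0):
    """EXP3 with mixing parameter gamma; rewards must lie in [0, 1]."""
    rng = random.Random(seed)
    k = len(true_probs)
    weights = [1.0] * k
    total = 0
    for _ in range(horizon):
        wsum = sum(weights)
        # exponential weights mixed with uniform exploration
        probs = [(1 - gamma) * w / wsum + gamma / k for w in weights]
        arm = rng.choices(range(k), weights=probs)[0]
        reward = 1 if rng.random() < true_probs[arm] else 0
        # importance-weighted estimate keeps the reward update unbiased
        estimate = reward / probs[arm]
        weights[arm] *= math.exp(gamma * estimate / k)
        total += reward
    return total

# Hypothetical arms; EXP3 makes no stochastic assumptions about them.
print(exp3([0.2, 0.5, 0.8], horizon=2000, seed=1))
```

The importance-weighted estimate is what lets EXP3 cope with adversarial rewards, at the price of the worse $\sqrt{T K \log K}$ regret compared with UCB's logarithmic rate in the stochastic case.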
Calculating PCA variance explained [duplicate]
Yes, that's correct. summary.prcomp brings that information as well:

summary(pca)
#Importance of components:
#                          PC1    PC2     PC3     PC4
#Standard deviation     1.5749 0.9949 0.59713 0.41645
#Proportion of Variance 0.6201 0.2474 0.08914 0.04336
#Cumulative Proportion  0.6201 0.8675 0.95664 1.00000

Compare to:

rbind(
  SD = sqrt(eigs),
  Proportion = eigs / sum(eigs),
  Cumulative = cumsum(eigs) / sum(eigs))
#                [,1]      [,2]      [,3]       [,4]
#SD         1.5748783 0.9948694 0.5971291 0.41644938
#Proportion 0.6200604 0.2474413 0.0891408 0.04335752
#Cumulative 0.6200604 0.8675017 0.9566425 1.00000000
REML or ML to compare two mixed effects models with differing fixed effects, but with the same random effect?
Zuur et al., and Faraway (from @janhove's comment above) are right; using likelihood-based methods (including AIC) to compare two models with different fixed effects that are fitted by REML will generally lead to nonsense. Faraway (2006), Extending the linear model with R (p. 156): The reason is that REML estimates the random effects by considering linear combinations of the data that remove the fixed effects. If these fixed effects are changed, the likelihoods of the two models will not be directly comparable. These two questions discuss the issue further: Allowed comparisons of mixed effects models (random effects primarily); REML vs ML stepAIC.
REML or ML to compare two mixed effects models with differing fixed effects, but with the same random effect?
I'll give an example to illustrate why the REML likelihood cannot be used for things like AIC comparisons. Imagine that we have a normal mixed effects model. Let $X$ denote the design matrix and assume that this matrix has full rank. We can find a reparametrization of the mean value space, given by the matrix $\tilde{X}$. The two matrices span the same linear subspace of $\mathbb{R}^n$. Thus, the columns of $\tilde{X}$ can be written as linear combinations of the columns of $X$. Therefore, we can find a square matrix, $B$, such that $\tilde{X} = XB$. Furthermore, $B$ has full rank (this can be proven by assuming that it didn't; then neither would $\tilde{X}$, a contradiction). This means that $B$ is invertible. If we start out by using the second parametrization of the mean value space and let $V$ be a covariance matrix, then the REML criterion we should maximize is (I'm omitting a constant) $ |V|^{-1/2}|\tilde{X}'V^{-1}\tilde{X}|^{-1/2}\exp(-(y-\tilde{X}\tilde{\beta})'V^{-1}(y-\tilde{X}\tilde{\beta})/2) $, over the parameter set, where $\tilde{\beta} = (\tilde{X}'V^{-1}\tilde{X})^{-1}\tilde{X}'V^{-1}y$. Using the fact that $\tilde{X} = XB$, so that $|\tilde{X}'V^{-1}\tilde{X}|^{1/2} = |B|\,|X'V^{-1}X|^{1/2}$ while the fitted values satisfy $\tilde{X}\tilde{\beta} = X\bar{\beta}$, we can rewrite this as $ |B|^{-1}|V|^{-1/2}|X'V^{-1}X|^{-1/2}\exp(-(y-X\bar{\beta})'V^{-1}(y-X\bar{\beta})/2) $, where $\bar{\beta} = (X'V^{-1}X)^{-1}X'V^{-1}y$. This is the REML likelihood for the other parametrization times $|B|^{-1}$. We therefore have an example of two different parametrizations of the same model giving different likelihood values, assuming that $|B| \neq 1$ (such a matrix can easily be found). The same parameter value will maximize the criterion in both cases, but the value of the likelihood will be different.
This shows that there is an arbitrary element in the likelihood value and therefore illustrates why one cannot use the value of the likelihood for comparison between models with different fixed effects: you would be able to change the results simply by changing the parametrization of the mean value space in one of the models. This is an example of why REML should not be used when comparing models with different fixed effects. REML, however, often estimates the random effects parameters better, and therefore it is sometimes recommended to use ML for comparisons and REML for estimating a single (perhaps final) model.
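A small numerical sketch of the argument (my own illustration with numpy, not part of the original answer; $V$ is taken to be the identity for simplicity): rescaling one column of $X$ leaves the mean value space untouched yet shifts the log REML criterion by exactly $-\log|B|$.

```python
import numpy as np

def log_reml_criterion(X, V, y):
    """Log REML criterion (up to an additive constant) for design matrix X."""
    Vi = np.linalg.inv(V)
    XtViX = X.T @ Vi @ X
    beta = np.linalg.solve(XtViX, X.T @ Vi @ y)
    r = y - X @ beta
    return (-0.5 * np.linalg.slogdet(V)[1]
            - 0.5 * np.linalg.slogdet(XtViX)[1]
            - 0.5 * r @ Vi @ r)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)
V = np.eye(20)
B = np.diag([2.0, 1.0, 1.0])  # full rank, |B| = 2: same column space, new parametrization

l_orig = log_reml_criterion(X, V, y)
l_repar = log_reml_criterion(X @ B, V, y)
print(l_repar - l_orig)  # differs by -log(2) even though the models are identical
```

The two fitted models are the same in every respect that matters, so any likelihood-based comparison (AIC included) between REML fits with different fixed-effects parametrizations is meaningless.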
Who invented the decision tree?
Good question. @G5W is on the right track in referencing Wei-Yin Loh's paper. Loh's paper discusses the statistical antecedents of decision trees and, correctly, traces their locus back to Fisher's (1936) paper on discriminant analysis -- essentially regression classifying multiple groups as the dependent variable -- and from there, through AID, THAID, CHAID and CART models. The short answer is that the first article I've been able to find that develops a "decision tree" approach dates to 1959 and a British researcher, William Belson, in a paper titled Matching and Prediction on the Principle of Biological Classification, (JRSS, Series C, Applied Statistics, Vol. 8, No. 2, June, 1959, pp. 65-75), whose abstract describes his approach as one of matching population samples and developing criteria for doing so: In this article Dr Belson describes a technique for matching population samples. This depends on the combination of empirically developed predictors to give the best available predictive, or matching, composite. The underlying principle is quite distinct from that inherent in the multiple correlation method. The "long" answer is that other, even earlier streams of thought seem relevant here. For instance, the simple age-gender cohort breakouts employed in actuarial tables of mortality offer a framework for thinking about decisions that dates back several centuries. It could also be argued that the Babylonians' use of quadratic equations, which were nonlinear in the variables (not in the parameters, http://www-history.mcs.st-and.ac.uk/HistTopics/Quadratic_etc_equations.html), has relevance, at least insofar as it presages parametric models of logistic growth (I recognize that this is a stretch comment, please read on for a fuller motivation of it). In addition, philosophers have long recognized and theorized about the existence of hierarchically arranged, qualitative information, e.g., Aristotle's book on Categories. 
The concept and assumption of a hierarchy is key here. Other relevant, much later discoveries were in pushing beyond the boundaries of 3-D Euclidean space in David Hilbert's development of infinite, Hilbert space, combinatorics, discoveries in physics related to 4-D Minkowski space, distance and time, the statistical mechanics behind Einstein's theory of special relativity as well as innovations in the theory of probability relating to models of markov chains, transitions and processes. The point here is that there can be a significant lag between any theory and its application -- in this case, the lag between theories about qualitative information and developments related to their empirical assessment, prediction, classification and modeling. A best guess is that these developments can be associated with the history of increasing sophistication of statisticians, mostly in the 20th c, in developing models leveraging scale types other than continuous (e.g., nominal or, more simply, categorical information), count data models (poisson), cross-classified contingency tables, distribution-free nonparametric statistics, multidimensional scaling (e.g., J.G. Carroll, among others), models with qualitative dependent variables such as two group logistic regression as well as correspondence analysis (mostly in Holland and France in the 70s and 80s). There is a wide literature that discusses and compares two group logistic regression with two group discriminant analysis and, for fully nominal features, finds them providing equivalent solutions (e.g., Dillon and Goldstein's Multivariate Analysis, 1984). J.S. 
Cramer's article on the history of logistic regression (The History of Logistic Regression, http://papers.tinbergen.nl/02119.pdf) describes it as originating with the development of the univariate, logistic function or the classic S-shaped curve: The survival of the term logistic and the wide application of the device have been determined decisively by the personal histories and individual actions of a few scholars... Deterministic models of the logistic curve originated in 1825, when Benjamin Gompertz (https://en.wikipedia.org/wiki/Benjamin_Gompertz) published a paper developing the first truly nonlinear logistic model (nonlinear in the parameters and not just the variables as with the Babylonians) -- the Gompertz model and curve. I would suggest that another important link in this chain leading to the invention of decision trees was the Columbia sociologist Paul Lazarsfeld's work on latent structure models. His work began in the 30s, continued during WWII with his content analysis of German newspapers for the nascent OSS (later the CIA, as discussed in John Naisbett's book Megatrends) and finally published in 1950. Andersen describes it this way (Latent Structure Analysis: A Survey, Erling B. Andersen, Scandinavian Journal of Statistics, Vol. 9, No. 1, 1982, pp. 1-12): The foundation for the classical theory of latent structure analysis was developed by Paul Lazarsfeld in 1950 in a study of ethnocentrism of American soldiers during WWII. Lazarsfeld was primarily interested in developing the conceptual foundation of latent structure models...The statistical methods developed by Lazarsfeld were, however, rather primitive...An early attempt to derive efficient estimation methods and test procedures was made by Lazarsfeld's colleague at Columbia University, T.W. 
Anderson, who in a paper (Psychometrika, March 1954, Volume 19, Issue 1, pp 1–10, On estimation of parameters in latent structure analysis), developed an efficient estimation method for the parameters of the latent class model...In order to introduce the framework (of latent class models) we shall briefly outline the basic concepts...and use a notational system developed much later by Goodman (1974a)...The data are given in the form of a multiple contingency table... There is a useful distinction worth making here, as it can be related to the progression from AID to CHAID (later CART), between contingency table-based models (all variables in the model are nominally scaled) and more recent latent class models (more precisely, finite mixture models based on "mixtures" of scales and distributions, e.g., Kamakura and Russell, 1989, A Probabilistic Choice Model for Market Segmentation and Elasticity Structure) in how they create the model's residuals. For the older contingency table models, the cell counts inherent in the fully cross-classified table formed the basis for the "replications" and, therefore, the heterogeneity in the model's residuals used in the partitioning into classes. On the other hand, the more recent mixture models rely on repeated measures across a single subject as the basis for partitioning the heterogeneity in the residuals. This response is not suggesting a direct connection between latent class models and decision trees. The relevance to AID and CHAID can be summarized in the statistics employed to evaluate the models, AID uses a continuous F distribution while CHAID uses the chi-square distribution, appropriate for categorical information. Rather in their analysis and modeling of contingency tables, LCMs constitute, in my opinion, an important piece in the puzzle or narrative leading up to the development of decision trees, along with the many other innovations already noted. 
CHAID was a later development, first proposed in a 1980 PhD dissertation by South African Gordon Kass as outlined in this Wiki piece on CHAID (https://en.wikipedia.org/wiki/CHAID). Of course, CART came a few years later in the 80s with Breiman, et al's, now famous book Classification and Regression Trees. AID, CHAID and CART all posit tree-like, hierarchically arranged structures as the optimal representation of reality. They just go about this using differing algorithms and methods. To me, the next steps in this progressive chain of innovation are the emergence of heterarchical theories of structure. As defined in this Wiki article, heterarchies "are a system of organization where the elements of the organization are unranked (non-hierarchical) or where they possess the potential to be ranked a number of different ways" (https://en.wikipedia.org/wiki/Heterarchy or for a deeper, more philosophic perspective on heterarchy see Kontopoulos, The Logics of Social Structure). From an empirical point of view, the analysis and modeling of network structures are most representative of this historical development in the understanding of structure (e.g., Freeman's book The Development of Social Network Analysis). While many network analysts will try and force a hierarchical arrangement on the resulting network, this is more an expression of ingrained and unconscious assumptions than it is a statement about the empirical reality of multiplex network structure in a complex world. This response is suggesting that the arc of the evolution leading to the development of decision trees created new questions or dissatisfaction with existing "state-of-the-art" methods at each step or phase in the process, requiring new solutions and new models. In this case, dissatisfactions can be seen in the limitations of modeling two groups (logistic regression) and recognition of a need to widen that framework to more than two groups. 
Dissatisfactions with unrepresentative assumptions of an underlying normal distribution (discriminant analysis or AID) as well as comparison with the relative "freedom" to be found in employing nonparametric, distribution-free assumptions and models (e.g., CHAID and CART). As suggested, the origins of decision trees almost certainly has a long history that goes back centuries and is geographically dispersed. Multiple streams in human history, science, philosophy and thought can be traced in outlining the narrative leading up to the development of the many flavors of decision trees extant today. I will be the first to acknowledge the significant limitations of my brief sketch of this history. /** Addendums **/ This 2014 article in the New Scientist is titled Why do we love to organise knowledge into trees?(https://www.newscientist.com/article/mg22229630-800-why-do-we-love-to-organise-knowledge-into-trees/), It's a review of data visualization guru Manuel Lima's book The Book of Trees which traces the millenia old use of trees as a visualization and mnemonic aid for knowledge. There seems little question but that the secular and empirical models and graphics inherent in methods such as AID, CHAID and CART represents the continued evolution of this originally religious tradition of classification. In this video (posted online by Salford Systems, implementers of CART software), A Tribute to Leo Breiman, Breiman talks about the development of his thinking that led to the CART methodology. It all started with a wall plastered with the silhouettes of different WWII-era battleships. 
https://www.salford-systems.com/videos/conferences/cart-founding-fathers/a-tribute-to-leo-breiman?utm_source=linkedin&utm_medium=social&utm_content=3599323 In reading the introduction to Denis Konig's 1936 Theory of Finite and Infinite Graphs, widely viewed as providing the first rigorous, mathematical grounding to a field previously viewed as a source of amusement and puzzles for children, Tutte notes (p. 13) that chapter 4 (beginning on p. 62) of Konig's book is devoted to trees in graph theory. Tutte's explanation of Konig's definition of a tree is "where an 'acyclic' graph is a graph with no circuit, a tree is a finite connected acyclic graph...in other words, in a tree there is one and only one path from a given vertex to another..." To me (and I'm neither a graph theorist nor a mathematician), this suggests that graph theory and its precursors in Poincare's Analysis Situs or Veblen's Cambridge Colloquium lectures on combinatorial topology, may have provided the early intellectual and mathematical antecedents for what later became a topic for statisticians. The first Tree of Knowledge is widely attributed to the neoplatonic philosopher Porphyry who, around 270 CE wrote an Introduction to Logic that used a metaphorical tree to describe and organize knowledge ... http://www.historyofinformation.com/expanded.php?id=3857 Just discovered an even earlier reference to a Tree of Knowledge in the Book of Genesis in the Bible, discussed in this Wiki article ... https://en.wikipedia.org/wiki/Tree_of_life_(biblical). Genesis probably dates back to 1,400 BCE based on this reference ... https://www.biblica.com/bible/bible-faqs/when-was-the-bible-written/ Regardless, the Book of Genesis came many centuries before Porphyry.
Who invented the decision tree?
Good question. @G5W is on the right track in referencing Wei-Yin Loh's paper. Loh's paper discusses the statistical antecedents of decision trees and, correctly, traces their locus back to Fisher's (1
Who invented the decision tree? Good question. @G5W is on the right track in referencing Wei-Yin Loh's paper. Loh's paper discusses the statistical antecedents of decision trees and, correctly, traces their locus back to Fisher's (1936) paper on discriminant analysis -- essentially regression classifying multiple groups as the dependent variable -- and from there, through AID, THAID, CHAID and CART models. The short answer is that the first article I've been able to find that develops a "decision tree" approach dates to 1959 and a British researcher, William Belson, in a paper titled Matching and Prediction on the Principle of Biological Classification, (JRSS, Series C, Applied Statistics, Vol. 8, No. 2, June, 1959, pp. 65-75), whose abstract describes his approach as one of matching population samples and developing criteria for doing so: In this article Dr Belson describes a technique for matching population samples. This depends on the combination of empirically developed predictors to give the best available predictive, or matching, composite. The underlying principle is quite distinct from that inherent in the multiple correlation method. The "long" answer is that other, even earlier streams of thought seem relevant here. For instance, the simple age-gender cohort breakouts employed in actuarial tables of mortality offer a framework for thinking about decisions that dates back several centuries. It could also be argued that efforts dating back to the Babylonians employed quadratic equations, which were nonlinear in the variables (not in the parameters, http://www-history.mcs.st-and.ac.uk/HistTopics/Quadratic_etc_equations.html) have relevance, at least insofar as they presage parametric models of logistic growth (I recognize that this is a stretch comment, please read on for a fuller motivation of it). 
In addition, philosophers have long recognized and theorized about the existence of hierarchically arranged, qualitative information, e.g., Aristotle's book on Categories. The concept and assumption of a hierarchy is key here. Other relevant, much later discoveries were in pushing beyond the boundaries of 3-D Euclidean space in David Hilbert's development of infinite, Hilbert space, combinatorics, discoveries in physics related to 4-D Minkowski space, distance and time, the statistical mechanics behind Einstein's theory of special relativity as well as innovations in the theory of probability relating to models of markov chains, transitions and processes. The point here is that there can be a significant lag between any theory and its application -- in this case, the lag between theories about qualitative information and developments related to their empirical assessment, prediction, classification and modeling. A best guess is that these developments can be associated with the history of increasing sophistication of statisticians, mostly in the 20th c, in developing models leveraging scale types other than continuous (e.g., nominal or, more simply, categorical information), count data models (poisson), cross-classified contingency tables, distribution-free nonparametric statistics, multidimensional scaling (e.g., J.G. Carroll, among others), models with qualitative dependent variables such as two group logistic regression as well as correspondence analysis (mostly in Holland and France in the 70s and 80s). There is a wide literature that discusses and compares two group logistic regression with two group discriminant analysis and, for fully nominal features, finds them providing equivalent solutions (e.g., Dillon and Goldstein's Multivariate Analysis, 1984). J.S. 
Cramer's article on the history of logistic regression (The History of Logistic Regression, http://papers.tinbergen.nl/02119.pdf) describes it as originating with the development of the univariate, logistic function or the classic S-shaped curve: The survival of the term logistic and the wide application of the device have been determined decisively by the personal histories and individual actions of a few scholars... Deterministic models of the logistic curve originated in 1825, when Benjamin Gompertz (https://en.wikipedia.org/wiki/Benjamin_Gompertz) published a paper developing the first truly nonlinear logistic model (nonlinear in the parameters and not just the variables as with the Babylonians) -- the Gompertz model and curve. I would suggest that another important link in this chain leading to the invention of decision trees was the Columbia sociologist Paul Lazarsfeld's work on latent structure models. His work began in the 30s, continued during WWII with his content analysis of German newspapers for the nascent OSS (later the CIA, as discussed in John Naisbitt's book Megatrends) and was finally published in 1950. Andersen describes it this way (Latent Structure Analysis: A Survey, Erling B. Andersen, Scandinavian Journal of Statistics, Vol. 9, No. 1, 1982, pp. 1-12): The foundation for the classical theory of latent structure analysis was developed by Paul Lazarsfeld in 1950 in a study of ethnocentrism of American soldiers during WWII. Lazarsfeld was primarily interested in developing the conceptual foundation of latent structure models...The statistical methods developed by Lazarsfeld were, however, rather primitive...An early attempt to derive efficient estimation methods and test procedures was made by Lazarsfeld's colleague at Columbia University, T.W. 
Anderson, who in a paper (Psychometrika, March 1954, Volume 19, Issue 1, pp 1–10, On estimation of parameters in latent structure analysis), developed an efficient estimation method for the parameters of the latent class model...In order to introduce the framework (of latent class models) we shall briefly outline the basic concepts...and use a notational system developed much later by Goodman (1974a)...The data are given in the form of a multiple contingency table... There is a useful distinction worth making here, as it can be related to the progression from AID to CHAID (later CART), between contingency table-based models (all variables in the model are nominally scaled) and more recent latent class models (more precisely, finite mixture models based on "mixtures" of scales and distributions, e.g., Kamakura and Russell, 1989, A Probabilistic Choice Model for Market Segmentation and Elasticity Structure) in how they create the model's residuals. For the older contingency table models, the cell counts inherent in the fully cross-classified table formed the basis for the "replications" and, therefore, the heterogeneity in the model's residuals used in the partitioning into classes. On the other hand, the more recent mixture models rely on repeated measures across a single subject as the basis for partitioning the heterogeneity in the residuals. This response is not suggesting a direct connection between latent class models and decision trees. The relevance to AID and CHAID can be summarized in the statistics employed to evaluate the models: AID uses a continuous F distribution while CHAID uses the chi-square distribution, appropriate for categorical information. Rather, in their analysis and modeling of contingency tables, LCMs constitute, in my opinion, an important piece in the puzzle or narrative leading up to the development of decision trees, along with the many other innovations already noted. 
CHAID was a later development, first proposed in a 1980 PhD dissertation by South African Gordon Kass as outlined in this Wiki piece on CHAID (https://en.wikipedia.org/wiki/CHAID). Of course, CART came a few years later in the 80s with Breiman et al.'s now famous book Classification and Regression Trees. AID, CHAID and CART all posit tree-like, hierarchically arranged structures as the optimal representation of reality. They just go about this using differing algorithms and methods. To me, the next step in this progressive chain of innovation is the emergence of heterarchical theories of structure. As defined in this Wiki article, heterarchies "are a system of organization where the elements of the organization are unranked (non-hierarchical) or where they possess the potential to be ranked a number of different ways" (https://en.wikipedia.org/wiki/Heterarchy or, for a deeper, more philosophic perspective on heterarchy, see Kontopoulos, The Logics of Social Structure). From an empirical point of view, the analysis and modeling of network structures are most representative of this historical development in the understanding of structure (e.g., Freeman's book The Development of Social Network Analysis). While many network analysts will try and force a hierarchical arrangement on the resulting network, this is more an expression of ingrained and unconscious assumptions than it is a statement about the empirical reality of multiplex network structure in a complex world. This response is suggesting that the arc of the evolution leading to the development of decision trees created new questions or dissatisfaction with existing "state-of-the-art" methods at each step or phase in the process, requiring new solutions and new models. In this case, dissatisfaction can be seen in the limitations of modeling two groups (logistic regression) and in the recognition of a need to widen that framework to more than two groups. 
Dissatisfaction can also be seen with the unrepresentative assumption of an underlying normal distribution (discriminant analysis or AID), as well as in comparison with the relative "freedom" to be found in employing nonparametric, distribution-free assumptions and models (e.g., CHAID and CART). As suggested, the origins of decision trees almost certainly have a long history, one that goes back centuries and is geographically dispersed. Multiple streams in human history, science, philosophy and thought can be traced in outlining the narrative leading up to the development of the many flavors of decision trees extant today. I will be the first to acknowledge the significant limitations of my brief sketch of this history.

/** Addendums **/

This 2014 article in the New Scientist, titled Why do we love to organise knowledge into trees? (https://www.newscientist.com/article/mg22229630-800-why-do-we-love-to-organise-knowledge-into-trees/), is a review of data visualization guru Manuel Lima's book The Book of Trees, which traces the millennia-old use of trees as a visualization and mnemonic aid for knowledge. There seems little question but that the secular and empirical models and graphics inherent in methods such as AID, CHAID and CART represent the continued evolution of this originally religious tradition of classification.

In this video (posted online by Salford Systems, implementers of CART software), A Tribute to Leo Breiman, Breiman talks about the development of his thinking that led to the CART methodology. It all started with a wall plastered with the silhouettes of different WWII-era battleships. 
https://www.salford-systems.com/videos/conferences/cart-founding-fathers/a-tribute-to-leo-breiman?utm_source=linkedin&utm_medium=social&utm_content=3599323

In reading the introduction to Denis Konig's 1936 Theory of Finite and Infinite Graphs, widely viewed as providing the first rigorous, mathematical grounding to a field previously viewed as a source of amusement and puzzles for children, Tutte notes (p. 13) that chapter 4 (beginning on p. 62) of Konig's book is devoted to trees in graph theory. Tutte's explanation of Konig's definition of a tree is "where an 'acyclic' graph is a graph with no circuit, a tree is a finite connected acyclic graph...in other words, in a tree there is one and only one path from a given vertex to another..." To me (and I'm neither a graph theorist nor a mathematician), this suggests that graph theory and its precursors in Poincare's Analysis Situs or Veblen's Cambridge Colloquium lectures on combinatorial topology may have provided the early intellectual and mathematical antecedents for what later became a topic for statisticians.

The first Tree of Knowledge is widely attributed to the neoplatonic philosopher Porphyry who, around 270 CE, wrote an Introduction to Logic that used a metaphorical tree to describe and organize knowledge ... http://www.historyofinformation.com/expanded.php?id=3857

Just discovered an even earlier reference to a Tree of Knowledge in the Book of Genesis in the Bible, discussed in this Wiki article ... https://en.wikipedia.org/wiki/Tree_of_life_(biblical). Genesis probably dates back to 1,400 BCE based on this reference ... https://www.biblica.com/bible/bible-faqs/when-was-the-bible-written/ Regardless, the Book of Genesis came many centuries before Porphyry.
Who invented the decision tree?
The big reference on CART is:

Classification and Regression Trees
Leo Breiman, Jerome Friedman, Charles J. Stone, R.A. Olshen (1984)

but that certainly was not the earliest work on the subject. In his 1986 paper Induction of Decision Trees, Quinlan himself identifies Hunt's Concept Learning System (CLS) as a precursor to ID3. He dates CLS at 1963, but references:

E.B. Hunt, J. Marin, P.J. Stone, Experiments in Induction, Academic Press, New York, 1966

Wei-Yin Loh of the University of Wisconsin has written about the history of decision trees. There is a paper:

Fifty Years of Classification and Regression Trees
Wei-Yin Loh
International Statistical Review (2014), 82, 3, 329-348, doi:10.1111/insr.12016

There is also a slide deck from a talk that he gave on the topic.
Feature selection & model with glmnet on Methylation data (p>>N)
Part 1

In the elastic net two types of constraints on the parameters are employed:

Lasso constraints (i.e. on the size of the absolute values of $\beta_j$)
Ridge constraints (i.e. on the size of the squared values of $\beta_j$)

$\alpha$ controls the relative weighting of the two types; in glmnet's parameterisation the penalty is $\lambda \left[ (1-\alpha) \|\beta\|_2^2 / 2 + \alpha \|\beta\|_1 \right]$. The Lasso constraints allow for the selection/removal of variables in the model. The ridge constraints can cope with collinear variables. Which you put more weight upon will depend on the data properties; lots of correlated variables may need both constraints, while a few correlated variables might suggest more emphasis on the ridge constraints.

One way to solve this is to treat $\alpha$ as a tuning parameter alongside $\lambda$ and use the values that give the lowest CV error, in the same way that you are tuning over $\lambda$ at the moment with cv.glmnet. The R package caret can build models using the glmnet package and should be set up to tune over both parameters $\alpha$ and $\lambda$.

Part 2

Q3: Yes. In this case where $m \gg n$ (number of variables $\gg$ number of observations), the help page for ?glmnet suggests using type.gaussian = "naive". Instead of storing all the inner products computed along the way, which can be inefficient with a large number of variables or when $m \gg n$, the "naive" option will loop over $n$ each time it is required to compute inner products. If you had not specified this argument, glmnet would have chosen "naive" anyway as $m > 500$, but it is better to specify it explicitly in case the defaults and options change later in the package and you are running the code at a future date.

Q4: Short answer: you don't need to specify a high value for nlambda now that you have chosen an optimal value, conditioned on $\alpha = 0.5$. However, if you want to plot the coefficient paths etc. then having a modest set of values of $\lambda$ over the interval results in a much nicer set of paths. The computational burden of computing the entire path, relative to one specific $\lambda$, is not that great; this is the result of a lot of effort to develop algorithms to do this job correctly. I would just leave nlambda at the default, unless it makes an appreciable difference in compute time.

Q5: This is a question about parsimony. lambda.min refers to the value of $\lambda$ at the lowest CV error. The error at this value of $\lambda$ is the average of the errors over the $k$ folds, and hence this estimate of the error is uncertain. lambda.1se represents the value of $\lambda$ in the search that gave a simpler model than the best model (lambda.min), but whose error is within 1 standard error of that of the best model. In other words, using lambda.1se as the selected value for $\lambda$ results in a model that is slightly simpler than the best model but which cannot be distinguished from the best model in terms of error, given the uncertainty in the $k$-fold CV estimate of the error of the best model. The choice is yours:

The best model, which may be too complex or slightly overfitted: lambda.min
The simplest model that has comparable error to the best model given the uncertainty: lambda.1se

Part 3

This is a simple one and is something you'll come across a lot with R. You use the predict() function 99.9% of the time. R will arrange for the use of the correct function for the object supplied as the first argument. More technically, predict is a generic function, which has methods (versions of the function) for objects of different types (technically known as classes). The object created by glmnet has a particular class (or classes) depending on what type of model is actually fitted. glmnet (the package) provides methods for the predict function for these different types of objects. R knows about these methods and will choose the appropriate one based on the class of the object supplied.
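Putting these pieces together, here is a minimal sketch in R of tuning $\alpha$ alongside $\lambda$ by hand with cv.glmnet, then comparing lambda.min with lambda.1se and predicting with the chosen fit. The simulated data, seed, $\alpha$ grid and fold handling are all illustrative assumptions, not part of the original question:

```r
## Tune alpha by comparing cv.glmnet fits on a shared set of CV folds,
## then inspect lambda.min vs lambda.1se and predict with the result.
library(glmnet)

set.seed(1)
n <- 100; m <- 1000                           # m >> n, as in the methylation setting
x <- matrix(rnorm(n * m), n, m)
y <- drop(x[, 1:5] %*% rep(2, 5)) + rnorm(n)  # only 5 informative predictors

foldid <- sample(rep(1:10, length.out = n))   # same folds for every alpha
alphas <- c(0.1, 0.5, 0.9)
fits <- lapply(alphas, function(a)
  cv.glmnet(x, y, alpha = a, foldid = foldid, type.gaussian = "naive"))

## pick the alpha whose fit achieves the lowest minimum CV error
best <- which.min(vapply(fits, function(f) min(f$cvm), numeric(1)))
fit  <- fits[[best]]

fit$lambda.min   # lambda at the lowest CV error (possibly slightly overfitted)
fit$lambda.1se   # simplest lambda within 1 SE of that error

## predict() dispatches to glmnet's method for "cv.glmnet" objects
predict(fit, newx = x[1:5, ], s = "lambda.1se")
```

Reusing the same foldid across the $\alpha$ values makes the CV error curves directly comparable; caret automates exactly this kind of grid search over $(\alpha, \lambda)$.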
What are some interesting and well-written applied statistics papers?
It's a bit difficult for me to see what paper might be of interest to you, so let me try and suggest the following ones, from the psychometric literature:

Borsboom, D. (2006). The attack of the psychometricians. Psychometrika, 71, 425-440.

for setting the scene (Why do we need to use statistical models that better reflect the underlying hypotheses commonly found in psychological research?), and

Borsboom, D. (2008). Psychometric perspectives on diagnostic systems. Journal of Clinical Psychology, 64, 1089-1108.

for an applied perspective on diagnostic medicine (the transition from the yes/no assessment used in the DSM-IV to the "dimensional" approach intended for the DSM-V). A larger review of latent variable models in biomedical research that I like is:

Rabe-Hesketh, S. and Skrondal, A. (2008). Classical latent variable models for medical research. Statistical Methods in Medical Research, 17(1), 5-32.
What are some interesting and well-written applied statistics papers?
Here are five highly-cited papers from the last 40 years of the Journal of the Royal Statistical Society, Series C: Applied Statistics with a clear application in the title that caught my eye while scanning through the Web of Knowledge search results:

Sheila M. Gore, Stuart J. Pocock and Gillian R. Kerr (1984). Regression Models and Non-Proportional Hazards in the Analysis of Breast Cancer Survival. Vol. 33, No. 2, pp. 176-195. (Cited 100 times) (Free PDF)

John Haslett and Adrian E. Raftery (1989). Space-Time Modelling with Long-Memory Dependence: Assessing Ireland's Wind Power Resource. Vol. 38, No. 1, pp. 1-50. (Cited 156 times)

Stuart G. Coles and Jonathan A. Tawn (1994). Statistical Methods for Multivariate Extremes: An Application to Structural Design. Vol. 43, No. 1, pp. 1-48. (Cited 99 times)

Nicholas Lange and Scott L. Zeger (1997). Non-linear Fourier time series analysis for human brain mapping by functional magnetic resonance imaging. Vol. 46, No. 1, pp. 1-29. (Cited 94 times)

James P. Hughes, Peter Guttorp and Stephen P. Charles (1999). A Non-Homogeneous Hidden Markov Model for Precipitation Occurrence. Vol. 48, No. 1, pp. 15-30. (Cited 103 times)
What are some interesting and well-written applied statistics papers?
On a wider level I would recommend the ["Statistical Modeling: The Two Cultures"][1] paper by Leo Breiman in 2001 (cited 515). I know it was covered by the journal club recently and I found it to be really interesting. I've c&p'd the abstract.

Abstract. There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. Algorithmic modeling, both in theory and practice, has developed rapidly in fields outside statistics. It can be used both on large complex data sets and as a more accurate and informative alternative to data modeling on smaller data sets. If our goal as a field is to use data to solve problems, then we need to move away from exclusive dependence on data models and adopt a more diverse set of tools.

[1]: https://doi.org/10.1214/ss/1009213726 (open access)
What are some interesting and well-written applied statistics papers?
From a genetic epidemiology perspective, I would now recommend the following series of papers about genome-wide association studies:

Cordell, H.J. and Clayton, D.G. (2005). Genetic association studies. Lancet 366, 1121-1131.

Cantor, R.M., Lange, K., and Sinsheimer, J.S. (2010). Prioritizing GWAS results: A review of statistical methods and recommendations for their application. The American Journal of Human Genetics 86, 6-22.

Ioannidis, J.P.A., Thomas, G., Daly, M.J. (2009). Validating, augmenting and refining genome-wide association signals. Nature Reviews Genetics 10, 318-329.

Balding, D.J. (2006). A tutorial on statistical methods for population association studies. Nature Reviews Genetics 7, 781-791.

Green, A.E. et al. (2008). Using genetic data in cognitive neuroscience: from growing pains to genuine insights. Nature Reviews Neuroscience 9, 710-720.

McCarthy, M.I. et al. (2008). Genome-wide association studies for complex traits: consensus, uncertainty and challenges. Nature Reviews Genetics 9, 356-369.

Psychiatric GWAS Consortium Coordinating Committee (2009). Genomewide Association Studies: History, Rationale, and Prospects for Psychiatric Disorders. American Journal of Psychiatry 166(5), 540-556.

Sebastiani, P. et al. (2009). Genome-wide association studies and the genetic dissection of complex traits. American Journal of Hematology 84(8), 504-515.

The Wellcome Trust Case Control Consortium (2007). Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature 447, 661-678.

The Wellcome Trust Case Control Consortium (2010). Genome-wide association study of CNVs in 16,000 cases of eight common diseases and 3,000 shared controls. Nature 464, 713-720.
8,860
What are some interesting and well-written applied statistics papers?
Jim Berger's review articles: http://www.stat.duke.edu/~berger/papers.html You might start with Could Fisher, Jeffreys and Neyman have agreed upon testing?
8,861
What are some interesting and well-written applied statistics papers?
An article with early impact regarding statistical bioinformatics research: Jelizarow et al. Over-optimism in bioinformatics: an illustration. Bioinformatics, 2010 It makes for an interesting discussion on bias sources, overfitting, and fishing for significance.
8,862
In layman's terms, what is the difference between a model and a distribution?
A probability distribution is a mathematical function that describes a random variable. A little more precisely, it is a function that assigns probabilities to numbers, and its output has to agree with the axioms of probability. A statistical model is an abstract, idealized description of some phenomenon in mathematical terms, built using probability distributions. Quoting Wasserman (2013): A statistical model $\mathfrak{F}$ is a set of distributions (or densities or regression functions). A parametric model is a set $\mathfrak{F}$ that can be parameterized by a finite number of parameters. [...] In general, a parametric model takes the form $$ \mathfrak{F} = \{ f (x; \theta) : \theta \in \Theta \} $$ where $\theta$ is an unknown parameter (or vector of parameters) that can take values in the parameter space $\Theta$. If $\theta$ is a vector but we are only interested in one component of $\theta$, we call the remaining parameters nuisance parameters. A nonparametric model is a set $\mathfrak{F}$ that cannot be parameterized by a finite number of parameters. In many cases we use distributions as models (you can check this example). You can use the binomial distribution as a model for the count of heads in a series of coin throws. In such a case we assume that this distribution describes, in a simplified way, the actual outcomes. This does not mean that it is the only way to describe such a phenomenon, nor that the binomial distribution can be used only for this purpose. A model can use one or more distributions, and Bayesian models also specify prior distributions. More formally, this is discussed by McCullagh (2002): According to currently accepted theories [Cox and Hinkley (1974), Chapter 1; Lehmann (1983), Chapter 1; Barndorff-Nielsen and Cox (1994), Section 1.1; Bernardo and Smith (1994), Chapter 4] a statistical model is a set of probability distributions on the sample space $\mathcal{S}$. 
A parameterized statistical model is a parameter set $\Theta$ together with a function $P : \Theta \rightarrow \mathcal{P} (\mathcal{S})$, which assigns to each parameter point $\theta \in \Theta$ a probability distribution $P_\theta$ on $\mathcal{S}$. Here $\mathcal{P}(\mathcal{S})$ is the set of all probability distributions on $\mathcal{S}$. In much of the following, it is important to distinguish between the model as a function $ P : \Theta \rightarrow \mathcal{P} (\mathcal{S}) $, and the associated set of distributions $P_\Theta \subset \mathcal{P} (\mathcal{S})$. So statistical models describe data in terms of probability distributions, and parametric models are additionally described in terms of a finite set of parameters. This does not mean that all statistical methods need distributional assumptions to be useful. For example, linear regression is often described in terms of a normality assumption, but in fact it is pretty robust to departures from normality: the assumption of normally distributed errors is needed for confidence intervals and hypothesis tests, not for the fit itself. So for regression to "work" we don't need such an assumption, but to have a fully specified statistical model we need to describe it in terms of random variables, and so we need probability distributions. I write about this because you can often hear people saying that they used a regression model for their data -- in most such cases they mean that they described the data in terms of a linear relation between target values and predictors using some parameters, rather than insisting on conditional normality. McCullagh, P. (2002). What is a statistical model? Annals of Statistics, 1225-1267. Wasserman, L. (2013). All of Statistics: A Concise Course in Statistical Inference. Springer.
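To make the coin-throwing example concrete, here is a small sketch (in Python, standard library only; all numbers are invented for illustration, not taken from the answer). It treats the binomial family indexed by $p$ as the model and picks the member that best fits simulated data by maximum likelihood:

```python
import random
import math

random.seed(0)

# 1000 independent throws of a fair coin; the data-generating p is 0.5.
n, p_true = 1000, 0.5
throws = [1 if random.random() < p_true else 0 for _ in range(n)]
k = sum(throws)

# Model: the binomial family {Bin(n, p) : 0 < p < 1}. The maximum-
# likelihood estimate of p is the observed proportion of heads.
p_hat = k / n

def log_likelihood(p):
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Numerical check: over a grid of candidate values of p, the
# log-likelihood peaks at the grid point nearest to p_hat.
grid = [i / 100 for i in range(1, 100)]
best = max(grid, key=log_likelihood)
print(p_hat, best)
```

For the binomial, the maximum-likelihood estimate is just the observed proportion of heads, which the grid search over candidate distributions confirms; choosing among the members of the family is what distinguishes fitting a model from merely writing down a distribution.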
8,863
In layman's terms, what is the difference between a model and a distribution?
Think of $\mathcal{S}$ as a set of tickets. You can write stuff on a ticket. Usually a ticket starts out with the name of some real-world person or object that it "represents" or "models." There's lots of blank space on each ticket for writing other things. You can make as many copies of each ticket as you want. A probability model $\mathbb{P}$ for this real-world population or process consists of making one or more copies of every ticket, mixing them up, and putting them in a box. If you--the analyst--can establish that the process of drawing one ticket randomly from this box emulates all the important behavior of what you are studying, then you can learn much about the world by thinking about this box. Because some tickets may be more numerous in the box than others, they may have different chances of being drawn. Probability theory studies these chances. When numbers are written on the tickets (in a consistent way), they give rise to (probability) distributions. A probability distribution merely describes the proportion of tickets in a box whose numbers lie within any given interval. Because we usually don't know exactly how the world behaves, we have to imagine different boxes in which the tickets appear with different relative frequencies. The set of these boxes is $\mathcal{P}$. We view the world as being adequately described by the behavior of one of the boxes in $\mathcal{P}$. It is your objective to make reasonable guesses as to which box it is, based on what you see on the tickets you have pulled out of it. As an example (which is practical and realistic, not a textbook toy), suppose you are studying the rate $y$ of a chemical reaction as it varies with temperature. Suppose that the theory of chemistry predicts that within the range of temperatures between $0$ and $100$ degrees, the rate is proportional to the temperature. You plan to study this reaction at both $0$ and $100$ degrees, making several observations at each temperature. 
You therefore make up a very, very large number of boxes. You are going to fill each box with tickets. There is a rate constant written on each one. All the tickets in any given box have the same rate constant written on them. Different boxes use different rate constants. Using the rate constant written on any ticket, you also write down the rate at $0$ and the rate at $100$ degrees: call these $y_0$ and $y_{100}$. But this is not yet enough for a good model. Chemists also know that no substance is pure, no quantity is exactly measured, and other forms of observational variability occur. To model these "errors," you make very, very many copies of your tickets. On each copy you change the values of $y_0$ and $y_{100}$. On most of them you change them only a little. On a very few, you might change them a lot. You write down as many changed values as you plan to observe at each temperature. These observations represent possible observable outcomes of your experiment. Into the box goes each such set of tickets: it is a probability model for what you might observe for a given rate constant. What you do observe is modeled by drawing a ticket from that box and reading only the observations written there. You don't get to see the underlying (true) values of $y_0$ or $y_{100}$. You don't get to read the (true) rate constant. Those aren't afforded by your experiment. Every statistical model must make some assumptions about the tickets in these (hypothetical) boxes. For instance, we hope that when you modified the values of the $y_0$ and $y_{100}$, you did so without consistently increasing or consistently decreasing either one (as a whole, within the box): that would be a form of systematic bias. Because the observations written on each ticket are numbers, they give rise to probability distributions. 
The assumptions made about the boxes typically are phrased in terms of properties of those distributions, such as whether they must average out to zero, be symmetric, have a "bell curve" shape, are uncorrelated, or whatever. That's really all there is to it. Much in the way that a primitive twelve-tone scale gave rise to all of Western classical music, a collection of ticket-containing boxes is a simple concept that can be used in extremely rich and complex ways. It can model just about anything, ranging from a coin flip to a library of videos, databases of Website interactions, quantum mechanical ensembles, and anything else that can be observed and recorded.
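A minimal simulation of one of these boxes might look like the following Python sketch (the rate constant, noise level, and sample sizes are invented for illustration):

```python
import random

random.seed(1)

# One hypothetical "box": a rate constant, plus noisy observations of the
# reaction rate at 0 and 100 degrees (rate proportional to temperature).
k_true = 0.03
temps = [0.0] * 5 + [100.0] * 5                # five observations per temperature
observed = [k_true * t + random.gauss(0.0, 0.1) for t in temps]

# Guessing which box the data came from: a least-squares estimate of the
# rate constant (regression through the origin, since rate = k * temp).
num = sum(t * y for t, y in zip(temps, observed))
den = sum(t * t for t in temps)
k_hat = num / den
print(k_hat)
```

Drawing a "ticket" corresponds to one run of the simulation; the analyst only ever sees `observed`, never `k_true`, and must infer which box (which rate constant) best explains what was drawn.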
8,864
In layman's terms, what is the difference between a model and a distribution?
The definition of a distribution as assigning probabilities to each possible event works for discrete distributions, but becomes trickier for continuous distributions, where e.g. any number on the real line could be the outcome. Very often when talking about distributions, we think of them as having fixed parameters, such as a binomial distribution having two parameters: firstly, the number of observations and secondly a probability $\pi$ of a single observation being an event. Typical parametric statistical models describe how the parameter(s) of a distribution depend on certain things such as factors (a variable that has discrete values) and covariates (continuous variables). For example, if in a normal distribution you assume that the mean can be described by some fixed number (an "intercept") and some number (a "regression coefficient") times the value of a covariate, you obtain a linear regression model with a normally distributed error term. For a binomial distribution, one commonly used model ("logistic regression") is to assume that the logit of the probability $\pi$ of an event, $\log(\pi/(1-\pi))$, can be described by a regression equation such as $\text{intercept}+\beta_1 \text{covariate}_1+\ldots$. Similarly, for a Poisson distribution a common model is to assume this for the logarithm of the rate parameter ("Poisson regression").
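To illustrate the logit-link idea, here is a bare-bones sketch in Python (standard library only; the coefficients and sample size are arbitrary) that simulates data from such a logistic-regression model and recovers the intercept and slope by Newton-Raphson:

```python
import random
import math

random.seed(2)

# Simulate data where the parameter pi of a Bernoulli distribution
# depends on a covariate x through the logit link:
#   log(pi / (1 - pi)) = a + b * x
a_true, b_true = -1.0, 2.0
n = 5000
xs = [random.uniform(-2, 2) for _ in range(n)]
ys = [1 if random.random() < 1 / (1 + math.exp(-(a_true + b_true * x))) else 0
      for x in xs]

# Bare-bones logistic regression: Newton-Raphson on the log-likelihood.
a, b = 0.0, 0.0
for _ in range(25):
    ga = gb = 0.0             # gradient of the log-likelihood
    haa = hab = hbb = 0.0     # negative Hessian (Fisher information)
    for x, y in zip(xs, ys):
        p = 1 / (1 + math.exp(-(a + b * x)))
        w = p * (1 - p)
        ga += y - p
        gb += (y - p) * x
        haa += w
        hab += w * x
        hbb += w * x * x
    det = haa * hbb - hab * hab      # solve the 2x2 Newton system by hand
    a += (hbb * ga - hab * gb) / det
    b += (haa * gb - hab * ga) / det

print(a, b)   # roughly -1.0 and 2.0
```

The point of the sketch is exactly the one made above: the data are still Bernoulli-distributed, but the distribution's parameter $\pi$ is no longer fixed; the model describes how it varies with the covariate.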
8,865
In layman's terms, what is the difference between a model and a distribution?
A probability distribution gives all the information about how a random quantity fluctuates. In practice we usually do not have the full probability distribution of our quantity of interest. We may know or assume something about it without knowing or assuming that we know everything about it. For example, we might assume that some quantity is normally distributed but know nothing about the mean and variance. Then we have a collection of candidates for the distribution to choose from; in our example, it is all possible normal distributions. This collection of distributions forms a statistical model. We use it by gathering data and then restricting our class of candidates so that all the remaining candidates are consistent with the data in some appropriate sense.
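As a sketch of this "restricting the candidates" step (Python, with invented parameters): if the model is the family of all normal distributions, maximum likelihood singles out the candidate whose mean and variance match the sample:

```python
import random

random.seed(3)

# Model: the family of ALL normal distributions N(mu, sigma^2).
# The data narrow down the candidates; maximum likelihood selects the
# sample mean and the (biased) sample variance.
data = [random.gauss(10.0, 2.0) for _ in range(2000)]
mu_hat = sum(data) / len(data)
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / len(data)
print(mu_hat, sigma2_hat)
```

Other criteria (confidence sets, Bayesian posteriors) restrict the candidate class differently, but the structure is the same: start with a family of distributions, then use data to narrow it down.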
8,866
In layman's terms, what is the difference between a model and a distribution?
You ask a very important question, Alan, and have received some fine answers above. I would like to offer a simpler answer, and also indicate an additional dimension to the distinction that the above answers have not addressed. For simplicity, everything I'll say here relates to parametric statistical models. First of all, you may find the idea of a family helpful for connecting your question with things you've learned in high school. (I am surprised that this word has not yet appeared on this page!) You long ago learned about the quadratic family of curves, $y = a x^2 + b x + c$. You can think of a parametric statistical model in the same way, as a family of distributions. You have probably done lab experiments in chemistry or physics classes, where you collected data and plotted them in order to identify parameters from a simple family of models like $y = m x + b$ or $F = -k x$. At the highest level, estimating the parameters of a statistical model very much resembles the process of finding the slope $m$ and intercept $b$, or finding the spring constant $k$. As you continue to study mathematics, you will see 'families' of various sorts of entities pop up everywhere. So, my brief Answer #1 to your question is: a statistical model is a family of distributions. The further point I wanted to make relates to the qualifier, statistical. As Judea Pearl points out in his "golden rule of causal analysis" [1,p350], No causal claim can be established by a purely statistical method, be it propensity scores, regression, stratification, or any other distribution-based design. (For present purposes, I would invite you to read "statistical" in place of "distribution-based," and "model" in place of "design.") What Pearl is keen to convey is that our models of causal effects in the world (think $F=-kx$, for example!) necessarily embody more than purely statistical ideas. 
Thus, taking your question as titled---i.e., without the qualification statistical attached to model---a full answer requires the further admonition that models generally incorporate causal ideas that lie inherently outside the province of statistics, i.e. of statements about probability distributions. Thus, my Answer #2 to your question is: models usually embody causal ideas that cannot be expressed in purely distributional terms. [1]: Pearl, Judea. Causality: Models, Reasoning and Inference. 2nd edition. Cambridge, U.K. ; New York: Cambridge University Press, 2009. Link to §11.3.5, including cited p. 351.
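In the spirit of the school-lab analogy in Answer #1, here is a short Python sketch (the true slope, intercept, and noise level are made up) that identifies a member of the family $y = m x + b$ from noisy data by ordinary least squares:

```python
import random

random.seed(4)

# The family y = m*x + b: collect noisy measurements and identify
# which member of the family fits best (ordinary least squares).
m_true, b_true = 1.5, -0.5
xs = [i * 0.2 for i in range(51)]                # x = 0.0, 0.2, ..., 10.0
ys = [m_true * x + b_true + random.gauss(0, 0.2) for x in xs]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
sxx = sum((x - x_bar) ** 2 for x in xs)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
m_hat = sxy / sxx                                # least-squares slope
b_hat = y_bar - m_hat * x_bar                    # least-squares intercept
print(m_hat, b_hat)
```

Estimating the parameters of a statistical model is this same exercise of picking one member out of a family, with the added ingredient that the family members are probability distributions rather than bare curves.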
8,867
In layman's terms, what is the difference between a model and a distribution?
A model is specified by a PDF, but it is not a PDF. A probability distribution (PDF) is a function that assigns probabilities to numbers, and its output has to agree with the axioms of probability, as Tim explained. A model is fully defined by a probability distribution, but it is more than that. In the coin tossing example, our model could be "the coin is fair" + "each throw is independent". This model is specified by a PDF that is binomial with p=0.5. However, one could imagine a model where the throws are not independent, in which case it is no longer described by the binomial PDF. Still, the model is specified by the joint distribution (a PDF) of all events $P(x_1, x_2, x_3, ...)$. The point being: formally, a model is always specified by the joint distribution over events. One distinction between the model and the PDF is that a model can be interpreted as a statistical hypothesis. For example, in coin tossing, we can consider the model where the coin is fair (p=0.5) and each throw is independent (binomial), and say that this is our hypothesis, which we want to test against a competing hypothesis. You can also have competing models (e.g. we don't know $p$ and we want to compute which $p$ is the best fit). It does not make sense to speak of competing PDFs, because each is just a mathematical object.
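The last point, competing models for an unknown $p$, can be made concrete with a small numerical sketch (in Python here; the observed data and the candidate values of $p$ are invented for illustration):

```python
from math import comb

def binomial_likelihood(p, heads, n):
    """Likelihood of observing `heads` heads in n independent tosses
    under the model 'each toss is independent with P(heads) = p'."""
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

# Observed data: 7 heads in 10 tosses.
heads, n = 7, 10

# Competing models: the fair-coin hypothesis vs. two biased coins.
candidates = [0.3, 0.5, 0.7]
likelihoods = {p: binomial_likelihood(p, heads, n) for p in candidates}

# The model p = 0.7 fits these data best (it equals the observed frequency).
best_p = max(likelihoods, key=likelihoods.get)
print(best_p)   # 0.7
```

Each candidate $p$ defines a different model (hence a different PDF for the data), and the data let us compare the models; the PDFs themselves are just the mathematical objects each model is specified by.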
8,868
A statistical approach to determine if data are missing at random
I found the information I was talking about in my comment. From van Buuren's book, page 31, he writes: "Several tests have been proposed to test MCAR versus MAR. These tests are not widely used, and their practical value is unclear. See Enders (2010, pp. 17–21) for an evaluation of two procedures. It is not possible to test MAR versus MNAR since the information that is needed for such a test is missing."
8,869
A statistical approach to determine if data are missing at random
This is not possible, unless you manage to retrieve the missing data. You cannot determine from the observed data whether the missing data are missing at random (MAR) or not at random (MNAR). You can only tell whether the data are clearly not missing completely at random (MCAR). Beyond that, you can only appeal to the plausibility of MCAR or MAR, as opposed to MNAR, based on what you know (e.g. reported reasons for why data are missing). Alternatively, you might be able to argue that it does not matter too much, because the proportion of missing data is small and, under MNAR, very extreme scenarios would have to happen for your results to be overturned (see "tipping point analysis").
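A minimal sketch of what a tipping point analysis can look like, with invented toy numbers (the threshold, sample sizes, and fill values are my own assumptions, not from the answer): sweep a common value assigned to all missing observations from an extreme scenario upward and record where the conclusion flips.

```python
# Toy setup: 95 observed values and 5 missing ones; the conclusion under
# scrutiny is "the overall mean exceeds 9".
observed = [10.0] * 95
n_missing = 5
threshold = 9.0

def overall_mean(fill):
    """Overall mean if every missing value were equal to `fill`."""
    return (sum(observed) + n_missing * fill) / (len(observed) + n_missing)

# Sweep from a very pessimistic fill value upward; `tipping` is the first
# fill value at which the conclusion holds.
tipping = None
fill = -30.0
while fill <= 10.0:
    if overall_mean(fill) > threshold:
        tipping = fill
        break
    fill += 0.5

print(tipping)   # -9.5
```

With only 5 of 100 values missing, the missing values would all have to average below about -10 to overturn the conclusion; if that is implausible given the observed range of the data, the result is robust even under MNAR.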
8,870
A statistical approach to determine if data are missing at random
A method I use is a shadow matrix, in which the dataset consists of indicator variables where a 1 is given if a value is missing, and 0 if it is present. Correlating these with each other and with the original data can help determine if variables tend to be missing together (MAR) or not (MCAR). Using R for an example (borrowing from the book "R in Action" by Robert Kabacoff):

# Load dataset
data(sleep, package = "VIM")
# Elements of x are 1 if a value in the sleep data is missing and 0 if non-missing.
x <- as.data.frame(abs(is.na(sleep)))
head(sleep)
head(x)
# Extract the variables that have some missing values.
y <- x[which(sapply(x, sd) > 0)]
cor(y)
# We see that the variables Dream and NonD tend to be missing together. To a lesser
# extent, this is also true of Sleep and NonD, as well as Sleep and Dream.
# Now look at the relationship between the presence of missing values in each
# variable and the observed values of the other variables:
cor(sleep, y, use = "pairwise.complete.obs")
# NonD is more likely to be missing as Exp, BodyWgt, and Gest increase, suggesting
# that the missingness of NonD is likely MAR rather than MCAR.
8,871
A statistical approach to determine if data are missing at random
This sounds quite doable from a classification standpoint. You want to classify missing versus non-missing data using all the other features. If you get results significantly better than random, then your data aren't missing completely at random.
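A hedged sketch of the idea, using a simulated dataset and a deliberately trivial threshold classifier (a real analysis would use a proper classifier and cross-validation):

```python
import random

random.seed(0)

# Simulate MAR-style missingness: y is missing far more often when x is large.
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
missing = [random.random() < (0.8 if xi > 0 else 0.1) for xi in x]

# A trivially simple "classifier": predict 'missing' whenever x > 0.
predictions = [xi > 0 for xi in x]
accuracy = sum(p == m for p, m in zip(predictions, missing)) / n

# Under MCAR this classifier would hover around chance level; here the
# accuracy is clearly above it, so MCAR is implausible for these data.
print(accuracy)
```

Note that success here only rules out MCAR; it cannot distinguish MAR from MNAR, since MNAR depends on values that were never observed.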
8,872
A statistical approach to determine if data are missing at random
You want to know whether there is some correlation between a value being missing in one feature and the values of any of the other features. For each of the features, create a new feature indicating whether the value is missing or not (let's call these "is_missing" features). Compute your favourite correlation measure (I suggest mutual information here) between the is_missing features and the rest of the features. Note that if you don't find any correlation between two features, it is still possible to have a correlation due to a group of features (a value might be missing as a function of the XOR of ten other features). If you have a large set of features and a large number of values, you will get false correlations due to randomness. Other than the regular ways of coping with that (validation set, high enough threshold), you can check whether the correlations are symmetric and transitive. If they are, it is likely that they are true, and you should check them further.
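As a sketch of the suggested measure: mutual information between an is_missing indicator and another discrete feature can be computed directly from counts (the toy data below are my own invention):

```python
from math import log2
from collections import Counter

def mutual_information(a, b):
    """Mutual information (in bits) between two discrete sequences."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum(c / n * log2((c / n) / (pa[x] / n * pb[y] / n))
               for (x, y), c in pab.items())

# Toy example: the value is missing exactly when the other feature equals 1,
# so the two sequences are perfectly dependent.
feature    = [0, 0, 1, 1, 0, 1, 0, 1]
is_missing = [0, 0, 1, 1, 0, 1, 0, 1]
print(mutual_information(feature, is_missing))          # 1.0 bit

# An independent pairing gives zero mutual information.
print(mutual_information(feature, [0, 0, 0, 0, 1, 1, 1, 1]))   # 0.0
```

A near-zero value suggests no marginal dependence, while a clearly positive value is evidence against MCAR; as noted above, pairwise measures can still miss dependence on groups of features.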
8,873
A statistical approach to determine if data are missing at random
There is a useful package called finalfit (check here); it has a missing_pairs(outcome VAR, explanatory Vars) function where you can explore patterns of missingness and decide whether the data are MCAR or MAR. It produces pairs plots to show relationships between missing values and observed values in all variables.
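For readers not using R, the underlying idea of inspecting missingness patterns can be sketched in plain Python (this is only a rough analogue of the kind of summary such plots give, not the finalfit API; the data are invented):

```python
from collections import Counter

# Toy dataset with None marking missing values.
rows = [
    (1.0, 2.0, None),
    (1.5, None, None),
    (2.0, 2.5, 3.0),
    (None, 2.2, 3.1),
    (1.1, None, None),
]

# Reduce each row to its missingness pattern: 1 = observed, 0 = missing.
patterns = Counter(tuple(int(v is not None) for v in row) for row in rows)
for pattern, count in sorted(patterns.items(), reverse=True):
    print(pattern, count)
```

Rows sharing a pattern (e.g. the second and third variables missing together) show up as repeated tuples, which is the kind of structure worth investigating before deciding between MCAR and MAR.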
8,874
What does a non positive definite covariance matrix tell me about my data?
The covariance matrix is not positive definite because it is singular. That means that at least one of your variables can be expressed as a linear combination of the others. You do not need all the variables, as the value of at least one can be determined from a subset of the others. I would suggest adding variables sequentially and checking the covariance matrix at each step. If a new variable creates a singularity, drop it and go on to the next one. Eventually you should have a subset of variables with a positive definite covariance matrix.
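A minimal sketch of the suggested procedure (pure Python, with invented toy columns): add variables one at a time and drop any that makes the covariance matrix singular, testing positive definiteness with a Cholesky factorization that uses a relative pivot tolerance.

```python
def cov_matrix(cols):
    """Sample covariance matrix of a list of equal-length columns."""
    n = len(cols[0])
    means = [sum(c) / n for c in cols]
    return [[sum((x - mx) * (y - my) for x, y in zip(cx, cy)) / (n - 1)
             for cy, my in zip(cols, means)]
            for cx, mx in zip(cols, means)]

def is_positive_definite(m, tol=1e-8):
    """Cholesky with a relative pivot tolerance: it fails exactly when the
    matrix is (numerically) singular or indefinite."""
    k = len(m)
    L = [[0.0] * k for _ in range(k)]
    scale = max(m[i][i] for i in range(k))
    for i in range(k):
        for j in range(i + 1):
            s = sum(L[i][p] * L[j][p] for p in range(j))
            if i == j:
                pivot = m[i][i] - s
                if pivot <= tol * scale:
                    return False
                L[i][i] = pivot ** 0.5
            else:
                L[i][j] = (m[i][j] - s) / L[j][j]
    return True

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [2.0, 1.0, 4.0, 3.0, 6.0]
c = [1.0, 3.0, 2.0, 5.0, 4.0]
d = [ai + 2 * bi for ai, bi in zip(a, b)]   # exact linear combination of a and b

kept = []
for name, col in [("a", a), ("b", b), ("c", c), ("d", d)]:
    if is_positive_definite(cov_matrix(kept + [col])):
        kept.append(col)
    else:
        print("dropping", name)   # only d creates a singularity

print(len(kept))   # 3
```

Only the redundant column d is rejected, and the covariance matrix of the remaining variables is positive definite.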
What does a non positive definite covariance matrix tell me about my data?
The covariance matrix is not positive definite because it is singular. That means that at least one of your variables can be expressed as a linear combination of the others. You do not need all the
What does a non positive definite covariance matrix tell me about my data? The covariance matrix is not positive definite because it is singular. That means that at least one of your variables can be expressed as a linear combination of the others. You do not need all the variables as the value of at least one can be determined from a subset of the others. I would suggest adding variables sequentially and checking the covariance matrix at each step. If a new variable creates a singularity drop it and go on the the next one. Eventually you should have a subset of variables with a postive definite covariance matrix.
What does a non positive definite covariance matrix tell me about my data? The covariance matrix is not positive definite because it is singular. That means that at least one of your variables can be expressed as a linear combination of the others. You do not need all the
8,875
What does a non positive definite covariance matrix tell me about my data?
One point that I don't think is addressed above is that it IS possible to calculate a non-positive definite covariance matrix from empirical data even if your variables are not perfectly linearly related. If you don't have sufficient data (particularly if you are trying to construct a high-dimensional covariance matrix from a bunch of pairwise comparisons) or if your data don't follow a multivariate normal distribution, then you can end up with paradoxical relationships among variables, such as cov(A,B)>0 and cov(A,C)>0 together with cov(B,C)<0. When the first two covariances are strong enough, no multivariate normal distribution can have such a combination: large positive correlations of A with both B and C force the correlation between B and C to be positive as well, so no valid (positive semidefinite) covariance matrix meets these criteria. All this is to say, a non-positive definite matrix does not always mean that you are including collinear variables. It could also suggest that you are trying to model a relationship which is impossible given the parametric structure that you have chosen.
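A small numerical illustration (the numbers are mine): a 3x3 "correlation matrix" assembled from pairwise estimates with strongly positive corr(A,B) and corr(A,C) but negative corr(B,C) has a negative determinant, so it cannot be a valid correlation matrix.

```python
# Pairwise estimates that no multivariate normal distribution can share.
R = [[1.0, 0.9, 0.9],
     [0.9, 1.0, -0.2],
     [0.9, -0.2, 1.0]]

# For a symmetric 3x3 matrix, a negative determinant already implies an odd
# number of negative eigenvalues, i.e. the matrix is not positive semidefinite.
det = (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
       - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
       + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))
print(det)   # about -0.984: negative, so not a valid correlation matrix
```

Such matrices can easily arise when each pairwise correlation is estimated from a different subset of the data.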
What does a non positive definite covariance matrix tell me about my data?
One point that I don't think is addressed above is that it IS possible to calculate a non-positive definite covariance matrix from empirical data even if your variables are not perfectly linearly rela
What does a non positive definite covariance matrix tell me about my data? One point that I don't think is addressed above is that it IS possible to calculate a non-positive definite covariance matrix from empirical data even if your variables are not perfectly linearly related. If you don't have sufficient data (particularly if you are trying to construct a high-dimensional covariance matrix from a bunch of pairwise comparisons) or if your data don't follow a multivariate normal distribution, then you can end up with paradoxical relationships among variables, such as cov(A,B)>0; cov(A,C)>0; cov(B,C)<0. In such a case, one cannot fit a multivariate normal PDF, as there is no multivariate normal distribution that meets these criteria - cov(A,B)>0 and cov(A,C)>0 necessarily implies that cov(B,C)>0. All this is to say, a non-positive definite matrix does not always mean that you are including collinear variables. It could also suggest that you are trying to model a relationship which is impossible given the parametric structure that you have chosen.
What does a non positive definite covariance matrix tell me about my data? One point that I don't think is addressed above is that it IS possible to calculate a non-positive definite covariance matrix from empirical data even if your variables are not perfectly linearly rela
8,876
What is the intuition behind defining completeness in a statistic as being impossible to form an unbiased estimator of $0$ from it?
I will try to add to the other answer. First, completeness is a technical condition which is justified mainly by the theorems that use it. So let us start with some related concepts and theorems where they occur. Let $X=(X_1,X_2,\dotsc,X_n)$ represent a vector of iid data, which we model as having a distribution $f(x;\theta), \theta \in \Theta$ where the parameter $\theta$ governing the data is unknown.

$T=T(X)$ is sufficient if the conditional distribution of $X \mid T$ does not depend on the parameter $\theta$.
$V=V(X)$ is ancillary if the distribution of $V$ does not depend on $\theta$ (within the family $f(x;\theta)$).
$U=U(X)$ is an unbiased estimator of zero if its expectation is zero, irrespective of $\theta$.
$S=S(X)$ is a complete statistic if any unbiased estimator of zero based on $S$ is identically zero, that is, if $\DeclareMathOperator{\E}{\mathbb{E}} \E g(S)=0 (\text{for all $\theta$})$ then $g(S)=0$ a.e. (for all $\theta$).

Now, suppose you have two different unbiased estimators of $\theta$ based on the sufficient statistic $T$, $g_1(T), g_2(T)$. That is, in symbols $$ \E g_1(T)=\theta ,\\ \E g_2(T)=\theta $$ and $\DeclareMathOperator{\P}{\mathbb{P}} \P(g_1(T) \not= g_2(T) ) > 0$ (for all $\theta$). Then $g_1(T)-g_2(T)$ is an unbiased estimator of zero, which is not identically zero, proving that $T$ is not complete. So, completeness of a sufficient statistic $T$ gives us that there exists a unique unbiased estimator of $\theta$ based on $T$. That is already very close to the Lehmann–Scheffé theorem. Let us look at some examples. Suppose $X_1, \dotsc, X_n$ now are iid uniform on the interval $(\theta, \theta+1)$.
We can show that ($X_{(1)} < X_{(2)} < \dotsm < X_{(n)}$ are the order statistics) the pair $(X_{(1)}, X_{(n)})$ is sufficient, but it is not complete, because the difference $X_{(n)}-X_{(1)}$ is ancillary; we can compute its expectation, let it be $c$ (which is a function of $n$ only), and then $X_{(n)}-X_{(1)} -c$ will be an unbiased estimator of zero which is not identically zero. So our sufficient statistic, in this case, is sufficient but not complete. And we can see what that means: there exist functions of the sufficient statistic which are not informative about $\theta$ (in the context of the model). This cannot happen with a complete sufficient statistic; it is in a sense maximally informative, in that no functions of it are uninformative. On the other hand, if there is some function of the minimally sufficient statistic that has expectation zero, that could be seen as a noise term; disturbance/noise terms in models have expectation zero. So we could say that non-complete sufficient statistics do contain some noise. Look again at the range $R=X_{(n)}-X_{(1)}$ in this example. Since its distribution does not depend on $\theta$, it doesn't by itself contain any information about $\theta$. But, together with the sufficient statistic, it does! How? Look at the case where $R=1$ is observed. Then, in the context of our (known to be true) model, we have perfect knowledge of $\theta$! Namely, we can say with certainty that $\theta = X_{(1)}$. You can check that any other value for $\theta$ then leads to either $X_{(1)}$ or $X_{(n)}$ being an impossible observation, under the assumed model. On the other hand, if we observe $R=0.1$, then the range of possible values for $\theta$ is rather large (exercise ...). In this sense, the ancillary statistic $R$ does contain some information about the precision with which we can estimate $\theta$ based on this data and model.
In this example, and others, the ancillary statistic $R$ "takes over the role of the sample size". Usually, confidence intervals and such need the sample size $n$, but in this example, we can make a conditional confidence interval that is computed using only $R$, not $n$ (exercise). This was an idea of Fisher: that inference should be conditional on some ancillary statistic. Now, Basu's theorem: if $T$ is complete sufficient, then it is independent of any ancillary statistic. That is, inference based on a complete sufficient statistic is simpler, in that we do not need to consider conditional inference. Conditioning on a statistic which is independent of $T$ does not change anything, of course. Then, a last example to give some more intuition. Change our uniform distribution example to a uniform distribution on the interval $(\theta_1, \theta_2)$ (with $\theta_1<\theta_2$). In this case the statistic $(X_{(1)}, X_{(n)})$ is complete and sufficient. What changed? We can see that completeness is really a property of the model. In the former case, we had a restricted parameter space. This restriction destroyed completeness by introducing relationships on the order statistics. By removing this restriction, we got completeness! So, in a sense, lack of completeness means that the parameter space is not big enough, and by enlarging it we can hope to restore completeness (and thus, easier inference). For some other examples where lack of completeness is caused by restrictions on the parameter space, see my answer to: What kind of information is Fisher information? Let $X_1, \dotsc, X_n$ be iid $\mathcal{Cauchy}(\theta,\sigma)$ (a location-scale model). Then the order statistics are sufficient but not complete. But now enlarge this model to a fully nonparametric model, still iid but from some completely unspecified distribution $F$. Then the order statistics are sufficient and complete.
For exponential families with canonical parameter space (that is, as large as possible) the minimal sufficient statistic is also complete. But in many cases, introducing restrictions on the parameter space, as with curved exponential families, destroys completeness. A very relevant paper is Lehmann (1981), J. Am. Stat. Assoc., 76, 374, "An Interpretation of Completeness and Basu's Theorem".
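The claims about the ancillary range $R$ in the first uniform example are easy to check by simulation (a quick sketch; for iid Uniform($\theta, \theta+1$) the constant works out to $c = (n-1)/(n+1)$, and the distribution of $R$ is the same for every $\theta$):

```python
import random

random.seed(1)

def mean_range(theta, n, reps=20000):
    """Monte Carlo estimate of E[X(n) - X(1)] for iid Uniform(theta, theta+1)."""
    total = 0.0
    for _ in range(reps):
        xs = [theta + random.random() for _ in range(n)]
        total += max(xs) - min(xs)
    return total / reps

n = 5
# Both estimates are close to (n - 1)/(n + 1) = 2/3, regardless of theta:
# the range is ancillary, so its expectation c depends on n only.
print(mean_range(theta=0.0, n=n))
print(mean_range(theta=37.0, n=n))
```

This is exactly the unbiased estimator of zero used above: $X_{(n)} - X_{(1)} - c$ has expectation zero for every $\theta$, witnessing the failure of completeness.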
What is the intuition behind defining completeness in a statistic as being impossible to form an unb
I will try to add to the other answer. First, completeness is a technical condition which is justified mainly by the theorems that use it. So let us start with some related concepts and theorems where
What is the intuition behind defining completeness in a statistic as being impossible to form an unbiased estimator of $0$ from it? I will try to add to the other answer. First, completeness is a technical condition which is justified mainly by the theorems that use it. So let us start with some related concepts and theorems where they occur. Let $X=(X_1,X_2,\dotsc,X_n)$ represent a vector of iid data, which we model as having a distribution $f(x;\theta), \theta \in \Theta$ where the parameter $\theta$ governing the data is unknown. $T=T(X)$ is sufficient if the conditional distribution of $X \mid T$ does not depend on the parameter $\theta$. $V=V(X)$ is ancillary if the distribution of $V$ does not depend on $\theta$ (within the family $f(x;\theta)$). $U=U(X)$ is an unbiased estimator of zero if its expectation is zero, irrespective of $\theta$. $S=S(X)$ is a complete statistic if any unbiased estimator of zero based on $S$ is identically zero, that is, if $\DeclareMathOperator{\E}{\mathbb{E}} \E g(S)=0 (\text{for all $\theta$})$ then $g(S)=0$ a.e. (for all $\theta$). Now, suppose you have two different unbiased estimators of $\theta$ based on the sufficient statistic $T$, $g_1(T), g_2(T)$. That is, in symbols $$ \E g_1(T)=\theta ,\\ \E g_2(T)=\theta $$ and $\DeclareMathOperator{\P}{\mathbb{P}} \P(g_1(T) \not= g_2(T) ) > 0$ (for all $\theta$). Then $g_1(T)-g_2(T)$ is an unbiased estimator of zero, which is not identically zero, proving that $T$ is not complete. So, completeness of an sufficient statistic $T$ gives us that there exists only one unique unbiased estimator of $\theta$ based on $T$. That is already very close to the Lehmann–Scheffé theorem. Let us look at some examples. Suppose $X_1, \dotsc, X_n$ now are iid uniform on the interval $(\theta, \theta+1)$. 
We can show that ($X_{(1)} < X_{(2)} < \dotsm < X_{(n)}$ are the order statistics) the pair $(X_{(1)}, X_{(n)})$ is sufficient, but it is not complete, because the difference $X_{(n)}-X_{(1)}$ is ancillary; we can compute its expectation, let it be $c$ (which is a function of $n$ only), and then $X_{(n)}-X_{(1)} -c$ will be an unbiased estimator of zero which is not identically zero. So our sufficient statistic, in this case, is not complete and sufficient. And we can see what that means: there exist functions of the sufficient statistic which are not informative about $\theta$ (in the context of the model). This cannot happen with a complete sufficient statistic; it is in a sense maximally informative, in that no functions of it are uninformative. On the other hand, if there is some function of the minimally sufficient statistic that has expectation zero, that could be seen as a noise term; disturbance/noise terms in models have expectation zero. So we could say that non-complete sufficient statistics do contain some noise. Look again at the range $R=X_{(n)}-X_{(1)}$ in this example. Since its distribution does not depend on $\theta$, it doesn't by itself alone contain any information about $\theta$. But, together with the sufficient statistic, it does! How? Look at the case where $R=1$ is observed.Then, in the context of our (known to be true) model, we have perfect knowledge of $\theta$! Namely, we can say with certainty that $\theta = X_{(1)}$. You can check that any other value for $\theta$ then leads to either $X_{(1)}$ or $X_{(n)}$ being an impossible observation, under the assumed model. On the other hand, if we observe $R=0.1$, then the range of possible values for $\theta$ is rather large (exercise ...). In this sense, the ancillary statistic $R$ does contain some information about the precision with which we can estimate $\theta$ based on this data and model. 
In this example, and others, the ancillary statistic $R$ "takes over the role of the sample size". Usually, confidence intervals and such need the sample size $n$, but in this example, we can make a conditional confidence interval this is computed using only $R$, not $n$ (exercise.) This was an idea of Fisher, that inference should be conditional on some ancillary statistic. Now, Basu's theorem: If $T$ is complete sufficient, then it is independent of any ancillary statistic. That is, inference based on a complete sufficient statistic is simpler, in that we do not need to consider conditional inference. Conditioning on a statistic which is independent of $T$ does not change anything, of course. Then, a last example to give some more intuition. Change our uniform distribution example to a uniform distribution on the interval $(\theta_1, \theta_2)$ (with $\theta_1<\theta_2$). In this case the statistic $(X_{(1)}, X_{(n)})$ is complete and sufficient. What changed? We can see that completeness is really a property of the model. In the former case, we had a restricted parameter space. This restriction destroyed completeness by introducing relationships on the order statistics. By removing this restriction we got completeness! So, in a sense, lack of completeness means that the parameter space is not big enough, and by enlarging it we can hope to restore completeness (and thus, easier inference). Some other examples where lack of completeness is caused by restrictions on the parameter space, see my answer to: What kind of information is Fisher information? Let $X_1, \dotsc, X_n$ be iid $\mathcal{Cauchy}(\theta,\sigma)$ (a location-scale model). Then the order statistics are sufficient but not complete. But now enlarge this model to a fully nonparametric model, still iid but from some completely unspecified distribution $F$. Then the order statistics are sufficient and complete. 
For exponential families with canonical parameter space (that is, as large as possible) the minimal sufficient statistic is also complete. But in many cases, introducing restrictions on the parameter space, as with curved exponential families, destroys completeness. A very relevant paper is Lehmann (1981), J. Am. Stat. Assoc., 76, 374, "An Interpretation of Completeness and Basu's Theorem".
8,877
What is the intuition behind defining completeness in a statistic as being impossible to form an unbiased estimator of $0$ from it?
Some intuition may be available from the theory of best (minimum variance) unbiased estimators (Casella and Berger's Statistical Inference (2002), Theorem 7.3.20). If $E_\theta W=\tau(\theta)$ then $W$ is a best unbiased estimator of $\tau(\theta)$ iff $W$ is uncorrelated with all other unbiased estimators of zero. Proof: Let $W$ be an unbiased estimator uncorrelated with all unbiased estimators of zero. Let $W'$ be another estimator such that $E_\theta W'=E_\theta W=\tau(\theta)$. Write $W'=W+(W'-W)$. By assumption, $Var_\theta W'=Var_\theta W+Var_\theta (W'-W)$. Hence, for any $W'$, $Var_\theta W'\geq Var_\theta W$. Now assume that $W$ is a best unbiased estimator. Let there be some other estimator $U$ with $E_\theta U=0$. $\phi_a:=W+aU$ is also unbiased for $\tau(\theta)$. We have $$Var_\theta \phi_a:=Var_\theta W+2aCov_\theta(W,U)+a^2Var_\theta U.$$ If there were a $\theta_0\in\Theta$ such that $Cov_{\theta_0}(W,U)<0$, we would obtain $Var_\theta \phi_a<Var_\theta W$ for $a\in(0,-2Cov_{\theta_0}(W,U)/Var_{\theta_0} U)$. $W$ could then not be the best unbiased estimator. QED Intuitively, the result says that if an estimator is optimal, it must not be possible to improve it by just adding some noise to it, in the sense of combining it with an estimator that is just zero on average (being an unbiased estimator of zero). Unfortunately, it is difficult to characterize all unbiased estimators of zero. The situation becomes much simpler if zero itself is the only unbiased estimator of zero, as any statistic $W$ satisfies $Cov_\theta(W,0)=0$. Completeness describes such a situation.
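To make the characterization concrete, here is a small Monte Carlo sketch (Python, with an illustrative normal model that is not taken from the answer): the sample mean $W=\bar X$ is uncorrelated with the unbiased estimator of zero $U = X_1 - X_2$, and mixing $U$ into $W$ (forming $\phi_a = W + aU$) only inflates the variance.

```python
import random

rng = random.Random(7)
reps, n = 20000, 5
W, U = [], []
for _ in range(reps):
    x = [rng.gauss(2.0, 1.0) for _ in range(n)]
    W.append(sum(x) / n)     # unbiased for the mean
    U.append(x[0] - x[1])    # an unbiased estimator of zero

def var(v):
    m = sum(v) / len(v)
    return sum((xi - m) ** 2 for xi in v) / len(v)

mW = sum(W) / reps
mU = sum(U) / reps
cov_WU = sum((w - mW) * (u - mU) for w, u in zip(W, U)) / reps  # ~ 0

var_W = var(W)
var_mix = var([w + 0.5 * u for w, u in zip(W, U)])  # phi_a with a = 0.5
```

The covariance estimate hovers around zero, and `var_mix` clearly exceeds `var_W`: adding noise with mean zero cannot improve the best unbiased estimator.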
8,878
Estimating parameters of Student's t-distribution
Closed form does not exist for T, but a very intuitive and stable approach is via the EM algorithm. Now because the Student's t is a scale mixture of normals, you can write your model as $$y_i=\mu+e_i$$ where $e_i|\sigma,w_i \sim N(0,\sigma^2w_i^{-1})$ and $w_i\sim Ga(\frac{\nu}{2}, \frac{\nu}{2})$. This means that conditionally on $w_i$ the mle are just the weighted mean and standard deviation. This is the "M" step: $$\hat{\mu}=\frac{\sum_iw_iy_i}{ \sum_iw_i}$$ $$\hat{\sigma}^2= \frac{\sum_iw_i(y_i-\hat{\mu})^2}{n}$$ Now the "E" step replaces $w_i$ with its expectation given all the data. This is given as: $$\hat{w}_i=\frac{(\nu+1) \sigma^2 }{\nu \sigma^2 +(y_i-\mu)^2}$$ so you simply iterate the above two steps, replacing the "right hand side" of each equation with the current parameter estimates. This very easily shows the robustness properties of the t distribution, as observations with large residuals receive less weight in the calculation for the location $\mu$, and bounded influence in the calculation of $\sigma^2$. By "bounded influence" I mean that the contribution to the estimate for $\sigma^2$ from the ith observation cannot exceed a given threshold (this is $(\nu+1)\sigma^2_{old}$ in the EM algorithm). Also $\nu$ is a "robustness" parameter, in that increasing (decreasing) $\nu$ will result in more (less) uniform weights and hence more (less) sensitivity to outliers. One thing to note is that the log likelihood function may have more than one stationary point, so the EM algorithm may converge to a local mode instead of a global mode. The local modes are likely to be found when the location parameter is started too close to an outlier. So starting at the median is a good way to avoid this.
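The two steps above translate almost line for line into code. The following is a minimal Python sketch (standard library only, not part of the original answer) that fits $\mu$ and $\sigma^2$ for a fixed, known $\nu$, starting the location at the median as recommended; the simulated data and parameter values are illustrative assumptions.

```python
import math
import random
import statistics

def sample_t(n, nu, loc, scale, rng):
    """Draw n values from a location-scale Student-t via normal / sqrt(chi2/nu)."""
    out = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        chi2 = rng.gammavariate(nu / 2.0, 2.0)  # chi-square with nu d.f.
        out.append(loc + scale * z / math.sqrt(chi2 / nu))
    return out

def t_em(y, nu, n_iter=200):
    """EM for location mu and squared scale sigma^2 with known d.f. nu."""
    mu = statistics.median(y)         # robust start, avoids local modes
    sigma2 = statistics.pvariance(y)
    n = len(y)
    for _ in range(n_iter):
        # E-step: expected latent precision weights given current estimates
        w = [(nu + 1) * sigma2 / (nu * sigma2 + (yi - mu) ** 2) for yi in y]
        # M-step: weighted mean and weighted variance
        mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        sigma2 = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y)) / n
    return mu, sigma2

rng = random.Random(42)
y = sample_t(2000, nu=4.0, loc=10.0, scale=2.0, rng=rng)
mu_hat, sigma2_hat = t_em(y, nu=4.0)  # recovers loc ~ 10 and scale^2 ~ 4
```

Note how outlying observations get small weights `w[i]` in the E-step, which is exactly the bounded-influence behaviour described above.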
8,879
Estimating parameters of Student's t-distribution
The following paper addresses exactly the problem you posted. Liu C. and Rubin D.B. 1995. "ML estimation of the t distribution using EM and its extensions, ECM and ECME." Statistica Sinica 5:19–39. It provides a general multivariate t-distribution parameter estimation, with or without the knowledge of the degree of freedom. The procedure can be found in Section 4, and it is very similar to probabilityislogic's for 1-dimension.
8,880
Estimating parameters of Student's t-distribution
I doubt that it exists in closed form: if you write any one of the factors of the likelihood as $$\frac{\Gamma(\frac{\nu+1}{2})} {\sqrt{\nu\pi}\,\Gamma(\frac{\nu}{2})} \left(1+\frac{t^2}{\nu} \right)^{-\frac{\nu+1}{2}} = \frac{\Gamma(\frac{\nu+1}{2})} {\sqrt{\nu\pi}\,\Gamma(\frac{\nu}{2})} \exp \left \{ \left [ \ln \left(1+\frac{t^2}{\nu} \right) \right ] \left [ {-\frac{\nu+1}{2}} \right ]\right \}$$ and take the ln of that, you will get a nonlinear equation in $\nu$. Even if you manage to get a solution, then depending on the number of factors (terms) $n$, the MLE equation is going to depend on this $n$ in a nontrivial way. All that dramatically simplifies, of course, when $\nu \rightarrow \infty$, when the power approaches an exponential (Gaussian PDF).
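As an illustration of solving that nonlinear equation numerically (a Python sketch with simulated data, not part of the original answer), one can simply maximise the log-likelihood over a grid of $\nu$ values, here holding location and scale fixed at their true values of 0 and 1.

```python
import math
import random

def t_loglik(y, nu):
    """Log-likelihood of a standard Student-t (location 0, scale 1) at nu."""
    c = (math.lgamma((nu + 1) / 2.0) - math.lgamma(nu / 2.0)
         - 0.5 * math.log(nu * math.pi))
    return sum(c - (nu + 1) / 2.0 * math.log1p(yi * yi / nu) for yi in y)

rng = random.Random(1)
nu_true = 5.0
# Standard-t draws via the normal / sqrt(chi-square / nu) representation
y = [rng.gauss(0.0, 1.0) / math.sqrt(rng.gammavariate(nu_true / 2.0, 2.0) / nu_true)
     for _ in range(5000)]

# The score equation in nu has no closed-form root, so maximise numerically
grid = [k / 10.0 for k in range(20, 301)]          # nu from 2.0 to 30.0
nu_hat = max(grid, key=lambda nu: t_loglik(y, nu))
```

The recovered `nu_hat` sits near the true value, but only via numerical search, consistent with the point that no closed form is available.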
8,881
Estimating parameters of Student's t-distribution
Does a closed-form maximum-likelihood estimator for the Student's t distribution exist? The answer is now YES!! During the COVID pandemic, I dug into this problem and discovered a method I call Independent Approximators (IAs). This new algorithm provides a closed-form estimate of the location, scale, and shape that achieves the maximum likelihood estimate. The method works by filtering the samples by pairs and triplets that are approximately equal. The IA-pairs are distributed as the normalized square of the original distribution and are guaranteed to have a defined mean. The IA-triplets are distributed as the normalized cube of the original distribution, are guaranteed to have a finite second moment, and are used to estimate the scale. Finally, the geometric mean is used to estimate the scale, as defined in a paper I posted earlier to the question. Try it out. I'm quite interested in getting feedback on this new method. Mathematica code is available in the referenced Github repository. Kenric Nelson, "Independent Approximates enable closed-form estimation of heavy-tailed distributions" https://arxiv.org/abs/2012.11026 Original answer from 2018: I have recently discovered a closed-form estimator for the scale of the Student's t distribution. To the best of my knowledge, this is a new contribution, but I would welcome comments suggesting any related results. The paper describes the method in the context of a family of "coupled exponential" distributions. The Student's t is referred to as the Coupled Gaussian, where the coupling term is the reciprocal of the degree of freedom. The closed-form statistic is the geometric mean of the samples. Assuming a value of the coupling or degree of freedom, an estimate of the scale is determined by multiplying the geometric mean of the samples by a function involving the coupling and a harmonic number. Use of the geometric mean as a statistic for the scale of the coupled Gaussian distributions, Kenric P. Nelson, Mark A. Kon, Sabir R. Umarov
8,882
Transforming variables for multiple regression in R
John Fox's book An R companion to applied regression is an excellent resource on applied regression modelling with R. The package car which I use throughout in this answer is the accompanying package. The book also has a website with additional chapters. Transforming the response (aka dependent variable, outcome) Box-Cox transformations offer a possible way for choosing a transformation of the response. After fitting your regression model containing untransformed variables with the R function lm, you can use the function boxCox from the car package to estimate $\lambda$ (i.e. the power parameter) by maximum likelihood. Because your dependent variable isn't strictly positive, Box-Cox transformations will not work and you have to specify the option family="yjPower" to use the Yeo-Johnson transformations (see the original paper here and this related post): boxCox(my.regression.model, family="yjPower", plotit = TRUE) This produces a plot like the following one: The best estimate of $\lambda$ is the value that maximizes the profile likelihood, which in this example is about 0.2. Usually, the estimate of $\lambda$ is rounded to a familiar value that is still within the 95%-confidence interval, such as -1, -1/2, 0, 1/3, 1/2, 1 or 2. To transform your dependent variable now, use the function yjPower from the car package: depvar.transformed <- yjPower(my.dependent.variable, lambda) In the function, the lambda should be the rounded $\lambda$ you have found before using boxCox. Then fit the regression again with the transformed dependent variable. Important: Rather than just log-transform the dependent variable, you should consider fitting a GLM with a log-link. Here are some references that provide further information: first, second, third. To do this in R, use glm: glm.mod <- glm(y~x1+x2, family=gaussian(link="log")) where y is your dependent variable and x1, x2 etc. are your independent variables.
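For intuition about what the Yeo-Johnson family actually does, independent of the R tooling, here is a minimal Python sketch of the transformation itself (not part of the original answer); the piecewise form extends the Box-Cox power transform smoothly to zero and negative values.

```python
import math

def yeo_johnson(y, lam):
    """Yeo-Johnson power transform of a single value y at power lam."""
    if y >= 0:
        if lam != 0:
            return ((y + 1.0) ** lam - 1.0) / lam
        return math.log(y + 1.0)          # lam = 0: log branch
    if lam != 2:
        return -(((-y + 1.0) ** (2.0 - lam)) - 1.0) / (2.0 - lam)
    return -math.log(-y + 1.0)            # lam = 2: log branch for negatives

# lam = 1 is (up to a shift) the identity; smaller lam pulls in the right
# tail, and negative inputs are handled without any ad hoc constant.
vals = [yeo_johnson(y, 0.2) for y in (-3.0, 0.0, 5.0, 50.0)]
```

The transform is monotone for every `lam`, which is why fitted models on the transformed scale remain interpretable in terms of rank order.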
Transformations of predictors Transformations of strictly positive predictors can be estimated by maximum likelihood after the transformation of the dependent variable. To do so, use the function boxTidwell from the car package (for the original paper see here). Use it like this: boxTidwell(y~x1+x2, other.x=~x3+x4). The important thing here is that option other.x indicates the terms of the regression that are not to be transformed. This would be all your categorical variables. The function produces an output of the following form: boxTidwell(prestige ~ income + education, other.x=~ type + poly(women, 2), data=Prestige) Score Statistic p-value MLE of lambda income -4.482406 0.0000074 -0.3476283 education 0.216991 0.8282154 1.2538274 In that case, the score test suggests that the variable income should be transformed. The maximum likelihood estimate of $\lambda$ for income is -0.348. This could be rounded to -0.5 which is analogous to the transformation $\text{income}_{new}=1/\sqrt{\text{income}_{old}}$. Another very interesting post on the site about the transformation of the independent variables is this one. Disadvantages of transformations While log-transformed dependent and/or independent variables can be interpreted relatively easily, the interpretation of other, more complicated transformations is less intuitive (for me at least). How would you, for example, interpret the regression coefficients after the dependent variable has been transformed by $1/\sqrt{y}$? There are quite a few posts on this site that deal exactly with that question: first, second, third, fourth. If you use the $\lambda$ from Box-Cox directly, without rounding (e.g. $\lambda$=-0.382), it is even more difficult to interpret the regression coefficients. Modelling nonlinear relationships Two quite flexible methods to fit nonlinear relationships are fractional polynomials and splines. These three papers offer a very good introduction to both methods: First, second and third.
There is also a whole book about fractional polynomials and R. The R package mfp implements multivariable fractional polynomials. This presentation might be informative regarding fractional polynomials. To fit splines, you can use the function gam (generalized additive models, see here for an excellent introduction with R) from the package mgcv or the functions ns (natural cubic splines) and bs (cubic B-splines) from the package splines (see here for an example of the usage of these functions). Using gam you can specify which predictors to fit with splines via the s() function: my.gam <- gam(y~s(x1) + x2, family=gaussian()) here, x1 would be fitted using a spline and x2 linearly as in a normal linear regression. Inside gam you can specify the distribution family and the link function as in glm. So to fit a model with a log-link function, you can specify the option family=gaussian(link="log") in gam as in glm. Have a look at this post from the site.
8,883
Transforming variables for multiple regression in R
You should tell us more about the nature of your response (outcome, dependent) variable. From your first plot it is strongly positively skewed with many values near zero and some negative. From that it is possible, but not inevitable, that transformation would help you, but the most important question is whether transformation would make your data closer to a linear relationship. Note that negative values for the response rule out straight logarithmic transformation, but not log(response + constant), and not a generalised linear model with logarithmic link. There are many answers on this site discussing log(response + constant), which divides statistical people: some people dislike it as being ad hoc and difficult to work with, while others regard it as a legitimate device. A GLM with log link is still possible. Alternatively, it may be that your model reflects some kind of mixed process, in which case a customised model reflecting the data generation process more closely would be a good idea. (LATER) The OP has a dependent variable WAR with values ranging roughly from about 100 to -2. To get over problems with taking logarithms of zero or negative values, OP proposes a fudge of zeros and negatives to 0.000001. Now on a logarithmic scale (base 10) those values range from about 2 (100 or so) through to -6 (0.000001). The minority of fudged points on a logarithmic scale are now a minority of massive outliers. Plot log_10(fudged WAR) against anything else to see this.
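The scale problem is easy to see numerically (a Python sketch with made-up WAR-like values, not the OP's actual data):

```python
import math

# Hypothetical WAR-like values: mostly positive, a zero and a negative
war = [95.0, 30.0, 5.0, 0.8, 0.0, -1.5]

# The proposed fudge: replace zeros and negatives by 0.000001 before logging
fudged = [w if w > 0 else 1e-6 for w in war]
logs = [math.log10(w) for w in fudged]

# Genuine values land roughly between -0.1 and 2 on the log10 scale,
# while the two fudged points sit at -6: the fudge has manufactured
# extreme outliers rather than tamed the skewness.
```

Any regression fitted to `logs` would be dominated by those artificial -6 points, which is exactly the problem described above.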
8,884
Specifying multiple (separate) random effects in lme [closed]
After many struggles I found a solution for my problem, which I am posting here in case somebody will have similar questions: fit <- lme(Y ~ time, random=list(year=~1, date=~time), data=X, weights=varIdent(form=~1|year))
8,885
What is the best method for checking convergence in MCMC?
I use the Gelman-Rubin convergence diagnostic as well. A potential problem with Gelman-Rubin is that it may mis-diagnose convergence if the shrink factor happens to be close to 1 by chance, in which case you can use a Gelman-Rubin-Brooks plot. See the "General Methods for Monitoring Convergence of Iterative Simulations" paper for details. This is supported in the coda package in R (for "Output analysis and diagnostics for Markov Chain Monte Carlo simulations"). coda also includes other functions (such as Geweke's convergence diagnostic). You can also have a look at "boa: An R Package for MCMC Output Convergence Assessment and Posterior Inference".
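For illustration, here is a hand-rolled sketch of the basic Gelman-Rubin shrink factor in Python (coda's gelman.diag is the ready-made R equivalent; this version omits the sampling-variability correction factor, so treat it as a teaching toy):

```python
import numpy as np

def gelman_rubin(chains):
    """Basic potential scale reduction factor (R-hat).

    chains: array of shape (m, n) -- m parallel chains of length n,
    all sampling the same scalar parameter.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    within = chains.var(axis=1, ddof=1).mean()      # W: mean within-chain variance
    between = n * chains.mean(axis=1).var(ddof=1)   # B: n * variance of chain means
    var_hat = (n - 1) / n * within + between / n    # pooled variance estimate
    return np.sqrt(var_hat / within)

rng = np.random.default_rng(42)
# Well-mixed chains: all draws from the same target -> R-hat close to 1.
good = rng.normal(0, 1, size=(4, 2000))
# Non-converged chains: each stuck near a different mode -> R-hat well above 1.
bad = rng.normal(loc=[[0], [3], [6], [9]], scale=1, size=(4, 2000))
print(gelman_rubin(good))  # close to 1
print(gelman_rubin(bad))   # much larger than 1
```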
8,886
What is the best method for checking convergence in MCMC?
Rather than using the Gelman-Rubin statistic, which is a nice aid but not perfect (as with all convergence diagnostics), I simply use the same idea and plot the results for a visual graphical assessment. In almost all cases I have considered (which is a very large number), graphing the trace plots of multiple MCMC chains started from widely varied starting positions is sufficient to show or assess whether the same posterior is being converged to or not, in each case. I use this method to:
- assess whether the MCMC chain (ever) converges;
- assess how long I should set the burn-in period;
- calculate Gelman's R statistic (see Gelman, Carlin, Stern and Rubin, Bayesian Data Analysis) to measure the efficiency and speed of mixing in the MCMC sampler.
Efficiency and convergence are slightly different issues: e.g. you can have convergence with very low efficiency (thus requiring long chains to converge). I have used this graphical method to successfully diagnose (and later correct) lack-of-convergence problems in specific and general situations.
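The overdispersed-starting-points idea can be sketched in a few lines. Here is a toy random-walk Metropolis sampler in Python (the standard-normal target and the tuning are made up for illustration): chains launched far apart migrate to the same region, which is exactly what the trace plots should show:

```python
import numpy as np

rng = np.random.default_rng(7)

def metropolis(start, n_iter=5000, step=1.0):
    """Random-walk Metropolis targeting a standard normal (toy example)."""
    x = start
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + rng.normal(0, step)
        # Accept with probability min(1, pi(prop)/pi(x)) for pi = N(0,1) density.
        if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
            x = prop
        chain[i] = x
    return chain

# Widely varied starting positions: plotting these chains together as trace
# plots shows them drifting to, and then mixing over, the same region.
chains = [metropolis(s) for s in (-20.0, 0.0, 20.0)]
burned = [c[1000:] for c in chains]  # discard a generous burn-in
print([round(c.mean(), 2) for c in burned])  # all near 0 once converged
```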
8,887
What is the best method for checking convergence in MCMC?
This is a wee bit late into the debate, but we have a whole chapter in our 2007 book Introducing Monte Carlo Methods with R dealing with this issue. You can also download the CODA package from CRAN to this effect.
8,888
What is the best method for checking convergence in MCMC?
I like to do trace plots primarily and sometimes I use the Gelman-Rubin convergence diagnostic.
8,889
Can non-random samples be analyzed using standard statistical tests?
There are two general models for testing. The first one, based on the assumption of random sampling from a population, is usually called the "population model". For example, for the two-independent-samples t-test, we assume that the two groups we want to compare are random samples from the respective populations. Assuming that the distributions of the scores within the two groups are normally distributed in the population, we can then derive analytically the sampling distribution of the test statistic (i.e., for the t-statistic). The idea is that if we were to repeat this process (randomly drawing two samples from the respective populations) an infinite number of times (of course, we do not actually do that), we would obtain this sampling distribution for the test statistic. An alternative model for testing is the "randomization model". Here, we do not have to appeal to random sampling. Instead, we obtain a randomization distribution through permutations of our samples. For example, for the t-test, you have your two samples (not necessarily obtained via random sampling). Now if indeed there is no difference between these two groups, then whether a particular person actually "belongs" to group 1 or group 2 is arbitrary. So, what we can do is permute the group assignment over and over, each time noting how far the means of the two groups are apart. This way, we obtain a sampling distribution empirically. We can then compare how far apart the two means are in the original samples (before we started to reshuffle the group memberships) and if that difference is "extreme" (i.e., falls into the tails of the empirically derived sampling distribution), then we conclude that group membership is not arbitrary and there is indeed a difference between the two groups. In many situations, the two approaches actually lead to the same conclusion. In a way, the approach based on the population model can be seen as an approximation to the randomization test.
Interestingly, Fisher was the one who proposed the randomization model and suggested that it should be the basis for our inferences (since most samples are not obtained via random sampling). A nice article describing the difference between the two approaches is: Ernst, M. D. (2004). Permutation methods: A basis for exact inference. Statistical Science, 19(4), 676-685. Another article that provides a nice summary and suggests that the randomization approach should be the basis for our inferences is: Ludbrook, J., & Dudley, H. (1998). Why permutation tests are superior to t and F tests in biomedical research. The American Statistician, 52(2), 127-132. EDIT: I should also add that it is common to calculate the same test statistic when using the randomization approach as under the population model. So, for example, for testing the difference in means between two groups, one would calculate the usual t-statistic for all possible permutations of the group memberships (yielding the empirically derived sampling distribution under the null hypothesis) and then one would check how extreme the t-statistic for the original group membership is under that distribution.
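A minimal randomization (permutation) test looks like this in Python; the data here are invented, and the raw mean difference is used as the test statistic rather than the t-statistic mentioned in the EDIT:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two samples, not assumed to be random draws from any population.
g1 = np.array([12.1, 9.8, 11.4, 10.9, 13.0, 10.2])
g2 = np.array([ 8.7, 9.1,  7.9, 10.0,  8.4,  9.3])
observed = g1.mean() - g2.mean()

pooled = np.concatenate([g1, g2])
n1 = len(g1)

# Under the null, group labels are arbitrary: reshuffle them many times
# and record the mean difference each time.
perm_diffs = np.empty(10000)
for i in range(perm_diffs.size):
    rng.shuffle(pooled)
    perm_diffs[i] = pooled[:n1].mean() - pooled[n1:].mean()

# Two-sided p-value: how often is a reshuffled difference as extreme
# as the one actually observed?
p_value = np.mean(np.abs(perm_diffs) >= abs(observed))
print(observed, p_value)
```

Here the p-value comes entirely from the empirically derived randomization distribution, with no appeal to random sampling or normality.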
8,890
Can non-random samples be analyzed using standard statistical tests?
Your question is very good, but it doesn't have a straightforward answer. Most tests like those you mention are based on the assumption that a sample is a random sample, because a random sample is likely to be representative of the sampled population. If the assumption is invalid then any interpretation of the results has to take that into account. When the sample is very non-representative of the population then the results are likely to be misleading. When the sample is representative despite being non-random then the results will be perfectly OK. The next level of the question is then to ask how one can decide whether the non-randomness matters in any particular case. I can't answer that one ;-)
8,891
Can non-random samples be analyzed using standard statistical tests?
You ask a very general question, so the answer can't be suitable for all cases. However, I can clarify. Statistical tests generally have to do with the distribution observed versus a hypothetical distribution (so-called null distribution or null hypothesis; or, in some cases, an alternative distribution). Samples may be non-random, but the test being administered is applied to some value obtained from the samples. If that variable can have some stochastic properties, then its distribution is compared to some alternative distribution. What matters then is whether or not the sample's test statistic would hold for some other population of interest and whether the assumptions regarding the alternative or null distribution are relevant for the other population of interest.
8,892
Different covariance types for Gaussian Mixture Models
A Gaussian distribution is completely determined by its covariance matrix and its mean (a location in space). The covariance matrix of a Gaussian distribution determines the directions and lengths of the axes of its density contours, all of which are ellipsoids. These four types of mixture models can be illustrated in full generality using the two-dimensional case. In each of these contour plots of the mixture density, two components are located at $(0,0)$ and $(4,5)$ with weights $3/5$ and $2/5$ respectively. The different weights will cause the sets of contours to look slightly different even when the covariance matrices are the same, but the overall shapes of individual contours will still be similar for identical matrices. NB These are plots of the actual mixtures, not of the individual components. Because the components are well separated and of comparable weight, the mixture contours closely resemble the component contours (except at low levels where they may distort and merge, as shown in the center of the "tied" plot for instance).
- Full means the components may independently adopt any position and shape.
- Tied means they have the same shape, but the shape may be anything.
- Diagonal means the contour axes are oriented along the coordinate axes, but otherwise the eccentricities may vary between components.
- Tied Diagonal is a "tied" situation where the contour axes are oriented along the coordinate axes. (I have added this because initially it was how I misinterpreted "diagonal.")
- Spherical is a "diagonal" situation with circular contours (spherical in higher dimensions, whence the name).
These exhibit a gamut from the most general possible mixture to a very specific kind of mixture. Other (fussier) restrictions are possible, especially in higher dimensions where the numbers of parameters grow rapidly. (A covariance matrix in $n$ dimensions is described by $n(n+1)/2$ independent parameters.)
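The parameter-count remark at the end can be made concrete. Here is a small sketch (the type names follow the "full"/"tied"/"diag"/"spherical" convention used, e.g., by scikit-learn's GaussianMixture; "tied_diag" is added here for the tied-diagonal case):

```python
def cov_params(n_components, n_dims, cov_type):
    """Free covariance parameters for each mixture covariance structure.

    A full symmetric d x d covariance matrix has d*(d+1)/2 free entries.
    """
    k, d = n_components, n_dims
    return {
        "full":      k * d * (d + 1) // 2,  # one unrestricted matrix per component
        "tied":      d * (d + 1) // 2,      # one unrestricted matrix shared by all
        "diag":      k * d,                 # one variance per axis per component
        "tied_diag": d,                     # one diagonal matrix shared by all
        "spherical": k,                     # a single variance per component
    }[cov_type]

for ct in ("full", "tied", "diag", "tied_diag", "spherical"):
    # e.g. 5 components in 10 dimensions
    print(ct, cov_params(5, 10, ct))
```

The rapid growth of the "full" count relative to the others is what motivates the fussier restrictions in higher dimensions.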
8,893
Residual plots: why plot versus fitted values, not observed $Y$ values?
By construction the error term in an OLS model is uncorrelated with the observed values of the X covariates. This will always be true for the observed data even if the model is yielding biased estimates that do not reflect the true values of a parameter because an assumption of the model is violated (like an omitted variable problem or a problem with reverse causality). The predicted values are entirely a function of these covariates, so they are also uncorrelated with the error term. Thus, when you plot residuals against predicted values they should always look random, because they are indeed uncorrelated by construction of the estimator. In contrast, it's entirely possible (and indeed probable) for a model's error term to be correlated with Y in practice. For example, with a dichotomous X variable, the further the true Y is from either E(Y | X = 1) or E(Y | X = 0), the larger the residual will be. Here is the same intuition with simulated data in R, where we know the model is unbiased because we control the data generating process:

set.seed(21391209)
trueSd <- 10
trueA <- 5
trueB <- as.matrix(c(3, 5, -1, 0))
sampleSize <- 100

# create independent x-values
x1 <- rnorm(n=sampleSize, mean = 0, sd = 4)
x2 <- rnorm(n=sampleSize, mean = 5, sd = 10)
x3 <- 3 + x1 * 4 + x2 * 2 + rnorm(n=sampleSize, mean = 0, sd = 10)
x4 <- -50 + x1 * 7 + x2 * .5 + x3 * 2 + rnorm(n=sampleSize, mean = 0, sd = 20)
X <- as.matrix(cbind(x1, x2, x3, x4))

# create dependent values according to a + bx + N(0, sd)
Y <- trueA + X %*% trueB + rnorm(n=sampleSize, mean=0, sd=trueSd)
df <- as.data.frame(cbind(Y, X))
colnames(df) <- c("y", "x1", "x2", "x3", "x4")

ols <- lm(y ~ x1 + x2 + x3 + x4, data = df)
y_hat <- predict(ols, df)
error <- Y - y_hat
cor(y_hat, error)  # zero
cor(Y, error)      # not zero

We get the same result of zero correlation with a biased model, for example if we omit x1.

ols2 <- lm(y ~ x2 + x3 + x4, data = df)
y_hat2 <- predict(ols2, df)
error2 <- Y - y_hat2
cor(y_hat2, error2)  # still zero
cor(Y, error2)       # not zero
8,894
Residual plots: why plot versus fitted values, not observed $Y$ values?
Two facts which I assume you're happy with me just stating: i. $y_i = \hat{y}_i+\hat{e}_i$ ii. $\text{Cov}(\hat{y}_i,\hat{e}_i)=0$ Then: $\text{Cov}(y_i,\hat{e}_i)=\text{Cov}(\hat{y}_i+\hat{e}_i,\hat{e}_i)$ $\qquad=\text{Cov}(\hat{y}_i,\hat{e}_i) +\text{Cov}(\hat{e}_i,\hat{e}_i)$ $\qquad=0 +\sigma^2_e$ $\qquad=\sigma^2_e$ So while the fitted value isn't correlated with the residual, the observation is. In effect, this is because both the observation and the residual are related to the error term. This usually makes it somewhat harder to use the plot of residuals vs observations for diagnostic purposes; the addition of a linear relationship (and dependence) to the deviation from a linear relationship tends to partially disguise the pattern in the second thing (it's harder to 'see' what's going on).
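These covariance identities are easy to verify numerically. Here is a Python sketch with simulated data (true coefficients chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 100000, 2.0

x = rng.normal(size=(n, 2))
X = np.column_stack([np.ones(n), x])                 # design matrix with intercept
y = 1.0 + x @ np.array([3.0, -1.0]) + rng.normal(0, sigma, n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat
resid = y - y_hat

# Cov(y_hat, e_hat) = 0 by construction; Cov(y, e_hat) = Var(e_hat) = sigma^2_e.
print(np.cov(y_hat, resid)[0, 1])   # essentially zero
print(np.cov(y, resid)[0, 1])       # close to sigma^2 = 4
print(np.var(resid, ddof=1))        # matches the previous line, per the identity
```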
8,895
Does the order of explanatory variables matter when calculating their regression coefficients?
I believe the confusion may be arising from something a bit simpler, but it provides a nice opportunity to review some related matters. Note that the text is not claiming that all of the regression coefficients $\newcommand{\bhat}{\hat{\beta}}\newcommand{\m}{\mathbf}\newcommand{\z}{\m{z}}\bhat_i$ can be calculated via the successive residuals vectors as $$ \bhat_i \stackrel{?}{=} \frac{\langle \m y, \z_i \rangle}{\|\z_i\|^2}\>, $$ but rather that only the last one, $\bhat_p$, can be calculated this way! The successive orthogonalization scheme (a form of Gram–Schmidt orthogonalization) is (almost) producing a pair of matrices $\newcommand{\Z}{\m{Z}}\newcommand{\G}{\m{G}}\Z$ and $\G$ such that $$ \m X = \Z \G \>, $$ where $\Z$ is $n \times p$ with orthonormal columns and $\G = (g_{ij})$ is $p \times p$ upper triangular. I say "almost" since the algorithm is only specifying $\Z$ up to the norms of the columns, which will not in general be one, but can be made to have unit norm by normalizing the columns and making a corresponding simple adjustment to the coordinate matrix $\G$. Assuming, of course, that $\m X \in \mathbb R^{n \times p}$ has rank $p \leq n$, the unique least squares solution is the vector $\bhat$ that solves the system $$ \m X^T \m X \bhat = \m X^T \m y \>. $$ Substituting $\m X = \Z \G$ and using $\Z^T \Z = \m I$ (by construction), we get $$ \G^T \G \bhat = \G^T \Z^T \m y \> , $$ which is equivalent to $$ \G \bhat = \Z^T \m y \>. $$ Now, concentrate on the last row of the linear system. The only nonzero element of $\G$ in the last row is $g_{pp}$. So, we get that $$ g_{pp} \bhat_p = \langle \m y, \z_p \rangle \>. $$ It is not hard to see (verify this as a check of understanding!) that $g_{pp} = \|\z_p\|$ and so this yields the solution. (Caveat lector: I've used $\z_i$ already normalized to have unit norm, whereas in the book they have not. This accounts for the fact that the book has a squared norm in the denominator, whereas I only have the norm.) 
To find all of the regression coefficients, one needs to do a simple backsubstitution step to solve for the individual $\bhat_i$. For example, for row $(p-1)$, $$ g_{p-1,p-1} \bhat_{p-1} + g_{p-1,p} \bhat_p = \langle \m z_{p-1}, \m y \rangle \>, $$ and so $$ \bhat_{p-1} = g_{p-1,p-1}^{-1} \langle \m z_{p-1}, \m y \rangle \> - g_{p-1,p-1}^{-1} g_{p-1,p} \bhat_p . $$ One can continue this procedure working "backwards" from the last row of the system up to the first, subtracting out weighted sums of the regression coefficients already calculated and then dividing by the leading term $g_{ii}$ to get $\bhat_i$. The point in the section in ESL is that we could reorder the columns of $\m X$ to get a new matrix $\m X^{(r)}$ with the $r$th original column now being the last one. If we then apply Gram–Schmidt procedure on the new matrix, we get a new orthogonalization such that the solution for the original coefficient $\bhat_r$ is found by the simple solution above. This gives us an interpretation for the regression coefficient $\bhat_r$. It is a univariate regression of $\m y$ on the residual vector obtained by "regressing out" the remaining columns of the design matrix from $\m x_r$. General QR decompositions The Gram–Schmidt procedure is but one method of producing a QR decomposition of $\m X$. Indeed, there are many reasons to prefer other algorithmic approaches over the Gram–Schmidt procedure. Householder reflections and Givens rotations provide more numerically stable approaches to this problem. Note that the above development does not change in the general case of QR decomposition. Namely, let $$ \m X = \m Q \m R \>, $$ be any QR decomposition of $\m X$. Then, using exactly the same reasoning and algebraic manipulations as above, we have that the least-squares solution $\bhat$ satisfies $$ \m R^T \m R \bhat = \m R^T \m Q^T \m y \>, $$ which simplifies to $$ \m R \bhat = \m Q^T \m y \> . 
$$ Since $\m R$ is upper-triangular, then the same backsubstitution technique works. We first solve for $\bhat_p$ and then work our way backwards from bottom to top. The choice for which QR decomposition algorithm to use generally hinges on controlling numerical instability and, from this perspective, Gram–Schmidt is generally not a competitive approach. This notion of decomposing $\m X$ as an orthogonal matrix times something else can be generalized a little bit further as well to get a very general form for the fitted vector $\hat{\m y}$, but I fear this response has already become too long.
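The key claim above — that only the *last* coefficient $\hat\beta_p$ is recovered by the univariate formula on the residual vector $z_p$ — is easy to check numerically. Below is a small NumPy sketch (the data, seed, and dimensions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 4
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Full multiple least-squares fit, for reference.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Successive orthogonalization (classical Gram-Schmidt, unnormalized z_j):
# z_j = x_j - sum_{k<j} <z_k, x_j>/<z_k, z_k> * z_k
Z = X.copy()
for j in range(1, p):
    for k in range(j):
        Z[:, j] -= (Z[:, k] @ X[:, j]) / (Z[:, k] @ Z[:, k]) * Z[:, k]

# Only the LAST coefficient is recovered by the univariate formula.
z_p = Z[:, -1]
beta_p = (y @ z_p) / (z_p @ z_p)
print(np.allclose(beta_p, beta_hat[-1]))  # True
# The same formula applied to an earlier z_j does not, in general, give beta_hat[j].
```

The earlier coefficients require the back-substitution step described above.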
Does the order of explanatory variables matter when calculating their regression coefficients?
8,896
Does the order of explanatory variables matter when calculating their regression coefficients?
I had a look through the book and it looks like exercise 3.4 might be useful in understanding the concept of using GS to find all the regression coefficients $\beta_j$ (not just the final coefficient $\beta_p$), so I typed up a solution. Hope this is useful. Exercise 3.4 in ESL Show how the vector of least squares coefficients can be obtained from a single pass of the Gram-Schmidt procedure. Represent your solution in terms of the QR decomposition of $X$. Solution Recall that by a single pass of the Gram-Schmidt procedure, we can write our matrix $X$ as $$X = Z \Gamma,$$ where $Z$ contains the orthogonal columns $z_j$, and $\Gamma$ is an upper-triangular matrix with ones on the diagonal, and $\gamma_{ij} = \frac{\langle z_i, x_j \rangle}{\| z_i \|^2}$. This is a reflection of the fact that by definition, $$ x_j = z_j + \sum_{k=0}^{j-1} \gamma_{kj} z_k. $$ Now, by the $QR$ decomposition, we can write $X = QR$, where $Q$ is an orthogonal matrix and $R$ is an upper triangular matrix. We have $Q = Z D^{-1}$ and $R = D\Gamma$, where $D$ is a diagonal matrix with $D_{jj} = \| z_j \|$. Now, by definition of $\hat \beta$, we have $$ (X^T X) \hat \beta = X^T y. $$ Now, using the $QR$ decomposition, we have \begin{align*} (R^T Q^T) (QR) \hat \beta &= R^T Q^T y \\ R \hat \beta &= Q^T y \end{align*} Since $R$ is upper triangular, we can write \begin{align*} R_{pp} \hat \beta_p &= \langle q_p, y \rangle \\ \| z_p \| \hat \beta_p &= \| z_p \|^{-1} \langle z_p, y \rangle \\ \hat \beta_p &= \frac{\langle z_p, y \rangle}{\| z_p \|^2} \end{align*} in accordance with our previous results. Now, by back substitution, we can obtain the sequence of regression coefficients $\hat \beta_j$. 
As an example, to calculate $\hat \beta_{p-1}$, we have \begin{align*} R_{p-1, p-1} \hat \beta_{p-1} + R_{p-1,p} \hat \beta_p &= \langle q_{p-1}, y \rangle \\ \| z_{p-1} \| \hat \beta_{p-1} + \| z_{p-1} \| \gamma_{p-1,p} \hat \beta_p &= \| z_{p-1} \|^{-1} \langle z_{p-1}, y \rangle \end{align*} and then solving for $\hat \beta_{p-1}$. This process can be repeated for all $\beta_j$, thus obtaining the regression coefficients in one pass of the Gram-Schmidt procedure.
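The "single pass plus back-substitution" recipe can be sketched in a few lines of NumPy (random data for illustration; `np.linalg.qr` returns the reduced factorization with $R$ upper triangular):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

Q, R = np.linalg.qr(X)   # Q is n x p with orthonormal columns, R is p x p upper triangular
rhs = Q.T @ y            # the system to solve is R @ beta = Q^T y

# Back-substitution: solve for beta_p first, then work upward row by row.
beta = np.zeros(p)
for i in range(p - 1, -1, -1):
    beta[i] = (rhs[i] - R[i, i + 1:] @ beta[i + 1:]) / R[i, i]

print(np.allclose(beta, np.linalg.lstsq(X, y, rcond=None)[0]))  # True
```

Note that sign conventions in the QR factors cancel out: $R\hat\beta = Q^T y$ holds for any valid QR decomposition of $X$.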
8,897
Does the order of explanatory variables matter when calculating their regression coefficients?
Why not try it and compare? Fit a set of regression coefficients, then change the order and fit them again and see if they differ (other than possible round-off error). As @mpiktas points out, it is not exactly clear what you are doing. I can see using GS to solve for $B$ in the least squares equation $(x'x)B=(x'y)$. But then you would be doing the GS on the $(x'x)$ matrix, not the original data. In this case the coefficients should be the same (other than possible rounding error). Another approach of GS in regression is to apply GS to the predictor variables to eliminate collinearity between them. Then the orthogonalized variables are used as the predictors. In this case order matters and the coefficients will be different because the interpretation of the coefficients depends on the order. Consider 2 predictors $x_1$ and $x_2$ and do GS on them in that order, then use them as predictors. In that case the first coefficient (after the intercept) shows the effect of $x_1$ on $y$ by itself and the second coefficient is the effect of $x_2$ on $y$ after adjusting for $x_1$. Now if you reverse the order of the x's then the first coefficient shows the effect of $x_2$ on $y$ by itself (ignoring $x_1$ rather than adjusting for it) and the second is the effect of $x_1$ adjusting for $x_2$.
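Both points in this answer can be demonstrated with a short simulation (NumPy; the data-generating model is made up, and the variables are centered so the intercept can be dropped):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)            # deliberately correlated with x1
y = 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)
x1, x2, y = x1 - x1.mean(), x2 - x2.mean(), y - y.mean()  # center, drop intercept

# Multiple regression on the raw predictors: column order is irrelevant.
b = np.linalg.lstsq(np.column_stack([x1, x2]), y, rcond=None)[0]
b_swapped = np.linalg.lstsq(np.column_stack([x2, x1]), y, rcond=None)[0]
print(np.allclose(b, b_swapped[::-1]))         # True: just a permutation

# Regression on GS-orthogonalized predictors: now order changes the meaning.
z2 = x2 - (x1 @ x2) / (x1 @ x1) * x1           # x2 with x1 regressed out
c = np.linalg.lstsq(np.column_stack([x1, z2]), y, rcond=None)[0]
print(np.allclose(c[0], (x1 @ y) / (x1 @ x1))) # True: effect of x1 *ignoring* x2
print(np.allclose(c[1], b[1]))                 # True: effect of x2 *adjusting for* x1
```

Reversing the GS order would instead leave the coefficient on $x_2$ as its simple-regression slope and recover the adjusted coefficient for $x_1$.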
8,898
Mean squared error vs. mean squared prediction error
The difference is not the mathematical expression, but rather what you are measuring. Mean squared error measures the expected squared distance between an estimator and the true underlying parameter: $$\text{MSE}(\hat{\theta}) = E\left[(\hat{\theta} - \theta)^2\right].$$ It is thus a measurement of the quality of an estimator. The mean squared prediction error measures the expected squared distance between what your predictor predicts for a specific value and what the true value is: $$\text{MSPE}(L) = E\left[\sum_{i=1}^n\left(g(x_i) - \widehat{g}(x_i)\right)^2\right].$$ It is thus a measurement of the quality of a predictor. The most important thing to understand is the difference between a predictor and an estimator. An example of an estimator would be taking the average height of a sample of people to estimate the average height of a population. An example of a predictor is to average the height of an individual's two parents to guess his specific height. They are thus solving two very different problems.
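A small simulation makes the estimator/predictor gap concrete (the population parameters and sample sizes here are invented): using the sample mean $\bar{x}$ to *estimate* $\mu$ has error on the order of $\sigma^2/n$, while using the same $\bar{x}$ to *predict* a new individual's value has error on the order of $\sigma^2 + \sigma^2/n$, because the individual's own variability never averages away.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n, reps = 170.0, 10.0, 25, 200_000

samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)                  # estimator of mu, one per replication

mse = np.mean((xbar - mu) ** 2)              # estimation error: roughly sigma^2 / n = 4
new_person = rng.normal(mu, sigma, size=reps)
mspe = np.mean((xbar - new_person) ** 2)     # prediction error: roughly sigma^2 + sigma^2/n = 104

print(mse, mspe)
```

The MSPE stays large no matter how big $n$ gets, while the MSE shrinks to zero — a numerical restatement of "two very different problems."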
8,899
Mean squared error vs. mean squared prediction error
There is a correction to the second equation above: $MSPE(L)=\mathbb{E}\Big[\Big(g(X)-\widehat{g}(X)\Big)^{2}\Big]$; where $X$ is a random variable. It is important to remember that when we are working with MSPE or MSEP (I usually use the latter expression) we are dealing with random variables. We want to predict an unobserved random variable $X$ using an estimator which is also a random variable (usually constructed with sample data). There we have a great difference from the most commonly used expression, MSE. In that case, we are dealing with a population parameter $\theta$ that is a constant, and the estimator is again a random variable. In a more realistic scenario, we are dealing with a conditional expectation for the MSPE because if we want to predict we need to measure the quality of our estimator based on the information used in the sample data. So our definition of MSPE would be: $MSPE(L)=\mathbb{E}\Big[\Big(g(X)-\widehat{g}(X)\Big)^{2}\Big|\;\mathcal{G}\; \Big]$; where $\mathcal{G}$ is a $\sigma$-algebra and $\widehat{g}(X)$ is $\mathcal{G}$-measurable. We can say that $X$ is $\mathcal{F}$-measurable in the measurable space $(\Omega, \mathcal{F})$ and $g$ is a Borel-measurable function, so by the Doob–Dynkin lemma $g(X)$ is also $\mathcal{F}$-measurable.
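The practical content of this measure-theoretic framing is that, among all predictors measurable with respect to the conditioning information, the conditional mean minimizes the MSPE. A toy simulation (model and seed invented for illustration) makes this visible: if $X = Z + \varepsilon$ and we may only use $Z$, then $E[X \mid Z] = Z$ beats any other $Z$-measurable guess.

```python
import numpy as np

rng = np.random.default_rng(5)
reps = 500_000
Z = rng.normal(size=reps)              # the information we condition on (G-measurable)
X = Z + rng.normal(size=reps)          # the unobserved target: X = Z + noise

# E[X | Z] = Z here; its MSPE equals the conditional variance of X given Z.
mspe_cond_mean = np.mean((X - Z) ** 2)        # roughly 1
mspe_other = np.mean((X - 0.5 * Z) ** 2)      # any other Z-measurable predictor does worse

print(mspe_cond_mean < mspe_other)            # True
```

Analytically, $E[(X - 0.5Z)^2] = 0.25 + 1 = 1.25 > 1$, matching the simulation.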
8,900
Mean squared error vs. mean squared prediction error
Typically, MSE involves only training data. The error here refers to how far the observed training response data is from the fitted response data (based on a model fit on the training data itself). On the other hand, MSPE typically involves a testing set that was not part of the model training. The error here refers to how far the predicted testing data (predicted based on a model already fit on the training data) is from the observed testing data.
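The training/testing distinction described here can be sketched in a few lines (simulated data and an arbitrary 150/50 split, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(-2, 2, size=n)
y = 1.0 + 3.0 * x + rng.normal(scale=1.0, size=n)

train = np.arange(n) < 150          # first 150 points for fitting
test = ~train                       # held-out 50 points for prediction
X = np.column_stack([np.ones(n), x])

beta = np.linalg.lstsq(X[train], y[train], rcond=None)[0]  # fit on training data only

mse_train = np.mean((y[train] - X[train] @ beta) ** 2)  # MSE: fitted vs. observed training y
mspe_test = np.mean((y[test] - X[test] @ beta) ** 2)    # MSPE: predictions vs. held-out y
print(mse_train, mspe_test)
```

Both quantities hover around the noise variance (1 here), but only the second one honestly measures out-of-sample prediction, since the test points played no role in choosing `beta`.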