33,201
Do neural networks use efficient coding?
I believe one can argue that a connection has been made. I apologize for not posting my source, as I couldn't find it, but this came from an old slide that Hinton presented. In it, he claimed that one of the fundamental ways of thinking for those who do machine learning (the presentation predated the common use of the term deep learning) was that there exists an optimal transformation of the data such that the data can be easily learned. I believe that for neural nets, the 'optimal transformation' of the data through backprop IS the efficient coding hypothesis in action. In the same way that, given a proper kernel, many spaces can be easily classified with linear models, learning the proper way to transform and store the data IS analogous to deciding which neurons should represent the data and how they should be arranged.
33,202
4D Convolutional Network
TensorFlow defines convolution in N dimensions, as well as the transposed convolution, and the ReLU layer is dimension-independent. The only problem you will have is with the pooling layers, which you will have to implement on your own (feel free to submit them later as a TF contribution). So I guess your problem is perfectly addressable with TF.
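Since pooling is the missing piece, here is a minimal NumPy sketch of what a custom 4D max-pooling op would have to compute. The `max_pool_4d` helper and its window size are illustrative, not a TensorFlow API:

```python
import numpy as np

def max_pool_4d(x, k=2):
    """Naive 4D max pooling over non-overlapping k*k*k*k windows.

    x: one channel of one example, shape (d1, d2, d3, d4), with each
    dimension divisible by k. Hypothetical helper, not a TF op.
    """
    d1, d2, d3, d4 = (s // k for s in x.shape)
    # Split each spatial axis into (blocks, within-block) axes, then
    # take the max over the within-block axes.
    x = x.reshape(d1, k, d2, k, d3, k, d4, k)
    return x.max(axis=(1, 3, 5, 7))

pooled = max_pool_4d(np.arange(256.0).reshape(4, 4, 4, 4), k=2)
```

The same reshape-and-reduce trick can be expressed inside a TensorFlow graph with `tf.reshape` and `tf.reduce_max`, which are also rank-agnostic.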
33,203
Bayesian model comparison in high school
First let me say that sensible testing of a sharp hypothesis such as $a=0$ requires a thoughtful prior distribution for $a$, because the Bayes factor depends critically on this prior. Many Bayesians will not test a sharp hypothesis, but I will. Before proceeding, I must tell you that I don't really understand what you say you're doing, and so I may be giving you advice that you're not looking for. I hope you can follow my notation. Let the data be $n$ observations: $y = ((x_1,y_1), \ldots, (x_n,y_n))$, where (according to the more general model, which includes the slope) $$ p(y_i|a,b,\sigma^2) = \textsf{N}(y_i|b+a\,x_i,\sigma^2). $$ (I am suppressing the independent variable $x_i$ from the list of conditioning arguments for notational simplicity.) The likelihood is given by $$ p(y|a,b,\sigma^2) = \prod_{i=1}^n p(y_i|a,b,\sigma^2). $$ Given a prior for $(a,b,\sigma^2)$, the posterior distribution is \begin{equation} p(a,b,\sigma^2|y) = \frac{p(y|a,b,\sigma^2)\,p(a,b,\sigma^2)}{p(y)}, \end{equation} where the likelihood of the data according to the more general model is \begin{equation} \begin{split} p(y) &= \iiint p(y|a,b,\sigma^2)\,p(a,b,\sigma^2)\,d\sigma^2\,db\,da \\ &= \int\left(\iint p(y|a,b,\sigma^2)\,p(b,\sigma^2|a)\,d\sigma^2\,db\right) p(a)\,da \\ &= \int p(y|a)\,p(a)\,da , \end{split} \end{equation} where I have used $p(a,b,\sigma^2) = p(b,\sigma^2|a)\,p(a)$ and written $p(y|a)$ for the inner double integral. Note that $p(y|a)$ is the (marginal) likelihood for $a$ and $p(b,\sigma^2|a)$ is the conditional prior for the nuisance parameters. If the prior for $a$ is independent of $(b,\sigma^2)$, then $p(b,\sigma^2|a) = p(b,\sigma^2)$; I will assume that is true. With these expressions, we can now write the marginal posterior for $a$: \begin{equation} p(a|y) = \frac{p(y|a)\,p(a)}{p(y)}. \end{equation} We will now rearrange this expression: \begin{equation} \frac{p(y|a)}{p(y)} = \frac{p(a|y)}{p(a)}. 
\end{equation} Since this expression is true for every value of $a$, it is true in particular for $a = 0$: \begin{equation} \frac{p(y|a=0)}{p(y)} = \frac{p(a=0|y)}{p(a=0)}. \end{equation} Note that the numerator in the fraction on the left-hand side is the likelihood of the data according to the restricted model (i.e., restricted to $a=0$). And, as already noted, the denominator is the likelihood of the data according to the more general model. Therefore, the left-hand side is the Bayes factor in favor of the restricted model relative to the more general model. The fraction on the right-hand side gives us a way to evaluate the Bayes factor: it says to divide the posterior density evaluated at $a=0$ by the prior density evaluated at $a=0$. (By the way, this "formula" is called the Savage-Dickey density ratio.) Now it is apparent why a thoughtful prior for $a$ is required. If we let the prior for $a$ be very uncertain, the prior density will be very low everywhere, including at $a=0$, but the posterior density at $a=0$ will not go to zero, and consequently the Bayes factor will go to infinity. In this case, "garbage in" produces "garbage out." You may imagine that if you don't follow the steps I have outlined, then you won't be subject to this problem, but you would be wrong. The logic I have presented applies regardless of the "algorithm" you apply. But the steps do provide an algorithm that can be useful. Suppose the prior for the parameters is given by the "Jeffreys prior" $$ p(b,\sigma^2) \propto 1/\sigma^2. $$ This amounts to using an improper prior on the "nuisance parameters" $(b,\sigma^2)$. This is okay, but such a prior would not be appropriate for $a$, for the reason I discussed above. With this prior, $p(y|a)$ --- the (marginal) likelihood for $a$ --- will be proportional to a Student $t$ distribution, the parameters of which depend on the data $y$. This $t$ distribution is a complete summary of the data, which may then be discarded. 
Now you must choose a proper and well-informed prior for $a$. Having done so, you can numerically compute either side of the "Savage-Dickey" equation. I hope you find something in what I have said useful.
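To make the Savage-Dickey recipe concrete, here is a sketch for a deliberately simplified toy version of the problem: the intercept is dropped and $\sigma^2$ is treated as known, so the posterior for $a$ is conjugate normal and both densities in the ratio are available in closed form. The data, noise level, and prior variance are all made up for illustration:

```python
import numpy as np

def normal_pdf(z, mean, var):
    return np.exp(-(z - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Made-up toy data: true slope 0.5, no intercept
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 0.5 * x + rng.normal(0.0, 0.3, size=x.size)

sigma2 = 0.3 ** 2   # treated as known in this sketch
tau2 = 1.0          # proper prior a ~ N(0, tau2)

# Conjugate normal posterior for the slope a
post_var = 1.0 / (np.sum(x ** 2) / sigma2 + 1.0 / tau2)
post_mean = post_var * np.sum(x * y) / sigma2

# Savage-Dickey: Bayes factor for a = 0 is posterior density over
# prior density, both evaluated at a = 0
bf01 = normal_pdf(0.0, post_mean, post_var) / normal_pdf(0.0, 0.0, tau2)
```

In the full model you would instead evaluate the marginal posterior density of $a$ at zero (e.g., via the Student $t$ marginal likelihood under the Jeffreys prior on the nuisance parameters) and divide by the prior density at zero.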
33,204
Difference between paired t-test and repeated measures ANOVA with two level of repeated measures
Yes, they are equivalent. The question about assumptions has never been directly addressed, though. It is sometimes indicated that the assumptions you cite for ANOVA, when met, do cover the normality assumption for the paired t-test. However, I still wonder: what if the variables are not normal within each subgroup, but their differences (calculated as for the t-test) are normal? This should be enough, so the incongruence between these assumptions (as stated in every major statistics handbook) and those in your question bothers me too. ;)
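The algebraic equivalence is easy to check numerically: with two repeated measures, the repeated-measures ANOVA $F$ statistic equals the square of the paired $t$ statistic. A small NumPy sketch (the data are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
subject = rng.normal(0, 2, n)               # per-subject random effect
y1 = subject + rng.normal(0, 1, n)          # condition 1
y2 = subject + 0.8 + rng.normal(0, 1, n)    # condition 2, shifted by 0.8

# Paired t-test statistic
d = y1 - y2
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))

# Repeated-measures ANOVA with 2 levels, computed by hand
data = np.stack([y1, y2])                   # (2 conditions, n subjects)
grand = data.mean()
cond_means = data.mean(axis=1)
subj_means = data.mean(axis=0)
ss_cond = n * np.sum((cond_means - grand) ** 2)
resid = data - cond_means[:, None] - subj_means[None, :] + grand
ss_err = np.sum(resid ** 2)
F = ss_cond / (ss_err / (n - 1))            # df = (1, n - 1)
```

With two conditions the residuals reduce to $(d_i - \bar d)/2$, which is why $F = t^2$ holds exactly, not just approximately.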
33,205
Difference between paired t-test and repeated measures ANOVA with two level of repeated measures
As both are equivalent, "bivariate" repeated-measures ANOVA works as well if only the differences are normally distributed. The stricter repeated-measures requirements in the literature are only necessary when more complicated layouts with more factors and other hypotheses are included, in particular between-subjects factors and unequal sample sizes / heteroskedasticity between them.
33,206
Difference between paired t-test and repeated measures ANOVA with two level of repeated measures
I also find in my research that the results are sometimes different. The paired-samples t-test shows no statistical significance, while the within-subjects factor of the repeated-measures ANOVA shows significance in both groups. My two groups of speakers differ in size (49 and 23), and from the paired-samples test I find no significance for the second group, whereas the repeated-measures ANOVA shows significance for both groups.
33,207
Difference between paired t-test and repeated measures ANOVA with two level of repeated measures
No, they are different. I did both in SPSS and found very different results for the same dependent variable. In the paired-samples t-test, the change is significant (p<.01), whereas in the repeated-measures ANOVA, the change is not significant (p>.05).
33,208
Logistic Regression with (Normal) Distributions for Independent Variables
I think you can also go for a maximum likelihood approach, treating the $x_i$ as latent variables over which you marginalize the likelihood. Let's say the likelihood of your usual logistic regression, if you observed the $x$ values, is $\mathcal{L}(\beta, x, y)$, where $\beta$ is the vector of parameters (typically, $\mathcal{L}(\beta, x, y) = (\frac{1}{1 + e^{-\beta x}})^y (\frac{1}{1 + e^{\beta x}})^{1 - y}$). Then the likelihood observing only $\mu$ and $y$ is $$\mathcal{L}(\beta, y, \mu) = \mathbb{E}_{X \sim F_{\mu}}[\mathcal{L}(\beta, y, X)],$$ and the total likelihood is just the product of the likelihoods over all observed $(y_i, \mu_i)$. Unfortunately, these expectations may be intractable (maybe for a simple normal distribution they are not, but it is not obvious to me...), so you can estimate them by Monte Carlo: for instance, sample $x_i \sim F_{\mu_i}$ and take the empirical mean of $\mathcal{L}(\beta, y_i, x_i)$. I don't think this is equivalent to simulating data according to $F_{\mu_i}$ and putting them into the model, but it would be nice to see the links... Another way would be to go with an E-M algorithm (where the $x_i$ are the latent variables) to maximize this likelihood; this would certainly be more computationally efficient. I hope this helps a little bit...
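A NumPy sketch of the Monte Carlo version, for scalar $x_i$ with $F_{\mu_i} = N(\mu_i, \sigma^2)$; the data, $\sigma$, and the `mc_loglik` name are all made up for illustration:

```python
import numpy as np

def mc_loglik(beta, y, mu, sigma=1.0, S=2000, rng=None):
    """Monte Carlo estimate of the marginal log-likelihood when each
    x_i is latent with x_i ~ N(mu_i, sigma^2). Sketch only: scalar x,
    no intercept."""
    rng = np.random.default_rng(0) if rng is None else rng
    total = 0.0
    for yi, mi in zip(y, mu):
        xs = rng.normal(mi, sigma, size=S)        # draws x ~ F_mu
        p = 1.0 / (1.0 + np.exp(-beta * xs))      # logistic likelihood
        lik = np.where(yi == 1, p, 1.0 - p)
        total += np.log(lik.mean())               # E_x[L(beta, y, x)]
    return total

# Made-up check: the likelihood should prefer a positive slope here
mu = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0, 0, 1, 1])
ll_pos = mc_loglik(1.0, y, mu)
ll_neg = mc_loglik(-1.0, y, mu)
```

A derivative-free optimizer could then maximize `mc_loglik` over `beta`; reusing the same draws across evaluations keeps the objective smooth in `beta`.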
33,209
Logistic Regression with (Normal) Distributions for Independent Variables
Generalizing the bootstrap method proposed in the question, in which the regression does not attempt to estimate $X$ but instead determines how the distribution $F$ leads to a distribution of logistic regression parameters, one could use marginal maximum likelihood estimation, a technique common in random-effects linear models. The likelihood to be maximized is the marginal likelihood $$ \mathcal{L}(\beta) = \prod_i \int_X P(y_i|X,\beta)\, P(X|\mu_i) \, dX $$ Such likelihoods can rarely be solved exactly, but the existing literature may give some inspiration -- and the idea of estimating this $\mathcal{L}$ (or rather the $\beta$ that maximizes it) by Monte Carlo could be a good one. In the case that the distribution is normal, there might be hope of doing something more exact. Assuming for notational simplicity that $E[X|\mu] = \mu$ (just so that I don't need to make a new variable), $$ \log \mathcal{L}(\beta) \propto \sum_i \log \int_X \left(\frac{1}{1+\exp(-\beta X)} \right)^{y_i} \left(\frac{1}{1+\exp(\beta X)} \right)^{1-y_i} \exp \left( - \frac{1}{2} (X-\mu_i)^T \Sigma^{-1} (X-\mu_i) \right) dX $$ Since either $y_i = 0$ or $y_i = 1$, this integral is known as a logistic-normal integral, and there is some accessible literature on it: https://books.google.com/books?hl=en&lr=&id=iaieM_3lcHQC&oi=fnd&pg=PR5&ots=CM9147oK0H&sig=SegYdLgH2UtTDmcspTix2fnBgRg#v=onepage&q=logistic-normal%20integral&f=false This book, in fact, is probably a good reference in general, as it examines this integral in the context of logistic regression with random effects, which is probably directly applicable to the posed question.
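For scalar $X$ with a normal distribution, each logistic-normal factor can be approximated cheaply by Gauss-Hermite quadrature instead of Monte Carlo. A sketch (the function name and arguments are illustrative):

```python
import numpy as np

def logistic_normal_integral(beta, y_i, mu_i, sigma, n_nodes=40):
    """Approximate E_{X ~ N(mu_i, sigma^2)}[ P(y_i | X, beta) ] for
    scalar X via Gauss-Hermite quadrature -- one factor of the
    marginal likelihood above."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    # Change of variables: x = mu + sqrt(2) * sigma * t turns the
    # Gaussian expectation into the Hermite weight e^{-t^2}.
    x = mu_i + np.sqrt(2.0) * sigma * nodes
    p = 1.0 / (1.0 + np.exp(-beta * x))
    integrand = p if y_i == 1 else 1.0 - p
    return (weights * integrand).sum() / np.sqrt(np.pi)
```

As a sanity check, $\beta = 0$ makes the logistic term constant at $1/2$, so the integral is exactly $1/2$ regardless of $\mu_i$ and $\sigma$.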
33,210
1D Convolution in Neural Networks
Let $x_1, \ldots, x_T$ be a sequence of vectors (e.g., word vectors). Applying a convolutional layer is equivalent to applying the same weight matrix to all $n$-grams, where $n$ is the height of your filter. E.g., if $n=3$, the filter covers three consecutive word vectors at a time [original figure omitted]. For a slightly more mathematical explanation, see section 2.1.2 of Ji Young Lee and Franck Dernoncourt, "Sequential Short-Text Classification with Recurrent and Convolutional Neural Networks", NAACL 2016.
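In NumPy, the "same weight matrix applied to every n-gram" view looks like this (the sequence length, embedding size, and filter count are made-up examples):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, dim, n, n_filters = 7, 5, 3, 4

X = rng.normal(size=(seq_len, dim))        # word vectors x_1 .. x_7
W = rng.normal(size=(n * dim, n_filters))  # one weight matrix, shared
b = np.zeros(n_filters)

# Apply the same weights to every n-gram (window of n consecutive vectors)
feats = np.stack([
    np.concatenate(X[i:i + n]) @ W + b     # flatten the n-gram, project
    for i in range(seq_len - n + 1)
])
```

Each row of `feats` is the feature vector for one trigram; a library 1D convolution with kernel size `n` computes the same thing without materializing the concatenations.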
33,211
1D Convolution in Neural Networks
1D convolutions are used in convolutional networks for downsampling and upsampling in the filter dimension. Convolutional networks build up these filter maps as you go through the network; you can really think of them as a third dimension. The usual base case for the filter-map dimension is a size of 3, since we will often have RGB images going through our network. These 1D convolutions can be useful for downsampling, performing some operation, then upsampling back to the same dimension, which is quite useful for performance reasons. To really understand this intuitively, I'd suggest reading: Network in Network - http://arxiv.org/abs/1312.4400 Going Deeper with Convolutions - https://www.google.com/url?sa=t&source=web&rct=j&url=http://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf&ved=0ahUKEwi89oeuxqnLAhXhuIMKHZrTCe0QFggkMAE&usg=AFQjCNGCEEnUgrgCn-rrECNQ72wI3PH1Qw&sig2=VhjfaMvuskNIDVKhFfNiqQ
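Concretely, such a convolution with a 1x1 spatial footprint is just a linear map applied independently at every spatial position, mixing only the filter dimension. A NumPy sketch (the shapes are made-up examples):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W_, c_in, c_out = 8, 8, 64, 16       # downsample 64 -> 16 filter maps

x = rng.normal(size=(H, W_, c_in))      # stack of input feature maps
w = rng.normal(size=(c_in, c_out))      # 1x1 convolution kernel

# A 1x1 convolution is a per-position linear map across channels
y = x @ w                               # shape (H, W_, c_out)
```

Running a cheap operation in the reduced 16-channel space and then projecting back up with another such map is exactly the performance trick the Network-in-Network and GoogLeNet papers describe.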
33,212
Multinomial distribution conditional on number of distinct items
TLDR: just generalize the coupon collector techniques. Suppose you have a discrete state space Markov chain that evolves on the space of all subsets of $\{1,\ldots,k\}$ that have size between $0$ and $m$ (inclusive). This space has size $\binom{k}{0} + \binom{k}{1} + \binom{k}{2} + \cdots + \binom{k}{m}$. In particular, notice that when $k=m$, this is $2^k$. Say time starts at $0$; $X_0$ is the empty set with probability $1$. The transition distribution is $P(X_1 = \{j\} \mid X_0 = \emptyset) = p_j$, and more generally $P(X_2 = \{j,\ell\} \mid X_1 = \{j\}) = p_\ell$ for $\ell \neq j$, while $P(X_2 = \{j\} \mid X_1 = \{j\}) = p_j$. If you write out the big ugly transition matrix, you'll see each row has only $k$ nonzero elements, because you can only do $k$ things at each time step. Using that big ugly transition matrix, you can work out the transition behavior of $|X_t|$ (the cardinality/size of $X_t$). With this you can describe the stopping time of interest: $$ n = \inf\{t : |X_t| = m\}. $$ Notice that $n \in \{m, m+1, \ldots\}$ and $$ P(n = t) = P(|X_t| = m \mid |X_{t-1}| = m-1)\, P(|X_{t-1}| = m-1) . $$ Both of these factors can be coded up. Regarding sampling, it's faster (but more memory-intensive) to sample $X_t$ or $|X_t|$ instead of the whole multinomial enchilada. The hard part is instantiating and storing the transition matrix, but it's very straightforward (more so for $X_t$).
33,213
Topologies for which the ensemble of probability distributions is complete
Looking at the question from a more narrow statistical angle (the general mathematical/topological issue is valid), the fact that the sequence of moments may not converge to the moments of the limiting distribution is a well-known phenomenon. This, in principle, does not automatically cast doubt on the existence of a well-behaved limiting distribution of the sequence. The limiting distribution of the above sequence $\{X_n + n\,\mathrm{Bern}(1/n)\}$ is a well-behaved $N(0,1)$ distribution with finite moments. It is the sequence of the moments that does not converge. But this is a different sequence, a sequence composed of functions of our random variables (integrals, densities and such), not the sequence of the random variables themselves whose limiting distribution we are interested in.
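To make the example concrete: with $Y_n = X_n + n\,\mathrm{Bern}(1/n)$ and $X_n \sim N(0,1)$ independent of the Bernoulli, the CDF of $Y_n$ converges pointwise to the standard normal CDF while the moments do not converge to the normal moments. A quick check of the exact formulas (function names are mine):

```python
import math

def std_normal_cdf(y):
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def cdf_Yn(y, n):
    # P(Y_n <= y) = (1 - 1/n) P(X_n <= y) + (1/n) P(X_n <= y - n) -> Phi(y)
    return (1 - 1 / n) * std_normal_cdf(y) + (1 / n) * std_normal_cdf(y - n)

def mean_Yn(n):
    # E[Y_n] = 0 + n * (1/n) = 1 for every n, yet the N(0,1) limit has mean 0
    return n * (1.0 / n)

def second_moment_Yn(n):
    # E[Y_n^2] = E[X_n^2] + 2n E[X_n] E[B_n] + n^2 E[B_n] = 1 + n -> infinity
    return 1.0 + n
```

For instance, `cdf_Yn(0.0, 10**6)` is within $10^{-6}$ of $\Phi(0)=0.5$, while `second_moment_Yn(10**6)` is about $10^6$: the distributions converge, the moments do not.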
33,214
Are contours $h^{-1}(y)$ interesting features of a function $h:X\to \mathbb R^n$ obtained by regression?
Economists are frequently interested in this. Often we estimate consumers' utility functions $u: \mathbb R^n \rightarrow \mathbb R$, where the domain describes how much of each good a consumer consumes and the range is how "happy" the consumption bundle makes him. We call the level sets of utility functions "indifference curves." Often we estimate firms' cost functions $c: \mathbb R^n \times \mathbb R^k \rightarrow \mathbb R$, where the two parts of the domain are quantities of each output the firm produces and prices for each input the firm uses in production. Level sets of $c$ are called iso-cost curves.

Most commonly, the properties of the level sets we are interested in are the slopes of the boundaries. The slope of an indifference curve tells you at what rate consumers trade off different goods: "How many apricots would you be willing to give up for one more apple?" The slope of an iso-cost curve tells you (depending on which part of the domain) how substitutable in production different outputs are (at the same cost, if you produced 10 fewer razor blades, how many more pins could you make), or how substitutable different inputs are. Economists are completely obsessed with ratios of first partial derivatives because we are obsessed with trade-offs. These, I guess, can (always?) be thought of as slopes of boundaries of level sets.

Another application is the calculation of economic equilibria. The simplest example is the supply and demand system. The supply curve represents how much producers are willing to supply at each price: $q=s(p)$. The demand curve represents how much consumers are willing to demand at each price: $q=d(p)$. Take an arbitrary price, $p$, and define excess demand as $e(p)=d(p)-s(p)$. Equilibrium prices are $e^{-1}(0)$ --- i.e. these are the prices at which markets clear. $q$ and $p$ can be vectors, and $d$ and $s$ are normally non-linear. What I'm describing in the previous paragraph (demand and supply) is just an example.

The general set-up is extremely common. In Game Theory, maybe we are interested in calculating the Nash Equilibria of a game. To do this you define, for player $i$, a function (the best response function) which gives their best strategy as the range and the strategies all the other players are playing as the domain: $s_i=br(s^{-i})$. Stack these all up into a vector best response function: $s=BR(s)$. If $s$ can be represented as real numbers, then you can define a function giving the distance from equilibrium: $d(s)=BR(s)-s$. Then $d^{-1}(0)$ is the set of equilibria of the game.

Whether economists usually estimate these relationships with regression depends on how broad your definition of regression is. Commonly, we use instrumental variables regression. Also, in the case of utility functions, utility is not observed, so we have various latent variable methods for estimating those.
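For the supply-and-demand case, computing $e^{-1}(0)$ numerically is a one-dimensional root-finding problem. A toy sketch with made-up curves (the functional forms and names here are mine, purely for illustration):

```python
def demand(p):
    return 10.0 / p   # hypothetical demand curve q = d(p)

def supply(p):
    return 2.0 * p    # hypothetical supply curve q = s(p)

def excess_demand(p):
    return demand(p) - supply(p)

def bisect_root(f, lo, hi, tol=1e-10):
    """Find p with f(p) = 0 by bisection, assuming f(lo) > 0 > f(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

p_star = bisect_root(excess_demand, 0.1, 100.0)  # market-clearing price
# Here d(p) = s(p) gives p^2 = 5, so p_star = sqrt(5) ~ 2.236.
```

With vector-valued $q$ and $p$ the same idea applies, but the root of the excess-demand system is found with a multidimensional solver rather than bisection.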
33,215
Marginal probability function of the Dirichlet-Multinomial distribution
I think I have a proof, but you're probably not going to like it... At least I don't like it. If you want to skip to the punchline, it's equation $(***)$ below. I claim that it suffices to show this aggregation/marginalization property for three variables, and that the general case should follow by induction. So given $P(X_1=x_1, X_2=x_2, X_3=x_3)=\frac{N!}{x_1!x_2!x_3!}\frac{\Gamma\left(A\right)} {\Gamma\left(N+A\right)}\frac{\Gamma(x_{1}+\alpha_{1})}{\Gamma(\alpha_{1})}\frac{\Gamma(x_{2}+\alpha_{2})}{\Gamma(\alpha_{2})}\frac{\Gamma(x_{3}+\alpha_{3})}{\Gamma(\alpha_{3})}$, the claim is that $$P(X_1=x_1) = P(X_1=x_1, (X_2+X_3)=N-x_1) = \frac{N!}{x_1!(N-x_1)!}\frac{\Gamma\left(A\right)} {\Gamma\left(N+A\right)}\frac{\Gamma(x_{1}+\alpha_{1})}{\Gamma(\alpha_{1})}\frac{\Gamma((N - x_{1})+(A -\alpha_{1}))}{\Gamma(A - \alpha_{1})}$$ i.e. we can reduce things to a Beta-Binomial distribution. Note that $$P(X_1=x_1) = P(X_1=x_1, (X_2+X_3)=N-x_1 ) = \sum_{x_2 + x_3 = N - x_1} P(X_1 = x_1, X_2 = x_2, X_3=x_3)$$ So really what I am claiming is that $$\sum_{x_2 + x_3 = N - x_1} \frac{N!}{x_1!x_2!x_3!}\frac{\Gamma\left(A\right)} {\Gamma\left(N+A\right)}\frac{\Gamma(x_{1}+\alpha_{1})}{\Gamma(\alpha_{1})}\frac{\Gamma(x_{2}+\alpha_{2})}{\Gamma(\alpha_{2})}\frac{\Gamma(x_{3}+\alpha_{3})}{\Gamma(\alpha_{3})} = \frac{N!}{x_1!(N-x_1)!}\frac{\Gamma\left(A\right)} {\Gamma\left(N+A\right)}\frac{\Gamma(x_{1}+\alpha_{1})}{\Gamma(\alpha_{1})}\frac{\Gamma((N - x_{1})+(A -\alpha_{1}))}{\Gamma(A - \alpha_{1})}$$ Cancelling factors on both sides, what I am really really claiming is that $$\sum_{x_2 + x_3 = N - x_1} \frac{1}{x_2!x_3!}\frac{\Gamma(x_{2}+\alpha_{2})}{\Gamma(\alpha_{2})}\frac{\Gamma(x_{3}+\alpha_{3})}{\Gamma(\alpha_{3})} = \frac{1}{(N-x_1)!}\frac{\Gamma((N - x_{1})+(A -\alpha_{1}))}{\Gamma(A - \alpha_{1})}$$ or tidying up even further $$\sum_{x_2 + x_3 = N - x_1} \frac{1}{x_2!x_3!}\frac{\Gamma(x_{2}+\alpha_{2})}{\Gamma(\alpha_{2})}\frac{\Gamma(x_{3}+\alpha_{3})}{\Gamma(\alpha_{3})} 
= \frac{1}{(N-x_1)!}\frac{\Gamma((N - x_{1})+(\alpha_{2} + \alpha_{3}))}{\Gamma(\alpha_{2} + \alpha_{3})}$$ Basically everything that follows from here will amount to renaming variables and appeals to obscure combinatorial identities (which at the very least should be proved in some textbook somewhere). So this is why you probably won't like the proof. On the other hand, no integrals nor integration by parts is (directly) involved. So there's that.

Anyway, let's rename $N - x_1 =: m$, $x_2 =: m_1$, $x_3=:m_2$, and so $m_1 + m_2 = x_2 + x_3 = N - x_1 = m$, in other words $m_1 + m_2 = m$. Recall that $m_1$, $m_2$, and $m$ are all non-negative integers. Similarly, let's rename $A - \alpha_1 = \alpha_2 + \alpha_3 =: c$ and $\alpha_2 =: c_1$ and $\alpha_3 =: c_2$. In particular we have $c_1 + c_2 = c$ by definition. Recall that $c_1, c_2,$ and $c$ are all positive real numbers. OK great, so that means what we want to show is equivalent (up to renaming) to the identity $$\sum_{m_1 + m_2 = m} \frac{1}{m_1!m_2!}\frac{\Gamma(m_{1}+ c_{1})}{\Gamma( c_{1})}\frac{\Gamma(m_{2}+ c_{2})}{\Gamma( c_{2})} = \frac{1}{m!}\frac{\Gamma( m +c)}{\Gamma(c)} $$

Using the identity $\Gamma(y + 1) = y \Gamma(y)$ for $y$ a positive real number, and then induction, we get for any positive integer $n$ that $\Gamma(y+n) = \Gamma(y) \cdot \prod_{i=0}^{n-1} (y+i)$. In particular, we get that $$ \frac{\Gamma(y+n)}{\Gamma(y)} = \prod_{i=0}^{n-1} (y+i) =: y^{(n)}\,, $$ where $y^{(n)}$ denotes the rising factorial, which is sometimes also denoted using the Pochhammer symbol $(y)_n$; but sometimes the Pochhammer symbol denotes the falling factorial or the regular factorial instead, so let's stick with $y^{(n)}$. Therefore the identity we want to show is equivalent to $$\sum_{m_1 + m_2 = m} \frac{1}{m_1!m_2!} c_1^{(m_1)} c_2^{(m_2)} = \frac{1}{m!} c^{(m)} \,, $$ where recall that $c_1 + c_2 = c$, all positive reals, and $m_1 + m_2 = m$, all non-negative integers. (Note that when $n=0$, the rising factorial $y^{(n)} = y^{(0)}$ is equal to the empty product, i.e. $1$.)

Anyway, there's no harm in multiplying both sides of the above identity by $m!$, which leads to $$\sum_{m_1 + m_2 = m} \frac{m!}{m_1!m_2!} c_1^{(m_1)} c_2^{(m_2)} = c^{(m)} \,. $$ By definition of the binomial coefficient and re-indexing we clearly have that $$\sum_{m_1 + m_2 = m} \frac{m!}{m_1!m_2!} c_1^{(m_1)} c_2^{(m_2)} = \sum_{m_1 = 0}^m \binom{m}{m_1} c_1^{(m_1)} c_2^{(m-m_1)} \,,$$ whereas meanwhile by definition $c = c_1 + c_2$, so $c^{(m)} = (c_1 + c_2)^{(m)}$, so the identity we want to show is equivalent to the identity $$ \sum_{m_1 = 0}^m \binom{m}{m_1} c_1^{(m_1)} c_2^{(m-m_1)} = (c_1 + c_2)^{(m)} \,. \tag{***}$$

Apparently (according to both Wikipedia and Wolfram MathWorld) this result is true, an equivalent formulation of the "Chu-Vandermonde identity", and related to "umbral calculus". So if you're willing to believe that, or able to look up and follow the proofs given in the references mentioned by Wolfram MathWorld and Wikipedia:

Koepf, W. Hypergeometric Summation: An Algorithmic Approach to Summation and Special Function Identities. Braunschweig, Germany: Vieweg, 1998, p. 42.
Boros, G. and Moll, V. Irresistible Integrals: Symbolics, Analysis and Experiments in the Evaluation of Integrals. Cambridge, England: Cambridge University Press, 2004, p. 18.
Askey, Richard (1975), Orthogonal Polynomials and Special Functions, Regional Conference Series in Applied Mathematics, 21, Philadelphia, PA: SIAM, pp. 59–60.

then based on what I showed above, it should follow using induction that the "aggregation property" of the Dirichlet-Multinomial distribution (equivalent to the marginalization you asked for) is true.
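Since the whole argument reduces to the Chu-Vandermonde identity $(***)$, it is at least easy to sanity-check numerically, including for non-integer parameters. A small script of my own (not part of the proof):

```python
from math import comb

def rising(y, n):
    """Rising factorial y^(n) = y (y+1) ... (y+n-1); empty product = 1."""
    out = 1.0
    for i in range(n):
        out *= y + i
    return out

def vandermonde_lhs(c1, c2, m):
    # sum_{m1=0}^{m} C(m, m1) c1^(m1) c2^(m - m1)
    return sum(comb(m, m1) * rising(c1, m1) * rising(c2, m - m1)
               for m1 in range(m + 1))

# Check against (c1 + c2)^(m) for a few positive real parameters.
for c1, c2, m in [(0.7, 1.3, 5), (2.5, 0.1, 8), (3.0, 4.0, 6)]:
    lhs, rhs = vandermonde_lhs(c1, c2, m), rising(c1 + c2, m)
    assert abs(lhs - rhs) < 1e-9 * rhs
```

Of course a finite check is not a proof, but it guards against sign or index slips in the algebra above.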
33,216
Neuron vs. unit in a neural network
Let me suggest one scenario (the only one I can think of) where it might be useful to distinguish between "units" (or some similarly generic term) and "neurons." Biologically, a neuron is easy to identify, because it represents a single cell. In terms of neural nets, a neuron or "unit" has typically represented a single object, usually with one activation value, plus an additional threshold or separate input and output values in some cases. Problems arise in distinguishing between a neuron and a "unit" when we take into account the fact that the inputs, outputs, activations and thresholds of biological neurons are often mediated by multiple neurotransmitters and specific subsets of connections on the dendrites - many of which can be modeled as separate units. Then the line between "neuron" and "unit" blurs quickly.

As William F. Allman puts it in pp. 65-66, Apprentices of Wonder: Inside the Neural Network Revolution (1989, Bantam Books: New York): "An axon may release various amounts of transmitter; a receiving dendrite might have varying amounts of receptor; the transmitter itself may have different chemical properties and react at different rates. And the whole process may be mitigated by the action of various enzymes."

Here's a more thorough treatment from Daniel Gardner (1993, The Neurobiology of Neural Networks, MIT Press: Cambridge, Mass.) (I lost the page number to this, so I can't provide an exact citation): "First, it has become evident that neurons (both in vertebrates and invertebrates) possess rich and complex intrinsic properties. Most neurons have multiple channels to different ionic species, and these channels can be regulated in a wide variety of ways: They can be turned on or off by voltage, molecules, or ions. Some of these channels can be active in the absence of external inputs to the cell, and endow it with a variety of dynamic properties, such as the ability to oscillate (Llinas 1988; Selverston 1988; Yamada et al. 1989). Thus, it is not enough to specify the inputs to a neuron to predict its outputs; its internal state will also determine its behavior. As a consequence, neurons may be better represented as nonlinear dynamic systems in their own right. For example, the intrinsic conductances of thalamic neurons can allow them to act as linear input/output devices, relaying information directly to cortex, but when they are hyperpolarized, these conductances cause the neurons to burst, significantly transforming their inputs (Llinas 1988). In terms of the model neurons that have often been used in artificial neural networks, the input/output relationship would need to be represented as a function both of voltage and of time.

"Second, the interactions between neurons are complex. The differential distribution of synapses on complex dendritic trees of neurons can significantly affect the nature and intensity of their inputs to a neuron. In addition, synapses may have multiple time courses (e.g., initial excitation, slower inhibition, and still slower excitation [Getting and Dekin 1985]), and connections may be dynamically reconfigured (e.g., by inhibition of specific neurons [Getting and Dekin 1985], or by the actions of neuromodulators [Harris-Warrick and Marder 1991; Marder and Hooper 1985]). Receptors controlling the synaptic response may be gated both by the presence of a chemical, such as a neurotransmitter, and by voltage, so that the synaptic connections between neurons can be affected by their own activity and by the activity of neurons impinging on them (discussions of these and other complexities in synaptic interactions are found in chapters 2, 3, and 4). Influences may occur over a variety of spatial and temporal scales: A neuromodulator which is only slowly broken down may affect a very large number of neurons in its vicinity over an appreciable period of time as it diffuses away from its point of release.
Furthermore, such compounds are likely to selectively activate those subgroups of neurons that have a receptor for that substance. Neuromodulators may also have subtle but profound effects on the intrinsic properties of neurons, activating or inactivating intrinsic currents and thus changing their "electrical personality." Field potentials may alter the excitability of neurons in different regions of the brain (Nunez 1981)." I've run across other such quotes in the literature with similar detail, but those two should get the point across (Gardner's book may be a good starting point if you want to look into the matter further). In cases where we're dealing with multiple activations, thresholds and the like, it might be helpful to make a distinction between "neurons" and constituent "units" that contribute their own activations and other calculations; there's such bewildering complexity to these matters that I don't think anyone can give a definitive answer as to the best way to model such distinctions. I ran into this problem when trying to implement Fukushima's neocognitrons, in which each neuron has its own separate inhibitory and stimulatory inputs; first I tried modeling them as separate neurons, then as a single neuron with multiple outputs, but I'm still not certain what the optimal choice is. There may be solid computational advantages to modeling many of these various enzymes, neurotransmitters and receptors beyond mere biological plausibility; perhaps there's not; the whole topic is still far afield, even for neuroscientists, who still have much to learn about the purposes of such connections. I suspect such questions will become far more complex and pressing in the future once the field of neuroscience advances, enabling neural net researchers to mimic more of these internal calculations. 
For the time being it's safe to equate neurons with "units," but that might not be the case once more sophisticated neural nets begin to make practical use of this dizzying array of computations.
Neuron vs. unit in a neural network
Let me suggest one scenario (the only one I can think of) where it might be useful to distinguish between "units" (or some similarly generic term) and "neurons." Biologically, a neuron is easy to iden
Neuron vs. unit in a neural network Let me suggest one scenario (the only one I can think of) where it might be useful to distinguish between "units" (or some similarly generic term) and "neurons." Biologically, a neuron is easy to identify, because it represents a single cell. In terms of neural nets, a neuron or "unit" has typically represented a single object, usually with one activation value, plus an additional threshold or separate input and output values in some cases. Problems arise in distinguishing between a neuron and a "unit" when we take into account the fact that the inputs, outputs, activations and thresholds of biological neurons are often mediated by multiple neurotransmitters and specific subsets of connections on the dendrites - many of which can be modeled as separate units. Then the line between "neuron" and "unit" blurs quickly. As William F. Allman puts it in pp. 65-66, Apprentices of Wonder: Inside the Neural Network Revolution (1989, Bantam Books: New York): "An axon may release various amounts of transmitter; a receiving dendrite might have varying amounts of receptor; the transmitter itself may have different checmical properties and react at different rates. And the whole process may be mitigated by the action of various enzymes.” Here's a more thorough treatment from Daniel Gardner (1993, The Neurobiology of Neural Networks, MIT Press: Cambridge, Mass.) (I lost the page number to this, so I can't provide an exact citation): " First, it has become evident that neurons (both in vertebrates and invertebrates) possess rich and complex intrinsic properties. Most neurons have multiple channels to different ionic species, and these channels can be regulated in a wide variety of ways: They can be turned on or off by voltage, molecules, or ions. 
Some of these channels can be active in the absence of external inputs to the cell, and endow it with a variety of dynamic properties, such as the ability to oscillate (Llinas 1988; Selverston 1988; Yamada et al. 1989). Thus, it is not enough to specify the inputs to a neuron to predict its outputs; its internal state will also determine its behavior. As a consequence, neurons may be better represented as nonlinear dynamic systems in their own right. For example, the intrinsic conductances of thalamic neurons can allow them to act as linear input/output devices, relaying information directly to cortex, but when they are hyperpolarized, these conductances cause the neurons to burst, significantly transforming their inputs (Llinas 1988). In terms of the model neurons that have often been used in artificial neural networks, the input/output relationship would need to be represented as a function both of voltage and of time. "Second, the interactions between neurons are complex. The differential distribution of synapses on complex dendritic trees of neurons can significantly affect the nature and intensity of their inputs to a neuron. In addition, synapses may have multiple time courses (e.g., initial excitation, slower inhibition, and still slower excitation [Getting and Dekin 1985), and connections may be dynamically reconfigured (e.g., by inhibition of specific neurons [Getting and Dekin 1985, or by the actions of neuromodulators [Harris-Warrick and Marder 1991; Marder and Hooper 19851). Receptors controlling the synaptic response may be gated both by the presence of a chemical, such as a neurotransmitter, and by voltage, so that the synaptic connections between neurons can be affected by their own activity and by the activity of neurons impinging on them (discussions of these and other complexities in synaptic interactions are found in chapters 2, 3, and 4). 
Influences may occur over a variety of spatial and temporal scales: A neuromodulator which is only slowly broken down may affect a very large number of neurons in its vicinity over an appreciable period of time as it diffuses away from its point of release. Furthermore, such compounds are likely to selectively activate those subgroups of neurons that have a receptor for that substance. Neuromodulators may also have subtle but profound effects on the intrinsic properties of neurons, activating or inactivating intrinsic currents and thus changing their "electrical personality." Field potentials may alter the excitability of neurons in different regions of the brain (Nunez 1981)." I've run across other such quotes in the literature with similar detail, but those two should get the point across (Gardner's book may be a good starting point if you want to look into the matter further). In cases where we're dealing with multiple activations, thresholds and the like, it might be helpful to make a distinction between "neurons" and constituent "units" that contribute their own activations and other calculations; there's such bewildering complexity to these matters that I don't think anyone can give a definitive answer as to the best way to model such distinctions. I ran into this problem when trying to implement Fukushima's neocognitrons, in which each neuron has its own separate inhibitory and stimulatory inputs; first I tried modeling them as separate neurons, then as a single neuron with multiple outputs, but I'm still not certain what the optimal choice is. There may be solid computational advantages to modeling many of these various enzymes, neurotransmitters and receptors beyond mere biological plausibility; perhaps there's not; the whole topic is still far afield, even for neuroscientists, who still have much to learn about the purposes of such connections. 
I suspect such questions will become far more complex and pressing in the future once the field of neuroscience advances, enabling neural net researchers to mimic more of these internal calculations. For the time being it's safe to equate neurons with "units," but that might not be the case once more sophisticated neural nets begin to make practical use of this dizzying array of computations.
33,217
Neuron vs. unit in a neural network
In the context of machine learning, is there any difference between the terms unit and neuron? They are the same, often called a neural unit. Neurons in an ANN are derived from the McCulloch-Pitts neuron (MCP neuron), and an MCP neuron is a highly simplified model of a neuron in the human brain. In 1943 Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, published "A logical calculus of the ideas immanent in nervous activity" in the Bulletin of Mathematical Biophysics 5:115-133. In this paper McCulloch and Pitts tried to understand how the brain could produce highly complex patterns by using many basic cells that are connected together. These basic brain cells are called neurons, and McCulloch and Pitts gave a highly simplified model of a neuron in their paper. The McCulloch and Pitts model of a neuron, which we will call an MCP neuron for short, has made an important contribution to the development of artificial neural networks -- which model key features of biological neurons. The original MCP neurons had limitations. Additional features were added which allowed them to "learn." The next major development in neural networks was the concept of a perceptron, which was introduced by Frank Rosenblatt in 1958. Essentially the perceptron is an MCP neuron where the inputs are first passed through some "preprocessors," which are called association units. These association units detect the presence of certain specific features in the inputs. In fact, as the name suggests, a perceptron was intended to be a pattern recognition device, and the association units correspond to feature or pattern detectors.
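The MCP neuron described above reduces to a thresholded weighted sum. A minimal sketch (my own illustration, not code from the 1943 paper; the function names are mine):

```python
def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of the binary inputs meets the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# An AND gate as an MCP neuron: both inputs must be active to reach the threshold.
def and_gate(a, b):
    return mcp_neuron([a, b], [1, 1], threshold=2)

assert [and_gate(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
```

A perceptron differs only in that the inputs are first transformed by the association units and the weights are learned rather than fixed.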
33,218
Evaluating probabilistic forecasts of K-most-likely events from an arbitrarily large event space
I admire your commitment to world-building research for that dystopian novel you've been working on! Here is a possible argument that this problem is underdetermined without additional assumptions. It seems (I lack definite proof) that we need to know at least the overall population size, and presumably some other factors as well. Consider a likelihood score. Assume murders are committed independently randomly with some murderousness probability $\theta_i$ by each member of the population $i$ (probably not true but let's run with it). The probability space is the powerset $\Omega_n = \mathcal P(\{0, ..., n-1\})$ for population size $n$. Then outcome $X$ happens with probability $$P(X|\theta) = \underset{i<n}\prod \theta_i^{i \in X}(1-\theta_i)^{i \notin X}$$ Then, as alluded to in your remarks, for a complete prediction $\hat\theta$ of murderousness in the population, we could appropriately score the prediction, for example with the likelihood $$\mathcal L(\theta|X) = P(X|\theta)$$ An alternative could additionally incorporate some Bayesian prior and instead score the a-posteriori probability/credence of a particular prediction. (An appropriate choice would be a product of independent Beta distributions, one for each member of the population, which is then conjugate to the set of independent Bernoulli samples of each person's murdership.) But for a truncated prediction $\hat\theta_k$ of top-k-murderousness, the likelihood is undefined. For example the prediction $\hat\theta_3 = (0:0.3, 1:0.2, 2:0.1)$ might correspond to the 'full' parameterisation $\hat\theta^\star = \hat\theta_3 + (3:0.1, ..., 99:0.1)$ or to $\hat\theta^\star = \hat\theta_3 + (3:0.001, ..., 99:0.001)$, each of which assigns very different probabilities to any outcome in $\Omega_{100}$ and consequently has a very different likelihood or a-posteriori credibility.
I'm not completely clear from your question if the full outcome $X$ is observed, or only the truncated event $X_k$ consisting of the murdership of the named top-k-murderous members of the population. Notice that, if we do make a particular choice of extrapolation from $\hat\theta_k$ to the full $\hat\theta^\star$, a truncated observation $X_k$ which witnesses only those individuals predicted in $\hat\theta_k$ is a well-defined event over the probability space and thus has a well-defined probability, allowing a likelihood score to be derived. But it suffers from the problem you identified for Brier score, where the statistician can control the censoring of the observations to avoid the first desideratum of naming only the most credible murderers. If instead we have access to $X$, the full observation of murders committed, the likelihood or a-posteriori credibility of an extrapolated $\hat\theta^\star$ appears to me to be both defined and well-incentivised. What remains with this picture is how to sensibly extrapolate from a truncated prediction $\hat\theta_k$ to a full prediction $\theta^\star$. A computationally tractable approach would be to have the statistician commit to a population size $n$ and a uniform baseline murderousness $p$ for the rest of the population not identified in $\hat\theta_k$, producing a Binomial 'rest-of-population' murder-count distribution. For suitable $n$ and $p$ the 'rest-of-population' likelihood factor could be even more tractably approximated as a Poisson and you could simplify and have her propose such a Poisson parameter $\lambda$. (This Poisson case is very plausible in the motivating scenario of populations and murders, but may not transfer to other cases.) 
Letting $r = |X \setminus dom(\hat\theta_k)|$ be the number of 'surprise murders':

Binomial case $$ \mathcal L(\hat\theta_k, n, p|X) = \binom {n-k} r p^r (1-p)^{n-k-r} \underset{i \in dom(\hat\theta_k)}\prod \hat\theta_k[i]^{i \in X}(1 - \hat\theta_k[i])^{i \notin X} $$

Poisson case $$ \mathcal L(\hat\theta_k, \lambda|X) = \frac {\lambda^r e^{-\lambda}} {r!} \underset{i \in dom(\hat\theta_k)}\prod \hat\theta_k[i]^{i \in X}(1 - \hat\theta_k[i])^{i \notin X} $$

A 'generous' and similarly tractable approach might be to give 'benefit of the doubt' and extrapolate to $\underset {i \in dom(\hat\theta_k)} {min} \hat\theta_k[i]$ for anyone who did in fact murder, and $0$ for anyone who did in fact not murder. This should still incentivise nominating the most plausible murderers, and giving reasonable estimates, but it might introduce some bias.

Generous case $$ \mathcal L^\star(\hat\theta_k|X) = \left(\underset {i \in dom(\hat\theta_k)} {min} \hat\theta_k[i]\right)^r \underset{i \in dom(\hat\theta_k)}\prod \hat\theta_k[i]^{i \in X}(1 - \hat\theta_k[i])^{i \notin X} $$
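The three scores are straightforward to compute. Below is my own Python sketch (the dict encoding of $\hat\theta_k$, the function names, and the example numbers are my illustration, not from the question):

```python
import math

def partial_lik(theta_k, murderers):
    """Bernoulli likelihood factor from the k named individuals."""
    lik = 1.0
    for i, p in theta_k.items():
        lik *= p if i in murderers else (1.0 - p)
    return lik

def binomial_score(theta_k, murderers, n, p):
    """Extrapolate with uniform murderousness p for the n-k unnamed people."""
    k = len(theta_k)
    r = len(murderers - set(theta_k))            # 'surprise murders'
    rest = math.comb(n - k, r) * p**r * (1 - p)**(n - k - r)
    return rest * partial_lik(theta_k, murderers)

def poisson_score(theta_k, murderers, lam):
    """Approximate the rest-of-population murder count as Poisson(lam)."""
    r = len(murderers - set(theta_k))
    return lam**r * math.exp(-lam) / math.factorial(r) * partial_lik(theta_k, murderers)

def generous_score(theta_k, murderers):
    """Surprise murderers get the smallest named probability, raised to the r-th power."""
    r = len(murderers - set(theta_k))
    return min(theta_k.values())**r * partial_lik(theta_k, murderers)

# Top-3 prediction; persons 0 and 5 actually murdered (person 5 is a surprise).
theta_3 = {0: 0.3, 1: 0.2, 2: 0.1}
murderers = {0, 5}
```

With these example numbers, `generous_score(theta_3, murderers)` is $0.1^1 \cdot 0.3 \cdot 0.8 \cdot 0.9 = 0.0216$.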
33,219
Evaluating probabilistic forecasts of K-most-likely events from an arbitrarily large event space
Indeed, there are many other ideas! If you call $p$ the estimated probability and $y$ the output, the following metrics are widely used: MAE = $\sum_i|p_i-y_i|$, MSE = $\sum_i(p_i-y_i)^2$, logarithmic loss = $-\sum_i\left[y_i\log p_i+(1-y_i)\log (1-p_i)\right]$. In these metrics, a misclassification is penalized more heavily if the associated probability is high. Note that these approaches will be affected if you replace $p$ by $p^2$. Besides, it is hard to say if a logarithmic loss of 0.34 is high or low - whereas an error rate of 95% is self-explanatory. A way to circumvent this is to use the AUC, which lies between 0 and 1 and focuses on the rank of the proposed probabilities. AUC $=\frac{S_0-n_0(n_0+1)/2}{n_0n_1}$, where $n_0$ and $n_1$ are the number of positive and negative examples, and $S_0$ is the sum of the ranks of the positive examples. More details about AUC can be found here: http://home.cse.ust.hk/~qyang/Teaching/537/Papers/AUC-evaluation.pdf
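As a concrete sketch of these four metrics (my own plain-Python illustration; the AUC version uses the rank formula above and ignores ties for simplicity):

```python
import math

def mae(p, y):
    return sum(abs(pi - yi) for pi, yi in zip(p, y))

def mse(p, y):
    return sum((pi - yi) ** 2 for pi, yi in zip(p, y))

def log_loss(p, y):
    # Confident misclassifications (high p, y = 0 or low p, y = 1) are penalized heavily.
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for pi, yi in zip(p, y))

def auc(p, y):
    # Rank-based (Mann-Whitney) formula: 1-based ranks of probabilities in ascending order.
    order = sorted(range(len(p)), key=lambda i: p[i])
    rank = {idx: r + 1 for r, idx in enumerate(order)}
    n0 = sum(y)                      # positives
    n1 = len(y) - n0                 # negatives
    s0 = sum(rank[i] for i, yi in enumerate(y) if yi == 1)
    return (s0 - n0 * (n0 + 1) / 2) / (n0 * n1)
```

For example, `auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])` returns 0.75, because one of the four positive/negative pairs is ranked in the wrong order.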
33,220
Evaluating probabilistic forecasts of K-most-likely events from an arbitrarily large event space
There has, as you suggest, been quite a lot of discussion in various communities surrounding this point. The crux of the problem is that when you have a relatively rare event (the probability that even the most murderous person commits a murder on a given day has to be quite small - naively, for someone who commits murders on 36 days a year, it would be only ~.1 for a given day), evaluating the value of that prediction presents significant challenges. A correct prediction that the probability is .1 still only results in a 10 percent chance of the event occurring. Luckily, there are many branches of inquiry which concentrate on rare events. Meteorology, for example, consists almost entirely of predicting relatively rare events. In this paper (1), Marzban evaluates several metrics for rare-event forecasts in terms of their propensity to induce the forecaster to over- or underestimate the likelihood of a given rare event. While there are many specific model variations that are discussed in more detail in the paper, the general approaches are: some combination of the false alarm rate (events predicted that did not occur) and the probability of detection (events that occurred and were correctly predicted); the Critical Success Index; skill scores; and custom angle metrics (Marzban 754-55). The general result is that some metrics encourage over-predicting the rare event in question and some encourage under-predicting it. In this case, the police chief would want to choose the metric used to evaluate the statistician's model based on the way in which the predictions would be used. If it was, as you mention, going to be used as in Minority Report to preemptively imprison people, presumably we would want to encourage the statistician to under-predict the rare events (or, in a more authoritarian state, over-predict them into near-oblivion).
(1) Caren Marzban, 1998: Scalar Measures of Performance in Rare-Event Situations. Wea. Forecasting, 13, 753–763.
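For illustration, the first two families of scores reduce to simple contingency-table counts. A minimal sketch (my own, using the common meteorological definitions rather than code from Marzban's paper):

```python
def rare_event_scores(forecast, observed):
    """POD, FAR, and CSI from parallel 0/1 lists of forecasts and observations."""
    hits         = sum(1 for f, o in zip(forecast, observed) if f == 1 and o == 1)
    misses       = sum(1 for f, o in zip(forecast, observed) if f == 0 and o == 1)
    false_alarms = sum(1 for f, o in zip(forecast, observed) if f == 1 and o == 0)
    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false alarm ratio
    csi = hits / (hits + misses + false_alarms)   # critical success index
    return pod, far, csi
```

Note that CSI ignores correct negatives entirely, which is exactly why it is favoured for rare events: a model that always predicts "no murder" scores 0 rather than looking nearly perfect.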
33,221
Evaluating probabilistic forecasts of K-most-likely events from an arbitrarily large event space
"You acted unwisely," I cried, "as you see     By the outcome." He calmly eyed me: "When choosing the course of my action," said he,    "I had not the outcome to guide me." (Ambrose Bierce: A Lacking Factor, from The Scrap Heap) Great way to pose the question! I'd like to offer a non-answer which presents the forecast problem from a different point of view. As I see it, your way of presenting the problem portrays the statistician's goal as trying to build a "model" that's "close to the truth". Then one can speak of performance and pay. This is a widespread way of viewing probability theory, but not the only one. I belong to the group of people (Jaynes, Jeffreys, Savage, de Finetti, and many others) who find it circular and ultimately hopeless. The problem is that we cannot validate a forecast by comparing it to the "truth" – because we don't have the truth. If we had it, there would be no need of making forecasts. On the other hand, once we acquire knowledge of the truth, the forecast ceases to be important. The point of a forecast is to try to guess the truth as best as possible given all the information we have (and can gather). It may well happen that all the info we have actually misses some crucial element, and therefore our forecast is grossly off the mark. Yet, that's the best we could do. It was impossible to use information that we didn't have – if we had had it, we would have used it! And maybe we didn't even know that some crucial info was missing. In fact, this is the whole crux of guessing and making forecasts – do the best with the info you have. We can't do our best with the info we don't have. From this point of view, performance should be judged based on whether the information we actually had was optimally used. It may happen that the forecast was optimal, and yet quite off the mark; or vice versa, the forecast was poorly made, and yet turns out to be close to the truth. Let me try to explain with an example.
Take a person $P$ in the population of your scenario. Statistician $A$ examines the past history of $P$ and of $P$'s family (including, say, communications and interactions with other people), their general and mental health histories, and similar information. From this analysis, it appears that $P$ is a very peaceful, altruistic, compassionate person, who would rather die than harm another person or let another person come to deadly harm, and who is generally loved by family, friends, and acquaintances. Statistician $A$ therefore gives an extremely low – but not zero – probability that $P$ would commit murder the next day. Now comes statistician $B$. This statistician has exactly the same information as statistician $A$ – no more, no less. Statistician $B$ gives 100% probability that $P$ will commit murder the next day. We can imagine that statistician $A$ asks $B$ why such a forecast, and statistician $B$ replies: "Because I hate $P$ – that person is meek and a good-doer; I can't stand people like that". I imagine you'll agree with me that $B$'s reasoning and forecast are completely illogical and unreasonable. The reasoning and forecast of $A$, on the other hand, seem well-founded and reasonable. Or are they? The next day comes. Arrived at work after the usual morning walk, person $P$ murders a colleague with a pair of scissors, then commits suicide. Now, statistician $B$'s forecast correctly "predicted" the murder. We could say that statistician $A$'s forecast was instead quite off the mark. Should a score function then reward $B$ and penalize $A$? I want to remind you that $A$ and $B$ made their forecasts based on exactly the same information, summarized above. We may wonder why $P$ committed murder – and the fact that we wonder confirms, in my opinion, the view that the murder was unexpected and $A$'s forecast was the most reasonable. Here's an explanation.
While person $P$ was walking on the street towards work in the morning, someone bumped into $P$ and purposely injected a hallucinogen or some kind of neurology-altering drug. $P$ did not notice, and maybe just checked for the wallet, in case the stranger was a pickpocket. The stranger was actually an emissary for a secret lab that develops neurological weapons for a foreign country, and was in the nation just for the day, with the explicit purpose of testing the drug on a random citizen. The drug was designed to cause violent behaviour followed by self-violent behaviour. I know this is a silly explanation, but you can find an alternative one of your own, maybe involving unsuspected congenital neurological problems or whatnot. The point is that unexpected events happen, and sometimes more than one in a row, as I'm sure you've experienced yourself in your life. Yet, according to the "truth-based" reward/penalty point of view, such rare events will negatively affect a statistician who actually made a completely reasonable forecast. And I believe they shouldn't. (We cannot even exclude that many such events could happen.) So my answer is that the reward/penalty scheme should be based on whether the statistician makes the most reasonable forecast given all the gatherable information, irrespective of whether the event happens or not. (Of course it's very difficult to come up with a score for this.) Jaynes discusses this throughout his book, and in chapter 13 he quotes the passage by Bierce I put at the beginning, which makes the point brilliantly.
Evaluating probabilistic forecasts of K-most-likely events from an arbitrarily large event space
"You acted unwisely," I cried, "as you see     By the outcome." He calmly eyed me: "When choosing the course of my action," said he,    "I had not the outcome to guide me." (Ambrose Bierce: A Lacking
Evaluating probabilistic forecasts of K-most-likely events from an arbitrarily large event space "You acted unwisely," I cried, "as you see     By the outcome." He calmly eyed me: "When choosing the course of my action," said he,    "I had not the outcome to guide me." (Ambrose Bierce: A Lacking Factor, from The Scrap Heap) Great way to pose the question! I'd like to offer a non-answer which presents the forecast problem from a different point of view. As I see it, your way of presenting the problem portrays the statistician's goal as trying to build a "model" that's "close to the truth". Then one can speak of performance and pay. This is a widespread way of viewing probability theory, but not the only one. I belong to those group of people (Jaynes, Jeffreys, Savage, de Finetti, and many others) who find it circular and ultimately hopeless. The problem is that we cannot validate a forecast by comparing it to the "truth" – because we don't have the truth. If we had it, there would be no need of making forecasts. On the other hand, once we acquire knowledge of the truth, the forecast ceases to be important. The point of a forecast is to try to guess the truth as best as possible given all information we have (and can gather). It may well happen that all info we have actually misses some crucial element, and therefore our forecast is grossly off the mark. Yet, that's the best we could do. It was impossible to use information that we didn't have – if we had had it, we would have used it! And maybe we didn't even know that some crucial info was missing. In fact, this is the whole crux of guessing and making forecasts – do the best with the info you have. We can't do our best with the info we don't have. From this point of view, performance should be judged based on whether the information we actually had was optimally used. 
It may happen that the forecast was optimal, and yet quite off the mark; or vice versa, the forecast was poorly made, and yet turns out to be close to the truth. Let me try to explain with an example. Take a person $P$ in the population of your scenario. Statistician $A$ examines the past history of $P$ and of $P$'s family (including, say, communications and interactions with other people) their general and mental health histories, and similar information. From this analysis, it appears that $P$ is a very pacific, altruistic, compassionate person, who would rather die than harm another person or let another person come to deadly harm, and who is generally loved by family, friends, acquaintances. Statistician $A$ therefore gives an extremely low – but not zero – probability that $P$ would commit murder the next day. Now comes statistician $B$. This statistician has exactly the same information as statistician $A$ – no more, no less. Statistician $B$ gives 100% probability that $P$ will commit murder the next day. We can imagine that statistician $A$ asks $B$ why such a forecast, and statistician $B$ replies: "Because I hate $P$ – that person is meek and a good-doer; I can't stand people like that". I imagine you'll agree with me that $B$'s reasoning and forecast are completely illogical and unreasonable. The reasoning and forecast of $A$, on the other hand, seem well-founded and reasonable. Or? The next day comes. Arrived at work after the usual morning walk, person $P$ murders a colleague with a pair of scissors, then commits suicide. Now, the statistician's $B$ forecast correctly "predicted" the murder. We could say that statistician $A$'s forecast was instead quite off the mark. Should then a score function reward $B$ and penalize $A$? I want to remind you that $A$ and $B$ made their forecasts based on exactly the same information, summarized above. 
We may wonder why $P$ committed murder – and the fact that we wonder confirms, in my opinion, the view that the murder was unexpected and $A$'s foracast was the most reasonable. Here's an explanation. While person $P$ was walking on the street towards work in the morning, someone bumped into $P$ and purposely injected an allucinogen or some kind of neurology-altering drug. $P$ did not notice, maybe just checked for the wallet, in case the stranger was a pick-pocket. The stranger was actually an emissary for a secret lab that develops neurological weapons for a foreign country, and was in the nation just for the day, with the explicit purpose of testing the drug on a random citizen. The drug was designed to cause violent behaviour followed by self-violent behaviour. I know this is a silly explanation, but you can find an alternative one of your own, maybe involving unsuspected congenital neurological problems or whatnot. The point is that unexpected events happen, and sometimes more than one in a row, as I'm sure you've experienced yourself in your life. Yet, according to the "truth-based" reward/penalty point of view, such rare events will affect negatively a statistician who actually made a completely reasonable forecast. And I believe they shouldn't. (We cannot even exclude that many such events could happen.) So my answer is that such the reward/penalty scheme should be based on whether the statistician does the most reasonable forecast given all the gatherable information, irrespective of whether the event happens or not. (Of course it's very difficult to come up with a score for this.) Jaynes discusses this throughout his book, and in chapter 13 he quotes the passage by Bierce I put at the beginning, which makes the point brilliantly.
33,222
Evaluating probabilistic forecasts of K-most-likely events from an arbitrarily large event space
Before even getting to a scoring mechanism, you are going to need to deal with a fundamental problem in this setup, which is that you only get data from people who the police don't arrest for pre-crime in response to the statistician's predictions. Presumably the point of the minority-report style list is that the police will arrest the people on it so that they don't commit murder. (At least, that is what happens in the movie.) If they do this, they will never find out whether the person would or would not have committed a murder that day, so there is no data to begin with. If you want to get around this, you are going to have to do one of two things. Either the police don't arrest some (or all) of the people on the statistician's list, to see what happens, or the list needs to be longer than the number of people arrested. You are going to need to specify this in order to be clear on the actual data that is available for the scoring mechanism.
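The data problem can be seen in a toy simulation (a sketch; all numbers are made up): if the police arrest everyone on the list, the set of observed outcomes for listed people is empty, so there is nothing to score.

```python
import random

random.seed(0)

# Hypothetical population: each person has some latent murder propensity.
population = [{"id": i, "p_murder": random.random() * 0.01} for i in range(1000)]

# The statistician lists the K most likely; the police arrest all of them.
K = 100
listed = sorted(population, key=lambda r: r["p_murder"], reverse=True)[:K]
arrested = {r["id"] for r in listed}

# Outcomes are only generated (and hence observed) for people left free.
observed = {
    r["id"]: random.random() < r["p_murder"]
    for r in population if r["id"] not in arrested
}

# No outcome data exists for anyone on the list -> nothing to score.
outcomes_for_listed = [observed[r["id"]] for r in listed if r["id"] in observed]
print(len(outcomes_for_listed))  # 0
```

Either some listed people must be left free (so `observed` overlaps the list), or the list must exceed the arrests; otherwise the scoring question is moot.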
33,223
When do improper linear models get robustly beautiful?
In effect, it seems to me this is an assortment of assumed covariance structures. In other words, this is a type of Bayesian prior modelling. This gains in robustness over an ordinary MLR procedure because the number of parameters ($\downarrow$df) is reduced, and it introduces inaccuracy because of augmented omitted-variable bias, OVB. Because of the OVB, the slope is flattened, $|\hat\beta|<|\beta|$, and the coefficient of determination is reduced, $\hat{R}^2<R^2$. My personal experience is that the superior alternative to the Bayesian approach is to use better modelling: transform parameters, use other norms, and/or use nonlinear methods. That is, once the physics of the problem and the methods are properly explored and co-ordinated, the F statistics, coefficient of determination, etc. improve rather than degrade.
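The trade-off can be seen in a small simulation (a sketch with made-up data): an "improper" unit-weight model necessarily loses in-sample $R^2$ to OLS, which is the reduced-df/attenuation effect described above, while with few training rows the out-of-sample gap shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, p = 20, 10000, 5

def make_data(n):
    # Positively correlated predictors with equal true weights, plus noise.
    common = rng.normal(size=(n, 1))
    X = common + rng.normal(size=(n, p))
    y = X @ np.full(p, 1.0) + rng.normal(scale=3.0, size=n)
    return X, y

Xtr, ytr = make_data(n_train)
Xte, yte = make_data(n_test)

def r2(score, y):
    """Squared correlation between a linear score and the response."""
    return np.corrcoef(score, y)[0, 1] ** 2

# Proper model: OLS with an intercept.
A = np.column_stack([np.ones(n_train), Xtr])
beta = np.linalg.lstsq(A, ytr, rcond=None)[0]

# Improper model: unit weights on standardized predictors.
mu, sd = Xtr.mean(0), Xtr.std(0)
unit_tr = ((Xtr - mu) / sd).sum(1)
unit_te = ((Xte - mu) / sd).sum(1)

print("in-sample:  OLS", r2(A @ beta, ytr), " unit", r2(unit_tr, ytr))
print("out-sample: OLS", r2(np.column_stack([np.ones(n_test), Xte]) @ beta, yte),
      " unit", r2(unit_te, yte))
```

In-sample, OLS can never lose (it maximizes the squared correlation over linear scores); out-of-sample, with only 20 training rows, the unit-weight model is often competitive – Dawes' "robust beauty".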
33,224
How to compute and interpret the confidence interval on a QQ plot [duplicate]
I'm trying to figure this out myself, but from my stats training, if your QQ-plot points fall within the confidence-interval bounds, then the data are fairly Normal (this is what you want for linear models, such as linear regression or ANOVA). $H_0:$ Data are Normal vs $H_A:$ Data are not Normal. You want to fail to reject the null hypothesis. However, if many of your QQ-plot points fall outside the CI, then your data are not Normal, and you may want to consider nonparametric methods. The plot above looks fairly Normal; there are problems at the tails, so you may want to look out for outliers. Real-world data are never perfect. If I come across a solution for methods of CI construction, I'll edit this post.
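One common way the CI bands are constructed (a sketch of one method, not necessarily what any particular software does): the $i$-th order statistic of a Uniform(0,1) sample follows a Beta$(i, n-i+1)$ distribution, so pointwise 95% limits for each plotting position can be pushed through the normal quantile function. This assumes standardized data; in practice you standardize first or fit location and scale.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100
x = np.sort(rng.normal(size=n))              # ordered sample

i = np.arange(1, n + 1)
theo = stats.norm.ppf((i - 0.5) / n)         # theoretical quantiles

# Pointwise 95% band: Beta(i, n-i+1) quantiles mapped through norm.ppf
lo = stats.norm.ppf(stats.beta.ppf(0.025, i, n - i + 1))
hi = stats.norm.ppf(stats.beta.ppf(0.975, i, n - i + 1))

inside = np.mean((x >= lo) & (x <= hi))
print(f"fraction of points inside the band: {inside:.2f}")
```

Note these bands are pointwise: even for truly Normal data, each point individually has 5% probability of straying outside, so a few excursions (especially at the tails, where the bands are widest) are not by themselves evidence against Normality. Simultaneous bands (e.g. Kolmogorov–Smirnov based) are wider.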
33,225
How to handle multiple measurements per participant, with categorical data?
Context of my answer

I self-studied this question yesterday (the part concerning the possibility to use mixed models here). I shamelessly dump my fresh new understanding of this approach for 2x2 tables and wait for more advanced peers to correct my imprecisions or misunderstandings. My answer will be lengthy and overly didactic (at least trying to be didactic) in order to help but also to expose my own flaws. First of all, I must say that I shared the confusion you stated here: "I've read about multi-level models, which sound like they are intended to handle this situation when the underlying variables are continuous (e.g., real numbers) and when a linear model is appropriate." I studied all the examples from this paper: random-effects modelling of categorical response data. The title itself contradicts this thought. For our problem of 2x2 tables with repeated measurement, the example in section 3.6 is germane to our discussion. This is for reference only, as my goal is to explain it. I may edit out this section in the future if this context is not necessary anymore.

The model

General idea. The first thing to understand is that the random effect is modelled in much the same way as in regression on a continuous variable. Indeed, a regression on a categorical variable is nothing other than a linear regression on the logit (or another link function, like the probit) of the probability associated with the different levels of this categorical variable. If $\pi_i$ is the probability of answering yes at question $i$, then $logit(\pi_{i})= FixedEffects_i + RandomEffect_i$. This model is linear, and the random effects can be expressed in a classical numerical way, for example $$RandomEffect_i\sim N(0,\sigma)$$ In this problem, the random effect represents the subject-related variation for the same answer.

Our case. For our problem, we want to model $\pi_{ijv}$, the probability that subject $i$ answers "yes" to variable $v$ at interview time $j$.
The logit of this variable is modeled as a combination of fixed effects and subject-related random effects: $$logit(\pi_{ijv})=\beta_{jv}+u_{iv}$$

About the fixed effects. The fixed effects are related to the probability of answering "yes" at time $j$ to question $v$. According to your scientific goal, you can use a likelihood-ratio test to check whether the equality of certain fixed effects must be rejected. For example, the model where $\beta_{1v}=\beta_{2v}=\beta_{3v}=\cdots$ means that there is no tendency for the answers to change across interview times. If you assume that this global tendency does not exist, which seems to be the case for your study, you can drop the $j$ straightaway in your model: $\beta_{jv}$ becomes $\beta_{v}$. By analogy, you can test by a likelihood ratio whether the equality $\beta_{1}=\beta_{2}$ must be rejected.

About random effects. I know it's possible to model random effects by something other than normal errors, but I prefer to answer on the basis of normal random effects for the sake of simplicity. The random effects can be modelled in different ways. With the notation $u_{iv}$, I assumed that a random effect is drawn from its distribution each time a subject answers a question. This is the most specific degree of variation possible. If I had used $u_{i}$ instead, it would have meant that a random effect is drawn for each subject $i$ and is the same for each question $v$ he has to answer (some subjects would then have a tendency to answer yes more often). You have to make a choice. If I understood well, you can also have both random effects: $u_{i}\sim N(0,\sigma_1)$, which is subject-drawn, and $u_{iv}\sim N(0,\sigma_2)$, which is subject+answer-drawn. I think your choice depends on the details of your case. But if I understood well, the risk of overfitting by adding random effects is not big, so when in doubt, one can include many levels.
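The effect of a subject-level random intercept can be seen in a quick simulation (a sketch in Python with made-up parameters, rather than R): a shared $u_i$ makes a subject's two answers positively correlated, which is exactly the dependence this model is meant to capture.

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects, sigma, beta = 5000, 1.5, 0.0

u = rng.normal(0.0, sigma, size=n_subjects)   # u_i ~ N(0, sigma), one per subject
p = 1.0 / (1.0 + np.exp(-(beta + u)))         # logit(pi_i) = beta + u_i

# Each subject answers the same yes/no question twice, independently given u_i.
answer1 = rng.random(n_subjects) < p
answer2 = rng.random(n_subjects) < p

# Marginally, the two answers are correlated because they share u_i.
phi = np.corrcoef(answer1, answer2)[0, 1]
print(f"between-answer correlation: {phi:.2f}")
```

With $\sigma = 0$ the two columns would be independent; the larger $\sigma$, the stronger the within-subject dependence, which is what the likelihood-ratio test on the random-effect structure probes.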
A proposition

I realize how weird my answer is; this is just an embarrassing rambling, certainly more helpful to me than to others. Maybe I'll edit out 90% of it. I am not more confident, but more disposed to get to the point. I would suggest comparing the model with nested random effects ($u_{i}+u_{iv}$) against the model with only the combined random effect ($u_{iv}$). The idea is that the $u_i$ term is solely responsible for the dependency between answers. Rejecting independence is rejecting the presence of $u_{i}$. Using glmer to test this would give something like:

model1 <- glmer(yes ~ Question + (1 | Subject/Question), data = df, family = binomial)
model2 <- glmer(yes ~ Question + (1 | Subject:Question), data = df, family = binomial)
anova(model1, model2)

Question is a dummy variable indicating whether question 1 or 2 is asked. If I understood well, (1 | Subject/Question) corresponds to the nested structure $u_{i}+u_{iv}$, and (1 | Subject:Question) is just the combination $u_{iv}$. anova computes a likelihood-ratio test between the two models.
33,226
Schuette–Nesbitt formula
I have found an example in the following book, and my answer is a modified version of Sec. 8.4 and 8.6 of the book, in order to make it concise and clear. Gerber, Hans U. "Life insurance." Life Insurance Mathematics. Springer Berlin Heidelberg, 1990.

$B_1,\cdots, B_m$ are arbitrary events, and $N$ is the random variable counting how many of them occur, ranging over $\{0, 1, \ldots , m\}$. For arbitrary real coefficients $c_0,\cdots, c_m$, the Schuette–Nesbitt formula is the following operator identity between the shift operator $E:c_n\mapsto c_{n+1}$ and the difference operator $\Delta:c_n\mapsto c_{n+1}-c_{n}$; by definition they are related via $E=id+\Delta$. The SN formula is $$\sum_{n=0}^{m}c_n\cdot Pr(N=n)=\sum_{k=0}^{m}[\Delta^{k}c_0]S_k$$ where $S_k=\sum_{j_1<\cdots<j_k}Pr(B_{j_1}\cap\cdots \cap B_{j_k})$ is the $k$-th symmetric sum over these $m$ events, and $S_0=1$. Note that $[\Delta^{k}c_0]$ means the difference operator applied $k$ times, evaluated at $c_0$. For example, $[\Delta^{2}c_0]=\Delta(c_1-c_0)=(c_2-c_1)-(c_1-c_0)=c_2-2c_1+c_0$. Both operators are linear and hence have representations as matrices acting on the column vector $(c_0,c_1,c_2,\ldots)^T$; therefore they can be extended to polynomial rings and modules (since these two objects have "bases", loosely speaking): $$E=\left(\begin{array}{cccc} 0 & 1 & 0 & \cdots\\ 0 & 0 & 1 & \cdots\\ 0 & 0 & 0 & \cdots\\ \vdots & & & \ddots \end{array}\right) \qquad \Delta=\left(\begin{array}{cccc} -1 & 1 & 0 & \cdots\\ 0 & -1 & 1 & \cdots\\ 0 & 0 & -1 & \cdots\\ \vdots & & & \ddots \end{array}\right)$$ The proof makes use of the indicator trick and the expansion of the operator polynomial $\prod_{j=1}^{m}(1+I_{B_j}\Delta)$, together with the facts that $I_A\cdot I_B=I_{A\cap B}$ and that $\Delta$ commutes with indicators; I refer you to Gerber's book.
If we choose $c_0=0$ and $c_1=c_2=\cdots=c_m=1$, then the SN formula becomes the inclusion–exclusion principle: $$\sum_{n=1}^{m} Pr(N=n)=\sum_{k=0}^{m}[\Delta^{k}c_0]S_k=c_0 S_0+(c_1-c_0)S_1+(c_2-2c_1+c_0)S_2+\cdots =S_1-S_2+S_3-\cdots+(-1)^{m+1}S_m$$ $$=[Pr(B_1)+\cdots+Pr(B_m)]-[Pr(B_1\cap B_2)+\cdots+Pr(B_{m-1}\cap B_{m})]+\cdots+(-1)^{m+1}\cdot Pr(B_1\cap\cdots \cap B_m)$$ Waring's theorem gives the probability that exactly $r$ out of the $m$ events $B_1,\cdots, B_m$ occur. Thus it can be derived by specifying $c_r=1$ and all other $c$'s $=0$. The SN formula becomes $$ Pr(N=r)=\sum_{k=0}^{m}[\Delta^{k}c_0]S_k=\sum_{k=r}^{m}[\Delta^{k}c_0]S_k$$ because every term $[\Delta^{k}c_0]=0$ when $k<r$; a change of variable $t=k-r$ then yields Waring's formula. There is an envelope-assignment example in Gerber's book you can look into, but my suggestion is to understand it in terms of operator algebra instead of probability.
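The identity can be checked numerically on a small example (a sketch; the sample space, events, and coefficients are arbitrary made-up choices):

```python
from itertools import combinations

# Sample space: 12 equally likely outcomes; three arbitrary events.
omega = range(12)
B = [set(range(0, 6)), set(range(4, 10)), {1, 3, 5, 7, 9, 11}]
m = len(B)
c = [2.0, -1.0, 0.5, 3.0]                 # arbitrary coefficients c_0..c_m

P = lambda A: len(A) / 12                 # uniform probability measure

# Left-hand side: sum_n c_n * Pr(N = n), with N = number of events occurring
N_counts = [sum(w in Bj for Bj in B) for w in omega]
lhs = sum(c[n] * sum(1 for Nc in N_counts if Nc == n) / 12 for n in range(m + 1))

# S_k = sum over k-subsets of Pr(intersection); S_0 = 1 by convention
def S(k):
    if k == 0:
        return 1.0
    return sum(P(set.intersection(*sub)) for sub in combinations(B, k))

# [Delta^k c_0] via iterated forward differences of the coefficient sequence
def delta_k_c0(k, c):
    d = list(c)
    for _ in range(k):
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]
    return d[0]

rhs = sum(delta_k_c0(k, c) * S(k) for k in range(m + 1))
print(lhs, rhs)   # the two sides agree
```

Setting `c = [0, 1, 1, 1]` in the same code reproduces the inclusion–exclusion value $Pr(N\ge 1)=Pr(B_1\cup B_2\cup B_3)$.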
33,227
Fitting a heteroscedastic generalized linear model for binomial responses
Perhaps what you're looking for is something called double generalized linear models where both mean and dispersion parameter are modeled. There's even an R package dglm designed to fit such models.
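The idea behind a double GLM can be illustrated with a crude two-stage sketch (a simplification of the actual alternating algorithm that dglm uses, here with a Gaussian response and made-up data): fit the mean sub-model first, then regress the log squared residuals on the covariate to model the dispersion.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
x = rng.normal(size=n)

# True model: mean 1 + 2x, log standard deviation 0.5x (heteroscedastic)
y = 1.0 + 2.0 * x + rng.normal(size=n) * np.exp(0.5 * x)

# Stage 1: ordinary least squares for the mean sub-model
A = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(A, y, rcond=None)[0]
resid = y - A @ beta

# Stage 2: dispersion sub-model -- regress log(residual^2) on x;
# its slope estimates 2 * (log-sd slope), here about 1.0
gamma = np.linalg.lstsq(A, np.log(resid ** 2 + 1e-12), rcond=None)[0]
print(beta, gamma)
```

The real dglm algorithm alternates: refit the mean by weighted least squares using the fitted variances, refit the dispersion from the new squared residuals via a gamma GLM, and repeat until convergence.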
33,228
Estimate mass of fruit in a bag from only related totals?
Let's begin by plotting the data and taking a look at it. This is a very limited amount of data, so this is going to be somewhat ad hoc, with plenty of assumptions.

rotten <- c(0,1,1,0,0,0,1,1,1,1,0,0,0)
rotten <- as.factor(rotten)
mass <- c(139.08, 91.48, 74.23, 129.8, 169.22, 123.43, 104.93,
          103.27, 169.01, 83.29, 157.57, 117.72, 128.63)
diam <- c(17.28, 6.57, 7.12, 16.52, 14.58, 6.99, 6.63,
          6.75, 15.38, 7.45, 13.06, 6.61, 7.19)
plot(mass, diam, col = rotten, lwd = 2)
title("Fruits")

So this is the data; red dots represent rotten fruits. You are correct in assuming that there seem to be two kinds of fruits. The assumptions I make are the following:

The diameter splits the fruits into two groups: fruits with a diameter greater than 10 are in one group, the others in the smaller group.
There is only one rotten fruit in the big-fruit group. Let's assume that if a fruit is in the large group, then being rotten does not affect the weight. This is essential, since we only have one data point in that group.
If the fruit is a small fruit, then being rotten affects the mass.
Let's assume that the variables diam and mass are normally distributed.

Because it is given that the sum of the diameters is 64.2 cm, it is most likely that two fruits are large and four are small. Now there are 3 cases for the weight: there are 2, 3 or 4 small fruits rotten (a large fruit being rotten does not affect the mass, by assumption). So now you can get bounds on your mass by calculating these values. We can empirically estimate the probability of the number of small fruits being rotten.
We use the probabilities to weight our estimates of the mass, depending on the number of rotten fruits:

samps <- 100000
stored_vals <- matrix(0, samps, 2)
for (i in 1:samps) {
  numF <- 0  # Number of small rotten
  numR <- 0  # Total number of rotten
  # Pick 4 small fruits
  for (j in 1:4) {
    if (runif(1) < (5/8)) {  # Empirical proportion of small rotten
      numF <- numF + 1
      numR <- numR + 1
    }
  }
  # Pick 2 large fruits
  for (j in 1:2) {
    if (runif(1) < 1/5) {  # Empirical proportion of large rotten
      numR <- numR + 1
    }
  }
  stored_vals[i, ] <- c(numF, numR)
}

# Pick out samples that had 4 rotten
fourRotten <- stored_vals[stored_vals[, 2] == 4, 1]
hist(fourRotten)
table(fourRotten)

# Proportions
props <- table(fourRotten) / length(fourRotten)

massBig <- mean(mass[diam > 10])
massSmRot <- mean(mass[diam < 10 & rotten == 1])
massSmOk <- mean(mass[diam < 10 & rotten == 0])

weights <- 2*massBig + c(2*massSmOk + 2*massSmRot, 1*massSmOk + 3*massSmRot, 4*massSmRot)
Est_Mass <- sum(props * weights)

Giving us a final estimate of 691.5183 g. I think you have to make most of the assumptions I have made to reach a conclusion, but I think it might be possible to do this in a smarter way. Also, I sample empirically to get the probability of the number of rotten small fruits; that is just laziness and it can be done analytically.
33,229
Estimate mass of fruit in a bag from only related totals?
I would propose the following approach:

1. Generate all 6-tuples that satisfy the condition of 4 rotten fruits. There are ${6\choose 4}{7\choose 2}$ of them.
2. Select from the generated tuples only those that satisfy the condition on the diameter.
3. Calculate the average weight of the selected tuples (usual arithmetic average).

All this is manageable by a simple script.
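Such a script might look like this (a sketch in Python; the masses and diameters are taken from the data listed in another answer, and the target diameter sum of 64.2 cm with a ±0.1 cm matching tolerance are assumptions):

```python
from itertools import combinations

# (mass g, diameter cm, rotten) for the 13 sampled fruits
fruits = [(139.08, 17.28, 0), (91.48, 6.57, 1), (74.23, 7.12, 1),
          (129.80, 16.52, 0), (169.22, 14.58, 0), (123.43, 6.99, 0),
          (104.93, 6.63, 1), (103.27, 6.75, 1), (169.01, 15.38, 1),
          (83.29, 7.45, 1), (157.57, 13.06, 0), (117.72, 6.61, 0),
          (128.63, 7.19, 0)]

rotten = [f for f in fruits if f[2] == 1]      # 6 fruits
fresh = [f for f in fruits if f[2] == 0]       # 7 fruits

# Step 1: all 6-tuples with exactly 4 rotten -> C(6,4) * C(7,2) = 315
tuples = [r + f for r in combinations(rotten, 4)
                for f in combinations(fresh, 2)]

# Step 2: keep only tuples matching the observed diameter total
target, tol = 64.2, 0.1
matches = [t for t in tuples
           if abs(sum(fruit[1] for fruit in t) - target) <= tol]

# Step 3: average the total mass of the surviving tuples
estimate = sum(sum(fruit[0] for fruit in t) for t in matches) / len(matches)
print(len(tuples), len(matches), round(estimate, 1))  # 315 2 751.5
```

The tolerance matters: an exact equality test would select nothing, since no 6-tuple of the sampled fruits sums to exactly 64.2 cm, while a wider tolerance admits more tuples and pulls the average around.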
33,230
Estimate mass of fruit in a bag from only related totals?
Multiple approaches include, from simplest to complex:

1. 6(mean mass)
2. 6(mean volume)(mean density)
3. 4(mean rotten mass) + 2(mean non-rotten mass)
4. (4(mean rotten volume) + 2(mean non-rotten volume))(mean density)
5. 4(mean rotten volume)(mean rotten density) + 2(mean non-rotten volume)(mean non-rotten density)
. . .
combinatoric methods

The approaches are arranged in order of simplicity of calculating, not in order of any approach being better, or any good at all. Selection of which approach to use depends upon what characteristics of the population are known or assumed. For example, if the masses of fruits in the store population are normally distributed and independent of diameters and rot status, one could use the first, simplest approach without any advantages (or even disadvantages of sampling error of multiple variables) of using more complex approaches. If not independent identically distributed random variables, then a more complex choice depending upon known or assumed information about the population may be better.
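A toy illustration of how approaches 1 and 3 can diverge when mass depends on rot status (all the sample masses below are invented):

```python
from statistics import mean

# Hypothetical sampled fruit masses (grams), split by rot status.
rotten_mass = [55, 60, 65, 70, 75, 85]            # rotten fruits
ok_mass     = [70, 80, 140, 145, 150, 155, 160]   # non-rotten fruits

est1 = 6 * mean(rotten_mass + ok_mass)            # approach 1: 6(mean mass)
est3 = 4 * mean(rotten_mass) + 2 * mean(ok_mass)  # approach 3: condition on rot
```

Here approach 1 ignores the known 4-rotten/2-fresh composition of the bag and lands noticeably higher than approach 3, which uses it.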
33,231
Measure of explained variance for Poisson GLM (log-link function)
McCullagh and Nelder (1989, page 34) give the deviance function $D$ for the Poisson distribution: $$ D = 2 \sum\left(y \log\left(\frac{y}{\mu} \right) - (y-\mu)\right) $$ (sign error in formula now corrected) where $y$ represents your data and $\mu$ your modelled output. I use this function to estimate the explained deviance $ED$ of a GLM with Poisson distribution like this: $$ ED = 1 - \frac{D}{\text{total deviance}} $$ where the total deviance is given by the same equation for $D$ but using the mean of $y$ (a single number, i.e., $\mathrm{mean}(y)$) instead of the array of modelled estimates $\mu$. I do not know if this is 100% correct, but it sounds logical to me and seems to work as you would expect an estimate of the explained deviance to work (it gives you 1 if you use $\mu = y$, etc.).
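The two formulas are easy to sketch in a few lines of Python; the convention $0 \log 0 = 0$ for zero counts is an assumption of the sketch, and the fitted values `mu` are invented:

```python
import math

def poisson_deviance(y, mu):
    # D = 2 * sum(y*log(y/mu) - (y - mu)), with the convention 0*log(0) = 0.
    return 2 * sum((yi * math.log(yi / mi) if yi > 0 else 0.0) - (yi - mi)
                   for yi, mi in zip(y, mu))

def explained_deviance(y, mu):
    ybar = sum(y) / len(y)
    # Total deviance: same formula, with mean(y) in place of the fitted values.
    total = poisson_deviance(y, [ybar] * len(y))
    return 1 - poisson_deviance(y, mu) / total

y  = [2, 4, 3, 7, 10]
mu = [2.5, 3.5, 3.0, 6.0, 11.0]   # some fitted values, invented for the demo
ed = explained_deviance(y, mu)
```

As the answer notes, a perfect fit (`mu = y`) gives $D = 0$ and hence $ED = 1$.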
33,232
Gamma distribution different derivations
Well, for one thing, one cannot derive a gamma distribution by assuming a discrete non-negative integer variable and then saying that now, all of a sudden, let us remove the concept of whole counting numbers of events and allow all reals. For example, there is no such thing as a transcendental event like rolling a six-sided die and getting $\pi$ rather than a whole number from 1 to 6. In specific, one cannot say without fear of contradiction that a gamma function is a generalized factorial, because that puts the "cart before the horse." However, one can say that a factorial is a special case of a gamma function, one that discards negative reals and considers only non-negative integers. So the thought-experiment error is like saying that a gamma distribution is a generalized exponential distribution, where it is more accurate to say that an exponential distribution is a degenerate case of a gamma distribution. In order to make this clear, consider that we would not call $y=b x+c$ a quadratic equation. It is a linear equation. Sure, it is a degenerate or trivialized case of a quadratic, i.e., $y=a x^2 +b x + c$ for $a=0$. What a linear equation lacks is a square term, i.e., a linear equation does not have the same form as a quadratic because it lacks a quadratic term. However, linear equations also lack a cubic term, an $x^\pi$ term, and any other nonlinear function one cares to consider, such that calling a linear equation a quadratic is only true in one out of a transfinite number of other possible imaginary circumstances. So too an exponential distribution can arise not only as a degenerate or trivialized gamma distribution (as $x^0=1$) but in myriad other cases as well. So one has to say that a gamma distribution is one of a very large number of generalizations of an exponential distribution, e.g., see the generalized exponential distribution. What is the danger in that, you ask?
Well, for one thing, sloppy thinking leads to mistakes, and there are many of them, including the "derivation" alluded to above, that a gamma distribution arises from a Poisson distribution. Let me be clear: that is not the case. Rather, the Poisson distribution is a special case of a gamma distribution. Another example arises when examining tail heaviness. The correct method for comparing tail heaviness is to contrast so-called "survival" functions (more accurately called complementary cumulative distribution functions, ccdf's, $1-F(x)$, when continuous), and the correct procedure for this is given as examples on this site at Which has the heavier tail, lognormal or gamma?. Now when our unchecked inductive urge is let run free, mistakes are made, for example, by stating that various tail heavinesses can be classified into groups, where one of the groups is functions of "exponential tail heaviness". The mistake (applying L'Hôpital's rule to non-indeterminate forms) is replicated throughout the literature, and is documented as a mistake in the Relative Tail Heaviness Appendix section of an article, which relates that "For continuous functions, pdf binary comparison through survival function ratios avoids false attribution, for example, classifying the GD (sic, gamma distribution) as having an ED (sic, exponential distribution) terminal tail, whereas in fact, the exponential has tail heaviness that is within the GD tail heaviness range...", and "...Even when two functions are in the same category of tail heaviness their range of heavinesses might not overlap, which may seem counterintuitive, but implies once again that only binary tail heaviness comparisons make sense." Now finally, we have said that a gamma distribution does not arise from a Poisson distribution, so then, from what does a gamma distribution arise?
One thing should be clear: it does not arise from an Erlang distribution despite incorrect claims to that effect, e.g., see this mistake, which is yet another attempt to use uncritical inductive thinking. Rather, it may arise from a gamma Lévy process, where the difference in approach is that a Lévy process begins with a real-number variable treatment (i.e., not integer), and where no claim is made that it has to arise only in that context. The next question relates to the Chi-squared distribution, and again the treatment in the OP's link is deficient. Chi-squared implementations, unlike the text of the OP's link, allow $ν$, the degrees of freedom, to be any positive real number. Thus, there is an actual relationship between the gamma distribution and the Chi-squared distribution for continuous variables. Thus, the understanding that $ν$ corresponds to a whole number of normal distributions is not actually a derivation of Chi-squared, just an example special-case application of it. That is, in the OP's link: "Let now consider the special case of the gamma distribution that plays an important role in statistics. Let X have a gamma distribution with $θ=2$ and $α=r/2$, where $r$ is a positive integer." If we drop the unnecessary $r\in\mathbb{Z+}$, and require instead that $r\in\mathbb{R+}$, then the substitutions above yield a derivation of Chi-squared that is more general than the integer-df one. However, the obverse is not true; that is, Chi-squared does not provide a derivation of a gamma distribution, because the number of parameters has decreased by 1 from setting $θ=2$, and generalizing Chi-squared does not imply only a gamma distribution but other things as well; for example, see the Generalized Chi-squared distribution. The final question is: should we expect distributions to be related through alternative paths? The answer is yes, we should.
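That gamma–Chi-squared relationship with non-integer degrees of freedom is easy to check numerically; a quick Python sketch of the density identity, with $\theta$ playing the role of the scale parameter (the value of $r$ is arbitrary):

```python
import math

def chi2_pdf(x, df):
    # Chi-squared density with (possibly non-integer) degrees of freedom df.
    return x**(df/2 - 1) * math.exp(-x/2) / (2**(df/2) * math.gamma(df/2))

def gamma_pdf(x, alpha, theta):
    # Gamma density with shape alpha and scale theta.
    return x**(alpha - 1) * math.exp(-x/theta) / (theta**alpha * math.gamma(alpha))

r = 3.7  # a non-integer, real-valued df
for x in (0.5, 1.0, 2.5, 7.0):
    # Chi-squared(df = r) coincides with Gamma(alpha = r/2, theta = 2).
    assert math.isclose(chi2_pdf(x, r), gamma_pdf(x, alpha=r/2, theta=2))
```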
For example, see this answer which shows how general interrelationships are between distributions and how, in specific, gamma distributions relate to other distributions.
33,233
Tail probabilities of multivariate normal distribution
Answered in comments, copied here: The mvtnorm package in R can do this. Check https://cran.r-project.org/web/packages/mvtnorm/vignettes/MVT_Rnews.pdf for many examples.
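For intuition (or outside R), the kind of quantity mvtnorm's pmvnorm computes exactly can be approximated by plain Monte Carlo; a stdlib-Python sketch for a bivariate upper-tail probability, with all numbers illustrative:

```python
import random

def mvn_upper_tail(a, b, rho, n=200_000, seed=1):
    # Monte Carlo estimate of P(X > a, Y > b) for a standard bivariate
    # normal with correlation rho, built via a Cholesky construction.
    rng = random.Random(seed)
    s = (1 - rho**2) ** 0.5
    hits = 0
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        if z1 > a and (rho * z1 + s * z2) > b:
            hits += 1
    return hits / n

p = mvn_upper_tail(0.0, 0.0, rho=0.5)
# For a = b = 0, theory gives 1/4 + asin(rho)/(2*pi), i.e. about 1/3 at rho = 0.5.
```

For production use you would still want a deterministic quadrature routine like pmvnorm (or an equivalent library in your language), since Monte Carlo error shrinks only as $1/\sqrt{n}$.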
33,234
Power in proteomics?
In applications (especially ethical applications, where you have to do a power study) I like using this reference [Wang and Chen 2004], because it nicely explains the concept behind a power calculation for high-throughput data (whatever the data actually is). In essence, in addition to the usual parameters (α, β, N, effect size) you use two further parameters, λ and η. The latter, η, is the assumed number of truly altered genes, and λ is the fraction of the truly altered genes that you want to be able to detect. It is quite straightforward to expand any known power calculation to high-throughput data using this approach. Wang, Sue-Jane, and James J. Chen. "Sample size for identifying differentially expressed genes in microarray experiments." Journal of Computational Biology 11.4 (2004): 714-726.
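A rough sketch of how λ and η can plug into an otherwise standard two-sample power calculation (normal approximation). The per-test α adjustment below is one Wang–Chen-style choice rather than the paper's exact formula, and all the numbers are illustrative:

```python
import math
from statistics import NormalDist

def per_group_n(m, eta, lam, fdr, effect):
    # m: total genes tested; eta: assumed number truly altered;
    # lam: fraction of truly altered genes we want to detect;
    # fdr: target false discovery rate; effect: standardized effect size.
    z = NormalDist().inv_cdf
    r1 = lam * eta                              # expected true positives
    alpha = r1 * fdr / ((m - eta) * (1 - fdr))  # per-test two-sided alpha
    # Standard two-sample z-approximation with power = lam at this alpha.
    n = 2 * ((z(1 - alpha / 2) + z(lam)) / effect) ** 2
    return alpha, math.ceil(n)

alpha, n = per_group_n(m=10_000, eta=100, lam=0.8, fdr=0.05, effect=1.0)
```

The point of the sketch is the structure: η and λ set the expected number of discoveries, which fixes the per-test α, after which any familiar sample-size formula applies.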
33,235
lmer() parametric bootstrap testing for fixed effects
This all looks fine. The histogram is the null distribution of differences in deviance between the full and reduced model. Because you have a large (40) number of levels in your smallest random effect, the likelihood ratio test is accurate -- the p-values based on parametric bootstrapping and on the LRT match almost exactly. You can also use PBmodcomp from the pbkrtest package to run these sorts of comparisons, or KRmodcomp (same package) to get a better (than the LRT) approximation of the p-value.
33,236
lmer() parametric bootstrap testing for fixed effects
Assuming the model is correct with respect to all of its assumptions, the anova function in the lmerTest package will give us exact p-values for testing fixed effects. The parametric bootstrap method is mainly used for testing the random effects, if you want to obtain less conservative p-values from exact likelihood ratio tests (LRTs) than those of the asymptotic LRTs given by the ranova function in lmerTest.
33,237
Persistent Contrastive Divergence for RBMs
The original paper describing this can be found here. In section 4.4, they discuss the ways in which the algorithm can be implemented. The best implementation that they discovered initially was to not reset any Markov chains, to do one full Gibbs update on each Markov chain for each gradient estimate, and to use a number of Markov chains equal to the number of training data points in a mini-batch. Section 3 might give you some intuition about the key idea behind PCD.
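A minimal NumPy sketch of that recipe — persistent chains that are never reset to the data, one full Gibbs update per gradient step, and one chain per training case in the mini-batch. The tiny RBM dimensions and the binary "data" are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

n_v, n_h, batch = 6, 4, 10               # tiny RBM, invented sizes
W = 0.01 * rng.standard_normal((n_v, n_h))
b_v, b_h = np.zeros(n_v), np.zeros(n_h)
data = (rng.random((batch, n_v)) < 0.5).astype(float)  # stand-in mini-batch

# PCD: one persistent chain per training case; the chains are never
# reset to the data between parameter updates.
chains = data.copy()

def gibbs_step(v):
    # One full Gibbs update: sample hidden given visible, then visible given hidden.
    h = (rng.random((len(v), n_h)) < sigmoid(v @ W + b_h)).astype(float)
    return (rng.random((len(v), n_v)) < sigmoid(h @ W.T + b_v)).astype(float)

lr = 0.05
for _ in range(100):
    ph_data = sigmoid(data @ W + b_h)      # positive phase (data-driven)
    chains = gibbs_step(chains)            # advance the persistent chains
    ph_chain = sigmoid(chains @ W + b_h)   # negative phase (chain-driven)
    # Stochastic gradient of the log-likelihood.
    W   += lr * (data.T @ ph_data - chains.T @ ph_chain) / batch
    b_v += lr * (data - chains).mean(axis=0)
    b_h += lr * (ph_data - ph_chain).mean(axis=0)
```

The only difference from ordinary CD-1 is the `chains = data.copy()` line happening once, outside the loop, instead of at every step.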
33,238
Why is the functional form of the 1st stage in 2SLS not important?
Because OLS is unbiased at the mean. Unless it is dramatically incorrect (biased), it really shouldn't matter much what the functional form is. However, a poor functional form might cause inaccuracies (slower convergence). Poor choice of functional form cannot lead to omitted variable bias; only the omission of a variable can. Using g(x) instead of f(x) is poor functional form. Using g(x) instead of g(x,y) is an omitted variable.
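A small simulation sketching the point (all numbers invented): the true first stage below is genuinely nonlinear in the instrument, yet the simple IV estimator — which amounts to 2SLS with a linear first stage — still recovers the true coefficient, while OLS is biased by the endogeneity.

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 200_000, 2.0

z = rng.standard_normal(n)              # instrument
u = rng.standard_normal(n)              # structural error
v = 0.8 * u + rng.standard_normal(n)    # first-stage error, correlated with u
x = 1.0 + z + 0.5 * z**2 + v            # true first stage is NONLINEAR in z
y = beta * x + u

# OLS is biased because x is endogenous (cov(x, u) != 0) ...
beta_ols = np.cov(x, y)[0, 1] / np.var(x)
# ... but the simple IV/2SLS estimator, which implicitly fits a *linear*
# first stage, remains consistent despite the misspecified functional form.
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
```

Consistency only needs cov(z, x) ≠ 0 and cov(z, u) = 0; the nonlinearity of the true first stage just costs some precision, echoing the "slower convergence" remark above.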
33,239
Kolmogorov-Smirnov test?
Interesting problem. I have two thoughts, one general and one about how to characterize your data... First, with respect to comparing distributions, I agree with @Glen_b and @Scortchi that you do not want to compare Fly vs All as shown in your chart (but nice idea to overlay the plot of the D statistic). Because you have a strong belief about where the distributions are likely to be different, and not just that they are different, you might want to consider comparing quantiles of the two distributions. There is a nice blog post on the subject which works through R code to develop the testing method. And there is an R package, WRS, which implements quantile-based testing methods. Second, I'd consider dropping the use of a formal comparison test altogether and instead using Weight of Evidence (WOE). This approach is commonly used in industries that need decision frameworks dealing with different levels of risk across various predictors. Examples include insurance underwriting, credit evaluation, and clinical trials. In your setting there is a baseline "risk" of flight (you said 10%), but the odds of flight seem to increase greatly in the presence of ships at certain distances. Using the WOE approach you can convey the change in odds of flight as a function of a ship's distance, which is easy to understand for lay audiences (well, at least easier than understanding p-values associated with test statistics). Note that this is closely related to @Scortchi's suggestion to use logistic regression, but with WOE you are not trying to fit a regression model. There is nice documentation on Statistica's website for applying the method, but the best introduction I have found is in the book Credit Scoring, Response Modeling, and Insurance Rating: A Practical Guide to Forecasting Consumer Behavior.
If you search on the term "WOE" you'll find multiple sections discussing the idea, and section 5.1 walks through a complete example of calculating WOE (it's pretty easy) and evaluating the results for decision-making. Finally, note that there is a stackoverflow post on this topic, which is not very developed, but there is a link to a PDF walking through another example in the context of SAS coding.
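The core WOE calculation really is easy; a few lines of Python sketch it, with the distance bins and flight counts invented for illustration:

```python
import math

# Per-bin counts: distance bin -> (flights, non-flights). Invented numbers.
bins = {
    "0-1 km": (40, 60),
    "1-5 km": (25, 175),
    "5+ km":  (10, 290),
}
tot_f  = sum(f for f, nf in bins.values())
tot_nf = sum(nf for f, nf in bins.values())

# WOE per bin: log of (share of all flights) over (share of all non-flights).
woe = {b: math.log((f / tot_f) / (nf / tot_nf)) for b, (f, nf) in bins.items()}
```

A positive WOE flags bins where flight is over-represented relative to baseline (close ships, here), a negative one the reverse — exactly the kind of odds statement that is easy to present to a lay audience.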
33,240
Evaluating Time Series Prediction Performance
You can create a ROC curve. For a given value of p between 0 and 1, you predict that the event is going to happen if the predicted probability is greater than p. Then you calculate the TPR and FPR, which gives you a single point on the ROC curve. By varying p between zero and one you obtain the entire curve. E.g., for p < 0.005 the prior-based predictor will say that the event will happen at all times. For more, see: http://en.wikipedia.org/wiki/Receiver_operating_characteristic
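The threshold sweep can be sketched in a few lines of Python (labels and scores below are made up for illustration):

```python
def roc_points(y_true, y_score, thresholds):
    # For each threshold p, predict "event" when score > p and record (FPR, TPR).
    P = sum(y_true)
    N = len(y_true) - P
    pts = []
    for p in thresholds:
        pred = [s > p for s in y_score]
        tp = sum(1 for yt, yp in zip(y_true, pred) if yt and yp)
        fp = sum(1 for yt, yp in zip(y_true, pred) if not yt and yp)
        pts.append((fp / N, tp / P))
    return pts

y_true  = [0, 0, 1, 0, 1, 1, 0, 1]                  # invented outcomes
y_score = [0.1, 0.3, 0.35, 0.4, 0.6, 0.7, 0.2, 0.9]  # invented probabilities
curve = roc_points(y_true, y_score, [i / 10 for i in range(11)])
```

At threshold 0 everything is predicted positive, giving the (1, 1) corner; at threshold 1 nothing is, giving (0, 0); the points in between trace the curve.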
33,241
Why does Bayes' Theorem work graphically?
Basically just draw a Venn diagram of two overlapping circles that are supposed to represent sets of events. Call them A and B. Now the intersection of the two is P(A, B), which can be read "probability of A AND B". By the basic rules of probability, P(A, B) = P(A | B) P(B). And since there is nothing special about A versus B, it must also be P(B | A) P(A). Equating these two gives you Bayes' theorem. Bayes' theorem is really quite simple. Bayesian statistics is harder for two reasons. One is that it takes a bit of abstraction to go from talking about random rolls of dice to the probability that some fact is true. It requires you to have a prior, and this prior affects the posterior probability that you get in the end. And when you have to marginalize out a lot of parameters along the way, it is harder to see exactly how the posterior is affected. Some find that this seems kind of circular. But really, there is no way of getting around it. Data analyzed with a model doesn't lead you directly to Truth. Nothing does. It simply allows you to update your beliefs in a consistent way. The other hard thing about Bayesian statistics is that the calculations become quite difficult except for simple problems, and this is why all the mathematics is brought in to deal with it. We need to take advantage of every symmetry that we can to make the calculations easier, or else resort to Monte Carlo simulations. So Bayesian statistics is hard, but Bayes' theorem is really not hard at all. Don't overthink it! It follows directly from the fact that the "AND" operator, in a probabilistic context, is symmetric. A AND B is the same as B AND A, and everyone seems to understand that intuitively.
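The whole symmetry argument fits in one line of symbols:

$$P(A \mid B)\,P(B) \;=\; P(A, B) \;=\; P(B \mid A)\,P(A) \quad\Longrightarrow\quad P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.$$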
Why does Bayes' Theorem work graphically?
A physical argument to explain it was very clearly depicted by Galton in a two-stage quincunx in the late 1800s. See figure 5 in Stigler, Stephen M. 2010. Darwin, Galton and the statistical enlightenment. Journal of the Royal Statistical Society: Series A 173(3):469-482. I have a rudimentary animation of it here (requires adequate PDF support to run). I have also turned it into an allegory about an orange falling on Galton's head, which I will try to upload in the future. Or perhaps you might prefer the ABC rejection picture here. An exercise based on it is here.
Why does Bayes' Theorem work graphically?
This Jan 10 2020 article on Medium explains with just one picture! Presume that a rare disease infects only $1/1000$ people, and that tests identify the disease with 99% accuracy. If there are 100,000 people, 100 have the rare disease and the remaining 99,900 don't. If these 100 diseased people get tested, $\color{green}{99}$ would test positive and $\color{red}{1}$ would test negative. But what we generally overlook is that if the 99,900 healthy people get tested, 1% of those (that is, $\color{#e68a00}{999}$) will test false positive. Now, if you test positive, for you to have the disease you must be $1$ of the $\color{green}{99}$ diseased people who tested positive. The total number of people who tested positive is $\color{green}{99}+\color{#e68a00}{999}$. So the probability that you have the disease when you tested positive is $\dfrac{\color{green}{99}}{\color{green}{99}+\color{#e68a00}{999}} = 0.0901$.
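The arithmetic can be checked in a few lines, using exactly the numbers quoted above:

```python
population = 100_000
sick = population // 1_000            # 1 in 1000 -> 100 people
healthy = population - sick           # 99,900 people

true_positives = round(0.99 * sick)       # 99% of the sick test positive
false_positives = round(0.01 * healthy)   # 1% of the healthy test positive

p_sick_given_positive = true_positives / (true_positives + false_positives)
print(p_sick_given_positive)  # about 0.09, despite the "99% accurate" test
```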
Why does Bayes' Theorem work graphically?
See "Why testing positive for a disease may not mean you are sick. Visualization of the Bayes Theorem and Conditional Probability." by Javier GB on Medium. Let's use the following example, where 1 in 10 people are sick. We could write p(Sick | Total population) = the probability of being sick given that you study the whole population = 0.1, but when the condition is the whole population you just write p(Sick). To simplify the example we assume that we know which ones are sick and which ones are healthy, but in a real test you don't know that information. Now we test everybody for the disease. The number of positive results among the sick population, #(Positive | Sick), is 9. These people are the true positives, a quantity that is known for tests: p(Positive | Sick) = #(Positive | Sick)/#(Sick) = 9/10 = the true positive rate. Now the interesting question: what is the probability of being sick if you test positive (in math, p(Sick | Positive))? In the article's figure we have all the information, so we can count the sick people among the positive results and say that the probability of being sick if you tested positive is 9/18 = 50%. However, in real life you only know that 18/100 have tested positive; to learn that half of those are false positives you can use Bayes' theorem, which we derive here. We want the count of sick people among the positives, #(Sick | Positive). The intuition is that if we know there are 10 sick people (#(Sick)) and the true positive rate is 0.9, then #(Sick | Positive) = 0.9 × 10 = 9. For the probability, we divide by the studied population (the positive results): p(Sick | Positive) = p(Positive | Sick) × #(Sick) / #(Positive). But we don't know exactly how many people are sick, only the probability, so we divide both parts of the fraction by #(Total) and get probabilities of being sick and of testing positive: p(Sick | Positive) = p(Positive | Sick) p(Sick) / p(Positive). And you have successfully derived Bayes' theorem!
Fully Bayesian hyper-parameter selection in GPML
There is another package for machine learning using Gaussian processes called GPstuff, which has it all in my opinion. You can use MCMC, integration on a grid, etc. to marginalise out your hyperparameters. NB: in the documentation they call hyperparameters simply parameters.
Using percentiles as predictors - good idea?
If your model entails some sort of contest in firm revenues, you can use the percentile. Log percentile seems more meaningful; quantiles are not going to be linear in value, or so I imagine. In this story, you include ln(%) of firms with revenues under the observation firm. The story is that firms with high revenues have reputations that are better than firms with low revenues, and this relation of "having more than the competition" is relevant, not the level of revenue itself. I could see this as an important part of firm recognition and branding.
Intraclass correlation in the context of linear mixed-effects model
While still waiting for any suggestions for the first question, I will try to answer the second question myself. The correlation between two responses from two subjects $j_1$ and $j_2$ during the $i$th session is $\frac{\operatorname{cov}(y_{ij_1}, y_{ij_2})}{\sqrt{\operatorname{var}(y_{ij_1})\operatorname{var}(y_{ij_2})}}=\frac{E\left[(b_i+c_{j_1}+\epsilon_{ij_1})(b_i+c_{j_2}+\epsilon_{ij_2})\right]}{\sqrt{\operatorname{var}(y_{ij_1})\operatorname{var}(y_{ij_2})}}=\frac{\tau_1^2}{\tau_1^2+\tau_2^2+\sigma^2}$, which is exactly $ICC_{session}$. That is, $ICC_{session}$ is the correlation between any two subjects' responses coming from the same session, and it's not about the difference between the two sessions. Am I interpreting it correctly?
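A quick simulation supports this reading (the variance components below are made up): the empirical correlation between two subjects' responses from the same session matches $\tau_1^2/(\tau_1^2+\tau_2^2+\sigma^2)$.

```python
import random

random.seed(1)
tau1, tau2, sigma = 1.0, 0.5, 0.8   # session, subject, residual SDs (assumed)
n = 200_000                          # number of simulated sessions

# Two different subjects j1, j2 observed in the same session i:
# y = b_i + c_j + eps, where the session effect b_i is shared.
y1, y2 = [], []
for _ in range(n):
    b_i = random.gauss(0, tau1)
    y1.append(b_i + random.gauss(0, tau2) + random.gauss(0, sigma))
    y2.append(b_i + random.gauss(0, tau2) + random.gauss(0, sigma))

m1, m2 = sum(y1) / n, sum(y2) / n
cov = sum((u - m1) * (v - m2) for u, v in zip(y1, y2)) / n
v1 = sum((u - m1) ** 2 for u in y1) / n
v2 = sum((v - m2) ** 2 for v in y2) / n

empirical = cov / (v1 * v2) ** 0.5
theoretical = tau1**2 / (tau1**2 + tau2**2 + sigma**2)
print(round(empirical, 3), round(theoretical, 3))
```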
How to map a trajectory to a vector?
I would start with dynamic time warping. As long as you have the distance between any two points (lat, long), this approach should work. It adjusts for different speeds of motion. For instance, you and I live in the same village and go to work at the same factory, but I stop by a coffee shop on the way. It takes longer for me to arrive, but we're more or less on the same path, so the similarity measure adjusts for different time scales. This is different from what you have in mind. It seems that you want to come up with one value (vector) to represent the trajectory, then calculate the distance between the vectors. I'm suggesting you use the distance measure between the trajectories directly, without the intermediate step.
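A minimal sketch of the standard DTW recurrence, under simplifying assumptions: plain Euclidean distance between points stands in for a great-circle distance, and the example paths are invented.

```python
from math import hypot

def dtw_distance(path_a, path_b):
    """Dynamic time warping distance between two trajectories of (x, y) points."""
    n, m = len(path_a), len(path_b)
    INF = float("inf")
    # dp[i][j] = cost of aligning the first i points of A with the first j of B
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            ax, ay = path_a[i - 1]
            bx, by = path_b[j - 1]
            cost = hypot(ax - bx, ay - by)
            dp[i][j] = cost + min(dp[i - 1][j],      # A pauses
                                  dp[i][j - 1],      # B pauses
                                  dp[i - 1][j - 1])  # both advance
    return dp[n][m]

# Two walks along the same street, one lingering at an intermediate point:
direct = [(0, 0), (1, 0), (2, 0), (3, 0)]
detour = [(0, 0), (1, 0), (1, 0), (2, 0), (3, 0)]
print(dtw_distance(direct, detour))  # 0.0 — same path, different pacing
```

The zero distance in the example is the point of the coffee-shop story: the two users traverse the same route at different speeds, and the warping absorbs the timing difference.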
How to map a trajectory to a vector?
If you only consider instantaneous turns, i.e., changes in direction, I don't think this will uniquely define the position at the next time instance -- unless each user is travelling at a constant known speed (there is no indication of this in your question). Since you are moving across a (spherical, I infer?) surface, you will probably need at least a second coordinate to determine your positions uniquely. Why not simply build the $2 \times N$ array $[\mathbf{x}(t); \mathbf{y}(t)]$ per user with time stamp as a parameter, then concatenate this to a $1 \times 2N$ vector $[\mathbf{x}(t)\ \mathbf{y}(t)]$ if you must have a vector (or a $1 \times (2N \times M)$ vector for $M$ tagged users)? You could also take the arc length $s(t)$ of the travelled path as a parameter instead. Are the time stamps at regular intervals? Otherwise you will need a separate vector of them for look-up. PS: I cannot see a link with stats; is this relevant to Cross Validated?
How to map a trajectory to a vector?
For each user, you have two time series, lat(t) and long(t). I think that's the simplest representation -- I wouldn't try to complicate things by converting to some definition of turns, which would not only be more difficult, but would also require being very careful about the initial starting point and treating it differently in any analysis. (It's probably noisier as well.) Keeping the data as lat & long time series also keeps it simple for the most likely use -- where you will look at various time windows at different times -- since there's no need to constantly recalculate a starting point at the beginning of each new time window being analyzed. If every user's lat & long time series were all sampled at the exact same times, as noted in another reply, you can just concatenate the two time series vectors into one long vector. (A similar example with 5 concatenated time series appeared in a figure, not reproduced here.) Then you have one long vector for each user that you can analyze just like any other vector for pattern recognition, distance measures, clustering, etc. For distance measures between users, you're typically going to use a weighted form depending on the application. For instance, when focusing on convergence towards a common destination, you'd increase the weights the most towards the end of the time window (whether looking at Euclidean calculations, max distance, etc.). But the original question seems to say that there may be differing numbers of points between A and B for different users. And in any case, even for the same sampling interval, it's likely that the times aren't exactly the same (maybe differing by some constant because sampling started at different times). Furthermore, it's quite possible that there will be some missing data. In any of these cases, conceptually, you'd need to think of each time series in continuous form, perhaps fitting a curve to it, and resampling every user at the exact same times.
(That's analogous to the resampling that occurs in photo analysis when you shrink a picture). Then your time series vectors for lat & long are the same length and correspond exactly to the same times, so that the concatenated vectors for each user over some time period can be compared to each other correctly.
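The resampling step described above can be sketched with linear interpolation; the observation times and coordinates below are hypothetical.

```python
import numpy as np

# Hypothetical samples for one user: lat/long observed at irregular times.
t_obs = np.array([0.0, 1.2, 2.7, 4.0, 6.5])
lat   = np.array([40.0, 40.1, 40.3, 40.4, 40.8])
lon   = np.array([-74.0, -74.0, -73.9, -73.8, -73.6])

# Resample onto a shared, regular grid so every user's vector lines up.
t_grid = np.linspace(0.0, 6.5, 14)
lat_r = np.interp(t_grid, t_obs, lat)
lon_r = np.interp(t_grid, t_obs, lon)

# One long feature vector per user: [lat(t_grid), lon(t_grid)].
user_vec = np.concatenate([lat_r, lon_r])
print(user_vec.shape)  # (28,)
```

Once every user is resampled onto the same grid, the concatenated vectors are directly comparable for the weighted distance calculations mentioned above.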
Unique (?) idea for forecasting sales
You may end up with a model which seems to fit your current data OK, but it will come unstuck as soon as you try to produce an out-of-sample forecast. Consider producing your forecast for 6 months' time. You have no way of knowing what the opportunities will be in six months, so you are going to have to create another set of models predicting each of the inputs to your opportunity model. And once you do this, you are going to have lots of models feeding into your main model, but each of the little models is going to have its own prediction error attached to it, and these errors will compound. Your main model will not know about them, and, as a result, all your prediction intervals will be grossly deflated.
Bayesian analysis with histogram prior. Why draw simulations from the posterior?
To answer your subquestion:

How to do the following more elegantly?

post.vector <- vector()
post.vector[1] <- sum(post[p < 0.1])
post.vector[2] <- sum(post[p > 0.1 & p <= 0.2])
post.vector[3] <- sum(post[p > 0.2 & p <= 0.3])
post.vector[4] <- sum(post[p > 0.3 & p <= 0.4])
post.vector[5] <- sum(post[p > 0.4 & p <= 0.5])
post.vector[6] <- sum(post[p > 0.5 & p <= 0.6])
post.vector[7] <- sum(post[p > 0.6 & p <= 0.7])
post.vector[8] <- sum(post[p > 0.7 & p <= 0.8])
post.vector[9] <- sum(post[p > 0.8 & p <= 0.9])
post.vector[10] <- sum(post[p > 0.9 & p <= 1])

The easiest way to do it using base R is:

group <- cut(p, breaks=seq(0, 1, 0.1), include.lowest = TRUE)
post.vector.alt <- aggregate(post, FUN=sum, by=list(group))

Note that the breaks go from 0 to 1. This yields:

     Group.1            x
1    [0,0.1] 3.030528e-13
2  (0.1,0.2] 1.251849e-08
3  (0.2,0.3] 6.385088e-06
4  (0.3,0.4] 6.732672e-04
5  (0.4,0.5] 2.376448e-01
6  (0.5,0.6] 7.372805e-01
7  (0.6,0.7] 2.158296e-02
8  (0.7,0.8] 2.691182e-03
9  (0.8,0.9] 1.205200e-04
10   (0.9,1] 3.345072e-07

And we have:

> all.equal(post.vector.alt$x, post.vector)
[1] TRUE
Bayesian analysis with histogram prior. Why draw simulations from the posterior?
My understanding is that since the posterior density obtained from the product of the prior density and the likelihood is only an approximation of the true posterior density, we cannot make exact inferences from it directly. Consequently, we take a random sample from it and conduct inference from that sample, just like the simulation method for a posterior from the beta family.
33,254
Determine an unknown number of real world locations from GPS-based reports
I have found software that may help you. It looks like somebody had the same problem as you, and they were given a solution in this forum, so you will need to use ArcGIS. But if you are looking for an algorithm, they suggest this paper, which I think is detailed enough to be a good starting point for your algorithm.
33,255
Estimating comparative success of different brochures
There are empirical formulas for determining the sample size. The underlying test is a two-sample t test for equality of the metric (response rate in your case). Assuming that you want the power of the test to be 80% at the usual 5% significance level, one such formula is $n = 16\sigma^2/\Delta^2$, where $\sigma$ is the standard deviation of the metric (response rate) and $\Delta$ is the amount of change in the response rate that you want to resolve reliably (with statistical significance). Also, there are fractional factorial designs available which let you reduce the number of trials (assuming you don't want to measure interactions of each factor with every other factor). This is a survey paper on experimental design that describes the details.
33,256
Estimating comparative success of different brochures
Suppose that you sent brochures $A$ and $B$ to an equal number of customers, $a$ users respond to brochure $A$, $b$ users respond to brochure $B$, and $b>a$. Then the significance is $P = \frac{1}{2^{a+b}}\sum_{n=b}^{a+b} \binom{a+b}{n}$. It doesn't matter how many users received your brochures, just how many responded.
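A hedged sketch of this calculation (in Python for illustration; the function name is my own) — it is just the one-sided binomial sign test with success probability 1/2 under the null:

```python
from math import comb

def sign_test_p(a, b):
    """One-sided significance: probability that brochure B collects at least
    b of the a + b responses when both brochures are equally effective."""
    total = a + b
    return sum(comb(total, n) for n in range(b, total + 1)) / 2 ** total

# 3 responses to A, 7 to B:
print(sign_test_p(3, 7))  # 0.171875 -- not significant with only 10 responses
```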
33,257
Confusion related to linear dynamic systems
There is a nice derivation, several actually, in the following: http://amzn.com/0470173661 This is a good book on the subject as well: http://amzn.com/0471708585 The complete derivation, and the simplifications that result in the shortened textbook form you present, is not short/clean, so it is often omitted or left as an exercise for the reader.

You can think of the Kalman gain as a mixture proportion that makes a weighted sum of an analytic/symbolic model and some noisy real-world measurement. If you have crappy measurements but a good model, then a properly set Kalman gain should favor the model. If you have a junk model but pretty good measurements, then your Kalman gain should favor the measurements.

If you don't have a good handle on what your uncertainties are, then it can be hard to properly set up your Kalman filter. If you set the inputs properly, then it is an optimal estimator. There are a number of assumptions that go into its derivation, and if any one of them isn't true then it becomes a pretty good suboptimal estimator. For example, a lag plot will demonstrate that the one-step Markov assumption implicit in the Kalman filter is not true for a cosine function. A Taylor series is an approximation, not exact, so an extended Kalman filter based on the Taylor series is likewise approximate. If you can take in information from two previous states instead of one, you can use a block Kalman filter and regain your optimality.

Bottom line, it is not a bad tool, but it is not "the silver bullet" and your mileage will vary. Make sure that you characterize it well before using it in the real world.
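To make the "mixture proportion" view of the gain concrete, here is a hedged scalar sketch (Python, with my own toy numbers — not taken from either book):

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman measurement update.
    x_pred, p_pred: predicted state and its variance (the model's belief).
    z, r: measurement and measurement-noise variance."""
    k = p_pred / (p_pred + r)      # gain: near 0 -> trust model, near 1 -> trust data
    x = x_pred + k * (z - x_pred)  # weighted blend of prediction and measurement
    p = (1 - k) * p_pred           # posterior uncertainty shrinks
    return x, p, k

# Good model (small p_pred), noisy sensor (large r): the gain stays small.
x, p, k = kalman_update(x_pred=10.0, p_pred=0.1, z=12.0, r=1.0)
print(round(k, 3), round(x, 2))  # 0.091 10.18 -- the update mostly keeps the model
```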
33,258
Discrete data & alternatives to PCA
It depends a little bit on your purpose, but if you're after a visualization tool, there's a trick of applying multidimensional scaling to the output of random forest proximities, which can produce pretty pictures and will work for a mixture of categorical and continuous data. Here you would classify the species according to your predictors. But - and it's a big caveat - I don't know if anyone really knows what the output of these visualizations means. Another alternative might be to apply multidimensional scaling to something like the Gower similarity. There's a hanging question - what's your ultimate purpose? What question do you want to answer? I like these techniques as exploratory tools that perhaps lead you to asking more and better questions, but I'm not sure what they explain or tell you by themselves. Maybe I'm reading too much into your question, but if you want to explore which predictor variables have values for the hybrids sitting between the two pure species, you might be better off building a model to estimate the values of the predictor variables which lead to the species and the hybrids directly. If you want to measure how the variables are related to each other, perhaps build a correlation matrix - there are many neat visualizations for this.
33,259
How do I get a group of people to collectively rank a set of objects?
You could use a Bradley-Terry-Luce type model based on pairwise comparisons. Randomly (or otherwise) generate a bunch of pairs of schools and have each staff member look at several pairs and tell you which in the pair is better (or an "I don't know" if they have no familiarity with one or both schools). Then plug this data into the model to get the ranking. There is a BradleyTerry2 package for R that fits these models.
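A hedged sketch of the fitting step (plain Python rather than the BradleyTerry2 package; the toy counts are invented), using the classic minorization-maximization updates for Bradley-Terry strengths:

```python
def bradley_terry(wins, n_items, iters=200):
    """Fit Bradley-Terry strengths w_i by the standard MM updates:
    w_i <- (wins of i) / sum over opponents j of n_ij / (w_i + w_j).
    wins: dict mapping (i, j) -> number of times i beat j."""
    w = [1.0] * n_items
    for _ in range(iters):
        new = []
        for i in range(n_items):
            total_wins = sum(c for (a, b), c in wins.items() if a == i)
            denom = 0.0
            for (a, b), c in wins.items():
                if a == i:
                    denom += c / (w[i] + w[b])
                elif b == i:
                    denom += c / (w[i] + w[a])
            new.append(total_wins / denom if denom > 0 else w[i])
        s = sum(new)
        w = [x / s for x in new]  # normalize away the arbitrary scale
    return w

# Toy data: school 0 beats school 1 in 8 of 10 comparisons, and so on.
wins = {(0, 1): 8, (1, 0): 2, (1, 2): 7, (2, 1): 3, (0, 2): 9, (2, 0): 1}
w = bradley_terry(wins, 3)
print([round(x, 2) for x in w])  # strengths order the schools 0 > 1 > 2
```

The fitted $w_i$ give the ranking directly, and $w_i/(w_i+w_j)$ estimates the probability that school $i$ beats school $j$ in a comparison.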
33,260
What to do with heterogeneity of variance when spread decreases with larger fitted values
Adding quadratic terms would help if the mean varied that way, but in your case the change is in the variance. Since it is the covariates that cause the change, a form of variance function estimation involving those covariates would be the approach I recommend.
33,261
Several questions about statistical financial timeseries models from "machine-learning person"
Regarding question 1, time series do not deal mainly with random walks. Stationary time series have correlation structure that is modelled in, for example, ARMA models. Time series analysis also looks at periodic effects and trend (we call those time series nonstationary). Looking for patterns in data is not incompatible with statistics as long as there is recognition that there is a pattern + a random component, and the random component must be considered in the analysis. Regarding question 2, I don't see why you call TAR a mix of machine learning and statistics. I see it as just a more complicated time series model that includes a threshold parameter and 2 AR models. I guess I also don't see a big distinction between machine learning and statistics. I view machine learning as part of statistical pattern recognition/classification, which falls under the realm of multivariate analysis. It seems to me that TAR could easily be extended to putting a threshold on an ARMA model. I don't know if it has been tried or why it might not have been developed. Perhaps someone who works with these types of time series models can answer that question.
33,262
Particle filter in R – trivial code example
EDIT: It seems that most particle filter packages are gone now. However, I have been playing with LaplacesDemon (a Bayesian MCMC package) and it does have the PMC (Population Monte Carlo) function, which implements a type of particle filter. Maybe too much machinery for a quick particle-filter kind of thing, but a package well worth learning. You can find the package and tutorials at CRAN.

ORIGINAL: To be honest, in the simplest case, pomp is hard to use. It's very flexible for anything you might want to do, but it's like using a space ship to go to the grocery store. Have you tried looking at Kalman filters (if your data might satisfy the assumptions of the Kalman filter), including the base functions tsSmooth and StructTS (univariate only), and the package dlm? I'd also take a look at loess and other smoothers. I hope I'm wrong and someone hops on here with a quick, "Here's how to do it for simple univariate data such as you have, with some modest assumptions." I'd love to be able to use the package myself.
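Since the question asks for a trivial code example and none made it into this answer, here is a hedged, minimal bootstrap particle filter sketch (written in Python for brevity; the model — a Gaussian random walk observed with Gaussian noise — is my own choice):

```python
import math
import random

def bootstrap_pf(obs, n_particles=500, q=1.0, r=1.0, seed=1):
    """Minimal bootstrap particle filter for the model
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
    Returns the filtered posterior means of x_t."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in obs:
        # 1. Propagate each particle through the state equation.
        particles = [p + rng.gauss(0.0, math.sqrt(q)) for p in particles]
        # 2. Weight by the Gaussian observation likelihood.
        w = [math.exp(-(y - p) ** 2 / (2 * r)) for p in particles]
        s = sum(w)
        w = [x / s for x in w]
        means.append(sum(p * wi for p, wi in zip(particles, w)))
        # 3. Multinomial resampling to fight weight degeneracy.
        particles = rng.choices(particles, weights=w, k=n_particles)
    return means

obs = [0.3, 0.5, 1.1, 1.4, 2.0, 2.2]
means = bootstrap_pf(obs)
print([round(m, 2) for m in means])  # tracks the rising observations
```

The same three steps (propagate, weight, resample) are what pomp and the other packages dress up with far more machinery.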
33,263
Particle filter in R – trivial code example
In 2017, Dahlin & Schön published a really good article with R scripts for both Kalman and particle filters, as well as the Metropolis-Hastings algorithm: https://arxiv.org/pdf/1511.01707.pdf It is not the most efficient code (it's in R only), but very well made.
33,264
How to compute an accuracy measure based on RMSE? Is my large dataset normally distributed?
Using RMSE computed from 2 datasets, how can I relate RMSE to some sort of accuracy (i.e. 95 percent of my data points are within +/- X cm)? Take a look at a near duplicate question: Confidence interval of RMSE?

Is my large dataset normally distributed? A good start would be to observe the empirical distribution of z values. Here is a reproducible example.

set.seed(1)
z <- rnorm(2000, 2, 3)
z.difference <- data.frame(z = z)

library(ggplot2)
ggplot(z.difference, aes(x = z)) +
  geom_histogram(binwidth = 1, aes(y = ..density..), fill = "white", color = "black") +
  ylab("Density") +
  xlab("Elevation differences (meters)") +
  theme_bw() +
  coord_flip()

At first glance, it looks normal, right? (Actually, we know it is normal because of the rnorm command we used.)

If one wants to analyse small samples over the dataset, there is the Shapiro-Wilk normality test.

z_sample <- sample(z.difference$z, 40, replace = TRUE)
shapiro.test(z_sample)  # a high p-value indicates the data is normal (null hypothesis)

        Shapiro-Wilk normality test

data:  z_sample
W = 0.98618, p-value = 0.8984  # normal

One can also repeat the SW test many times over different small samples, and then look at the distribution of p-values.

Be aware that normality tests on large datasets are not so useful, as explained in this answer provided by Greg Snow. On the other hand, with really large datasets the central limit theorem comes into play and for common analyses (regression, t-tests, ...) you really don't care if the population is normally distributed or not. The good rule of thumb is to do a qq-plot and ask, is this normal enough?

So, let's make a QQ-plot:

# qq-plot (quantiles from the empirical distribution vs. quantiles from the theoretical distribution)
mean_z <- mean(z.difference$z)
sd_z <- sd(z.difference$z)
set.seed(77)
normal <- rnorm(length(z.difference$z), mean = mean_z, sd = sd_z)
qqplot(normal, z.difference$z, xlab = "Theoretical", ylab = "Empirical")

If the dots are aligned on the y = x line, it means the empirical distribution matches the theoretical distribution, which in this case is the normal distribution.
33,265
How to combine the forecasts when the response variable in forecasting models was different?
I think one of the most reliable methods for comparing the models is to cross-validate the out-of-sample error (e.g. MAE). You will need to un-transform the forecasts of the response variable for each model so that you directly compare apples to apples.
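A hedged toy sketch of the comparison (Python; the log transform and the numbers are invented for illustration):

```python
import math

def mae(actual, predicted):
    """Mean absolute error on the original scale."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual = [100.0, 120.0, 90.0, 110.0]

# Model A forecast the raw series; model B forecast log(series).
pred_a = [105.0, 115.0, 95.0, 108.0]
pred_b_log = [4.62, 4.80, 4.49, 4.72]
pred_b = [math.exp(p) for p in pred_b_log]  # un-transform before scoring

print(round(mae(actual, pred_a), 2), round(mae(actual, pred_b), 2))
```

Only after both forecasts are back on the original scale are the two MAE values comparable.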
33,266
Consistency of the learning process
For your Question 1, I have an example, but it requires the loss function to take the value $\infty$. I am pretty sure we can give an example that only requires an unbounded loss function, but that would be a bit more work to construct. An open question is whether there's an example with a bounded loss function. Consider the classification setting, where the probability distribution $P$ is on a space $\mathcal{Z}=\mathcal{X}\times\{0,1\}$. We'll denote an example by $z=(x,y)$, with $x\in\mathcal{X}$ and $y\in\{0,1\}$. Let $\mathcal{F}=\{0,1\}^{\mathcal{X}}$ be the space of all classification functions on $\mathcal{X}$. Define the loss function $$ Q(z,f)=Q\left((x,y),f\right)=\begin{cases} 0 & \text{for }f(x)=y\\ \infty & \text{otherwise,} \end{cases} $$ for any $f\in\mathcal{F}$. In other words, whether you get one example wrong or all of them wrong, your risk is $\infty$. Now, suppose $\mathcal{X}=\left\{ x_{1},x_{2},\ldots\right\}$ is some countably infinite set, and let $P$ be any probability distribution for which $P(\{x_{i}\})>0$ for all $i=1,2,\ldots$. Also, let's assume that there is a deterministic classification function, i.e. there exists $c\in\mathcal{F}$ for which $y_{i}=c(x_{i})$ for $i=1,2,\ldots$. This implies that $\inf_{f\in\mathcal{F}}R(f)=0$. Then for each $l$, $R_{emp}(f_{l}^{*})=0$, but $R(f_{l}^{*})=\infty$ (unless there is an extremely lucky choice of $f_{l}^{*}$ among all those $f\in\mathcal{F}$ that have $0$ empirical error). Thus $R_{emp}(f_{l}^{*})\to\inf_{f\in\mathcal{F}}R(f)$, but $R(f_{l}^{*})$ does not converge to that value. For Question 2, I agree that his example does not seem to apply to the classification case, and I don't see an obvious way to make such an example.
33,267
Standard error of sample standard deviation of proportions
The exact distribution for each proportion is $p_i \sim \text{Bin}(n_i, \pi_i)/n_i$, and the proportions can take on the values $p_i = 0, \frac{1}{n_i}, \frac{2}{n_i}, \ldots, \frac{n_i-1}{n_i}, 1$. The resulting distribution of the sample standard deviation $T$ is a complicated discrete distribution. Letting $\boldsymbol{p} \equiv (p_1, p_2, \ldots, p_6)$, it can be written in its most trivial form as:
$$F_T(t) \equiv \mathbb{P}(T \leqslant t) = \sum_{\boldsymbol{p} \in \mathcal{P}(t)} \prod_{i=1}^6 \text{Bin}( n_i p_i \mid n_i, \pi_i),$$
where $\mathcal{P}(t) \equiv \{ \boldsymbol{p} \mid T(\boldsymbol{p}) \leqslant t \}$ is the set of all proportion vectors that lead to a sample standard deviation no greater than $t$. There is really no way to simplify this in the general case.

Getting an exact probability from this distribution would require you to enumerate the proportion vectors that yield a sample standard deviation in the range of interest, and then sum the binomial products over that enumerated range. It would be an onerous calculation exercise for even moderately large values of $n_1, \ldots, n_6$.

Now, obviously the above distribution is not a very helpful form. All it really tells you is that you need to enumerate the outcomes of interest and then sum their probabilities. That is why it would be unusual to calculate exact probabilities in this case, and it is much easier to appeal to an asymptotic form for the distribution of the sample standard deviation.
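Since exact enumeration is onerous, the distribution can instead be approximated by simulation. A minimal Python sketch (the sample sizes and success probabilities in the example call are hypothetical placeholders):

```python
import random
import statistics

def sample_sd(n_list, pi_list, rng):
    """Draw one vector of binomial proportions and return its sample SD (T)."""
    props = []
    for n, pi in zip(n_list, pi_list):
        successes = sum(rng.random() < pi for _ in range(n))
        props.append(successes / n)
    return statistics.stdev(props)

def estimate_cdf(t, n_list, pi_list, reps=10000, seed=1):
    """Monte Carlo estimate of F_T(t) = P(T <= t)."""
    rng = random.Random(seed)
    hits = sum(sample_sd(n_list, pi_list, rng) <= t for _ in range(reps))
    return hits / reps

# e.g. six groups of 10 trials each, all with true proportion 0.5:
# estimate_cdf(0.2, [10] * 6, [0.5] * 6)
```

With enough replications this converges to the exact sum above, without having to enumerate $\mathcal{P}(t)$.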
33,268
How can I generate predictions from the randomSurvivalForest package in R?
As you describe it ("next event in a series of events"), it seems like you will definitely want to look into time series analysis; perhaps an ARIMA-class model can provide you with some good results.
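As a hedged illustration of the time-series route (this is not the randomSurvivalForest API): the simplest member of the ARIMA family is an AR(1) model, which can be fit by least squares in a few lines. The series used here would be your own event values:

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1] + noise."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    c = my - phi * mx
    return c, phi

def forecast_next(series, c, phi):
    """One-step-ahead forecast of the next event's value."""
    return c + phi * series[-1]
```

For real work, a full ARIMA implementation (e.g. statsmodels in Python, or arima/forecast in R) also handles differencing and moving-average terms.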
33,269
How do I algorithmically determine values of T1 & T2 for canopy clustering?
As whuber notes, the authors of the canopy clustering algorithm suggest that T1 and T2 can be set with cross-validation. However, these parameters could be tuned in the same way as any other hyper-parameter. One of the most common techniques is grid search, where a range is specified for each parameter, as well as a step size for how parameters are changed at each iteration. For example, suppose we specified T1 to have a value range of 25 to 100 with a step size of 25. This would mean the possible values of T1 to try would be (25, 50, 75, 100). Likewise, we could set T2 to have possible values between 1 and 4, with a step size of 1, such that the possible values are (1, 2, 3, 4). This would mean there were 16 possible sets of parameters to try. As with any other classification or clustering algorithm, you would assess its efficacy by calculating its F1-score, accuracy/error, or another performance metric to determine the best of the 16 sets of parameters.

In addition to grid search, other hyper-parameter optimization algorithms include Nelder-Mead, genetic algorithms, simulated annealing, and particle swarm optimization, among many others. These algorithms will help you determine appropriate values for T1 and T2 in an automated fashion.

You noted above that you have a 100K-dimensional data set. Are you referring to the number of rows or the number of columns within your data? If you are referring to the number of columns, I would suggest performing some combination of feature selection based on the variance of individual features and feature extraction via principal component analysis (PCA) or Kernel-PCA. Even if many of your features are useful (i.e. provide an information gain towards discriminating between clusters/classes/output variable values), having too many features might mean your clustering algorithm is unable to determine appropriate distances between instances.
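The grid search above can be sketched in a few lines of Python; `score_fn` is a hypothetical placeholder standing in for whatever clustering metric (F1, accuracy, etc.) you choose:

```python
import itertools

def grid_search(t1_values, t2_values, score_fn):
    """Evaluate every (T1, T2) pair with T1 > T2 (canopy clustering
    requires the loose threshold to exceed the tight one) and return
    the best-scoring pair."""
    best_pair, best_score = None, float("-inf")
    for t1, t2 in itertools.product(t1_values, t2_values):
        if t1 <= t2:
            continue
        score = score_fn(t1, t2)
        if score > best_score:
            best_pair, best_score = (t1, t2), score
    return best_pair, best_score

# The 16 candidate pairs from the example above:
# grid_search([25, 50, 75, 100], [1, 2, 3, 4], my_metric)
```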
33,270
Estimating the variance of poker win rates
Here is an approach to calculate how many players have an actual win rate of over 0.3 dollars per hand.

0: Check whether individual player results are normally distributed.
1: Calculate the standard deviation for each player.
2: Given his mean and standard deviation, calculate the probability that his win rate is over 0.3.
3: Add up these probabilities over all players; this is a good estimate of how many players actually have a win rate of over 0.3.

Perhaps you may have to stop at step 0 in real life, but you can try to find a proper distribution or just use this as a guideline.
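Steps 1-3 can be sketched with the standard library's NormalDist. The player tuples in the commented example (mean, per-hand SD, hands played) are hypothetical, and the standard error of the mean is taken as sd / sqrt(n), which is an assumption on top of the recipe above:

```python
from statistics import NormalDist

def prob_rate_above(mean, sd, n_hands, threshold=0.3):
    """Step 2: P(true win rate > threshold), treating the observed mean
    win rate as normal with standard error sd / sqrt(n_hands)."""
    se = sd / n_hands ** 0.5
    return 1.0 - NormalDist(mean, se).cdf(threshold)

def expected_winners(players, threshold=0.3):
    """Step 3: summing per-player probabilities gives the expected
    number of players whose true win rate exceeds the threshold."""
    return sum(prob_rate_above(m, sd, n, threshold) for m, sd, n in players)

# players = [(0.45, 9.0, 60000), (0.31, 8.5, 20000), (-0.10, 7.0, 80000)]
# expected_winners(players)
```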
33,271
Data mining approaches for analysis of sequential data with nominal attributes
I am a novice data miner as well, but may I suggest that exploratory data analysis is always a good first step? I would see whether items can be assigned some sort of 'priority value' that predicts how early they appear in the cart, as such a result may allow you to use simpler models. Something as simple as a linear regression on (order in cart / number of items in cart) for all carts containing item X will give you an idea of whether this is possible. Suppose you find that a certain proportion of items always appear early, or late, and some seem to be completely random: this would guide you in your later model-building.
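A quick way to run that exploratory check: compute, for every item, the average of (position in cart) / (cart size). The carts in the commented call are made-up examples:

```python
from collections import defaultdict

def mean_normalized_position(carts):
    """Map each item to the mean of (0-based position / cart size),
    so values near 0 mean the item tends to enter the cart early."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for cart in carts:
        size = len(cart)
        for pos, item in enumerate(cart):
            sums[item] += pos / size
            counts[item] += 1
    return {item: sums[item] / counts[item] for item in sums}

# mean_normalized_position([["milk", "eggs"], ["milk", "bread", "eggs"]])
```

Items with consistently low (or high) averages are the candidates for a 'priority value'; items whose positions are all over the place are the ones that would need a richer model.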
33,272
How to decrease the information loss from lag variables?
You can't help but lose information when you use lags. I can't think of any way around this, except to use shorter lags.
33,273
Is there an equivalent of ARMA for rank correlation?
Answering with the comment by mpiktas, as it is IMHO a good answer. The usual answer to non-linearity in the context of ARMA models is ARCH/GARCH models. For your first question, it is probably possible to construct a PACF using rank-correlation concepts; it will probably boil down to independent component analysis. As for the second question, the answer is probably no, but the paper "Asymptotic normality of the quasi maximum likelihood estimator for multidimensional causal processes" by Bardet and Wintenberger might be of interest, since it involves a non-linear specification and ARMA is a subclass of it.
33,274
How to construct quadrats for point processes that differ greatly in frequency?
I have used quadrat analysis only on regular grids. It was helpful for the purpose, which was to compare the dispersion of sampling data with a known process, e.g., a random one; a regular grid therefore worked well. The method you developed and described may not strictly be quadrat counting. For example, in the moving-average method, one option is to count the number of neighbors for the process, i.e., averaging, which is simply done by searching within a circle (in 2D) or a sphere (in 3D). Your method looks similar, with a slightly different use of those selected samples.
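The circle-search mentioned above can be sketched directly; this is a brute-force version (real point-pattern packages use spatial indexes for large data):

```python
def neighbor_counts(points, radius):
    """For each 2-D point, count the other points lying within `radius`."""
    r2 = radius * radius
    counts = []
    for i, (xi, yi) in enumerate(points):
        c = sum(1 for j, (xj, yj) in enumerate(points)
                if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= r2)
        counts.append(c)
    return counts
```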
33,275
What temporal resolution for time series significance test?
I believe that you are trying to use statistical methods that are appropriate for independent observations while you have correlated data, both temporally and spatially. If you have observations, say, for 5 hours and decide to re-state this as 241 observations taken every minute, you really don't have 240 degrees of freedom with respect to the mean of these 241 values. Autocorrelation potentially yields an overstatement of the size of "N" and thusly creates false uncertainty statements. What you need to do is find someone/some textbook/some web site/... to teach you about time series data and its analysis. One way to start is to Google "help me understand time series" and start to read/learn. There is a lot of material available on the web. One available trove of time series information is something I helped create at http://www.autobox.com/AFSUniversity/afsuFrameset.htm . I mention this as I am still associated with this firm and its products, thus my comments are "biased and opinionated" but not solely self-serving.
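To make the overstated-"N" point concrete: for an AR(1) series with lag-1 autocorrelation rho, a standard approximation of the effective sample size is n(1 - rho)/(1 + rho). This formula is a textbook approximation, not something from the answer above; the 241-observation example is:

```python
def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def effective_n(n, rho):
    """AR(1) approximation of the effective number of independent points."""
    return n * (1 - rho) / (1 + rho)

# With rho = 0.9, the 241 minute-by-minute readings carry roughly the
# information of only 12-13 independent observations.
```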
33,276
Using autocorrelation to find commonly occurring signal fragments
Since there is quite some variation in how the steps look, you could try a statistical approach, which could, for example, be done in the following steps:

Generate the feature vector. Filter the signal with a number of filters, each having a different frequency response. A set of (Haar) wavelets might be a reasonable starting point. If your original signal has N samples and you have K filters, this filtering should result in an N-by-K matrix. Take the element-wise square to determine the energy in each of the signals.

Generate ground truth. Write down the sample numbers which mark the start of each step and store them in a vector S. Use this to make the ground-truth output data: Y = zeros(N,1); Y(S) = 1.

Train your classifier. Now you can apply a generic classification algorithm (e.g. LDA or logistic regression) to the results of steps 1 and 2. Matlab implementations should not be hard to find.

Apply your classifier to new data. For new data, repeat step 1; this can then be used as input for the classifier resulting from step 3. It might be necessary to post-process this output, for example by low-pass filtering it. Setting a threshold with some hysteresis should then give you the start of each step.
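Steps 1-2 of that recipe can be sketched in Python with simple Haar-like step filters; the filter widths below are arbitrary illustrative choices:

```python
def haar_energy_features(signal, widths=(2, 4, 8)):
    """Steps 1-2: for each width w, compare the mean of the w samples
    before index i with the mean of the w samples from i onward, and
    square the difference (the 'energy'). Returns an N-by-K matrix,
    zero-padded where the filter does not fit."""
    n = len(signal)
    columns = []
    for w in widths:
        col = [0.0] * n
        for i in range(w, n - w + 1):
            left = sum(signal[i - w:i]) / w
            right = sum(signal[i:i + w]) / w
            col[i] = (right - left) ** 2
        columns.append(col)
    # transpose to N rows x K columns
    return [list(row) for row in zip(*columns)]
```

The resulting matrix, together with the 0/1 ground-truth vector from step 2, is exactly the input a generic classifier expects.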
33,277
Is there a ML or DL tool that can learn to detect periodically occurring patterns in a one dimensional time series?
I would use a decent smooth with the characteristic time (EWMA, Savitzky-Golay, ...) on the time series, and I would look at divergence from that smooth. If you are sampling every 5 minutes then the EWMA weight should be something like 1/12 or 1/24, or the SG window size should be around 12 or 24 units in size. I would also cyclize the time: (hour of day) --> [cos(2π·hour/24), sin(2π·hour/24)] (day of week) --> [cos(2π·day/7), sin(2π·day/7)] (week of year) --> [cos(2π·week/53), sin(2π·week/53)] And add flags for weekends and holidays. You aren't going to get everything. If someone has a Super Bowl (or other sports/ball) party at their house, a kid's birthday, or another celebration, then the fridge might get a lot of atypical mileage. A decent random forest should do a solid job here. Feed it the errors, the cyclized time/date, and flags for weekend or holiday, and it should do a fair job of predicting the defrost events. If you had decent dummy data I could show this to you in pseudocode and give decent graphs and fit analyses.
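The cyclizing step can be written as a small helper; the 2π factor makes the ends of each period meet on the unit circle, so e.g. 23:59 and 00:00 become nearby feature points:

```python
import math

def cyclize(value, period):
    """Map a cyclic quantity onto the unit circle."""
    angle = 2 * math.pi * value / period
    return math.cos(angle), math.sin(angle)

def time_features(hour, weekday, week):
    """Six cyclic features: hour of day, day of week, week of year."""
    return [*cyclize(hour, 24), *cyclize(weekday, 7), *cyclize(week, 53)]
```

These six numbers, plus the weekend/holiday flags, form the feature row you would feed to the random forest alongside the smoothing errors.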
33,278
What is this chart called?
I wouldn't promote the name 'K chart' for this. That term is already used as an alternative for candlestick charts. In addition, just using a single letter as a name may sound cool, but it is also gimmicky; it is not very functional. Such single-letter names do occur, but they are more common for well-known tools and are used as abbreviations: the letter as a name comes afterwards and is typically not something to start with. I would use something more descriptive, for example: waterfall Pareto chart.

Sidenote 1: The use of two y-axes is not very easy, and when it is used it is more often in a technical and specialist environment. I see it, for instance, often used in bioengineering, where technologists map the changes/time series for multiple parameters, and they may even use a third or fourth y-axis. The usefulness of this is that the links between the multiple time series can be seen more easily. In your graph, visualizing this link might not be so necessary. In addition, the information is already shown in the height of the bars, so adding the additional level chart adds relatively little and may be mostly confusing.

Sidenote 2: Another problem with this chart is that it mostly functions when displaying data for a single case. Line graphs allow you to place multiple cases alongside each other in a single chart.

About the bonus question: I am afraid that I need to say that this point is not of much use. Or, at least, it doesn't have any specific or special meaning. The reason is that the two y-axis scales have no particularly relevant relationship with each other: the one scale is the cumulative distribution, the other scale is the density or frequency distribution scaled by the maximum value. Sure, you can attach some meaning to the crossing point (more about that later), but the choice of the point to attach this meaning to is arbitrary and not special; we could just as well use a different point, for instance the point where the one curve is 10% above the other curve. The crossing point has a visual relevance, but there is no other principle that makes this particular point important.

So what does this point indicate? When this point occurs early, it relates to a density/mass curve that is dropping relatively quickly (and at the same time the Pareto curve increases more quickly). But instead of finding the point where it crosses the Pareto curve, we can just as well use something like the halfway point, the point where the density/mass has dropped by half (or we use any other point instead of half, e.g. the point where the curve has dropped to '42%', but the use of '50%' is just easy or convenient).
33,279
What is this chart called?
Revising the Pareto Chart by Leland Wilkinson (2006) offers some interesting variations on the Pareto chart, but none that resembles the K chart. And for reference purposes, here is what a Single Axis Pareto chart looks like for the same dataset: And a Dual Axes Pareto Chart: I find the K chart much easier to read than either version of the traditional Pareto chart. Furthermore, as Leland Wilkinson correctly states: "[...], there is no theoretical justification for representing the cumulative frequencies with an interpolated line element. Since the categories cannot be assumed to be equally spaced on a scale, we are not justified in interpreting the overall slope or segment slopes in this line. For similar reasons, we are not justified in looking for “kinks” in this line to detect breakpoints or subgroups of problem categories." This is probably the biggest problem with the traditional Pareto chart, and this problem is solved by replacing the Line graph by a Level chart.
33,280
What is this chart called?
This chart is a particular case of what could be called a Univariate Combo Chart.
33,281
Rao-Blackwellization in variational inference
I don't think you can use Rao-Blackwellization here. In order to use it you need to assume the "mean field" approximation, i.e., that $q(z)=\prod_{d=1}^D q_d(z_d)$. The main idea of Rao-Blackwellization is to get rid of as many unneeded $z$ samples as possible. Assuming "mean field", we don't really need all of the dimensions of $z$ in the calculation of the ELBO derivative. So we "massage" the ELBO derivative to remove the unnecessary $z$'s. To put it another way: sampling each $z_d$ adds to the overall variance of our MC estimator; if we can somehow eliminate some of the $z$'s, that reduces the variance. But in your case, you don't assume "mean field"; instead you have a complete distribution over the entire $P$ matrix.
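As a toy illustration of the variance-reduction idea itself (not of an actual ELBO gradient), here is a sketch where the target factorizes so that one coordinate can be integrated out analytically; the target function and all numbers are made up for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
z1 = rng.standard_normal(N)
z2 = rng.standard_normal(N)

# Naive Monte Carlo estimator of E[z1^2 + z2]: sample both coordinates.
naive = z1**2 + z2              # Var = Var(z1^2) + Var(z2) = 2 + 1 = 3

# Rao-Blackwellized estimator: integrate z2 out analytically using
# E[z2 | z1] = 0, so each sample only involves z1.
rb = z1**2                      # Var = Var(z1^2) = 2

print(naive.mean(), rb.mean())  # both estimate E = 1
print(naive.var(), rb.var())    # the RB variance is strictly smaller
```

The same mechanism is what the mean-field factorization buys in the variational setting: every $z_d$ that can be conditioned away stops contributing its share of Monte Carlo noise.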
33,282
How to judge whether to model a time series additively or multiplicatively?
One method would be to use the test provided in JDemetra+, which is explained in the manual and the reference therein (though it's unclear from their list of references which one they are referencing): The test for a log-level specification used by TRAMO is based on the maximum likelihood estimation of the parameter $\lambda$ in the Box-Cox transformation, a power transformation such that the transformed values of the time series $y$ are a monotonic function of the observations, i.e. $$y_i^{(\lambda)} = \begin{cases} \dfrac{y_i^\lambda - 1}{\lambda}, & \lambda \neq 0 \\ \log y_i, & \lambda = 0. \end{cases}$$ The program first fits two Airline models (i.e. ARIMA(0,1,1)(0,1,1) with a mean) to the time series: one in logs ($\lambda = 0$), the other without logs ($\lambda = 1$). The test compares the sum of squares of the model without logs with the sum of squares of the model in logs multiplied by the square of the geometric mean of the (regularly and seasonally) differenced series. Logs are taken in case this last quantity is the minimum. GÓMEZ, V., and MARAVALL, A. (2010).
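A rough Python sketch of the comparison, with two loud simplifications: an OLS-fitted AR(1) stands in for TRAMO's Airline model, and the geometric mean of the series itself stands in for the exact Jacobian correction described above. It is meant only to show the shape of the decision rule, not to reproduce TRAMO:

```python
import numpy as np

def choose_log_level(y):
    """Pick 'log' or 'level' by comparing sums of squared residuals on a
    common scale.  Simplified stand-ins: an OLS AR(1) instead of the
    Airline model, geometric mean of y instead of the exact Jacobian term."""
    def ar1_ssr(z):
        X = np.column_stack([np.ones(len(z) - 1), z[:-1]])
        beta, *_ = np.linalg.lstsq(X, z[1:], rcond=None)
        resid = z[1:] - X @ beta
        return float(resid @ resid)

    gm = np.exp(np.mean(np.log(y)))  # geometric mean of the (positive) series
    return "log" if ar1_ssr(np.log(y)) * gm**2 < ar1_ssr(y) else "level"

rng = np.random.default_rng(0)
t = np.arange(200)
y_mult = np.exp(0.05 * t + 0.1 * rng.standard_normal(200))  # multiplicative noise
print(choose_log_level(y_mult))  # a strongly multiplicative series -> 'log'
```

The $gm^2$ factor is what makes the two sums of squares comparable: without it, the log model would win almost automatically simply because log residuals are on a smaller scale.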
33,283
Difference between Advantage Actor Critic and TD Actor Critic?
Advantage can be approximated by the TD error. This may be helpful especially if you want to update $\theta$ after each transition. For batch approaches, you can calculate $Q_w(A,S)$, e.g. by means of fitted Q-iteration, and subsequently $V(S)$. With these, you have the general advantage function, and your policy-gradient updates may be much more stable because they will be closer to the global/actual advantage function.
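For concreteness, a minimal sketch of the per-transition signal; the value table and all numbers here are hypothetical, not learned:

```python
# Hypothetical critic values for a tiny two-state MDP.
V = {"s0": 1.0, "s1": 2.0}
gamma = 0.99

def td_advantage(r, s, s_next, done=False):
    """One-step TD error, delta = r + gamma * V(s') - V(s), used as the
    advantage estimate for the action just taken in state s."""
    bootstrap = 0.0 if done else gamma * V[s_next]
    return r + bootstrap - V[s]

delta = td_advantage(r=0.5, s="s0", s_next="s1")
print(delta)  # 0.5 + 0.99 * 2.0 - 1.0 = 1.48
```

This is the quantity that can multiply the log-policy gradient after every single transition, which is exactly why it suits online updates of $\theta$.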
33,284
Difference between Advantage Actor Critic and TD Actor Critic?
They are different. Advantage is the difference between action value and state value. TD error is the error term which the value function wants to minimize. TD error can be used to approximate advantage. There are other ways to approximate advantage as well, such as (return - state_value).
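A tiny numeric sketch of the two approximations mentioned, with made-up rewards and value estimates:

```python
gamma = 0.9
rewards = [1.0, 0.0, 2.0]   # rewards observed from state s onward (episode then ends)
V_s, V_s1 = 1.5, 2.0        # critic's value estimates for s and its successor

# (a) one-step TD error as the advantage estimate for the first action:
adv_td = rewards[0] + gamma * V_s1 - V_s                # 1.0 + 1.8 - 1.5 = 1.3

# (b) Monte Carlo return minus the state value:
G = sum(gamma**i * r for i, r in enumerate(rewards))    # 1 + 0 + 0.81*2 = 2.62
adv_mc = G - V_s                                        # 2.62 - 1.5 = 1.12
print(adv_td, adv_mc)
```

Both are estimates of the same advantage; (a) bootstraps from the critic (lower variance, biased by value-function error), while (b) waits for the full return (unbiased given the episode, higher variance).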
33,285
T-test for regression coefficients obtained from Ridge, LASSO etc
I recommend the following paper to answer the posed question: Significance testing in ridge regression for genetic data: http://www.biomedcentral.com/1471-2105/12/372
33,286
Olympics - Hungary Has Double Digit Lead in Gold? (Population Relative)
Smaller countries can get an advantage in two ways. Systematic advantage, because the number of athletes per country is limited. Large countries like the USA and China, whose populations are 25 and 100 times larger than Hungary's, are not sending an equivalent number of athletes or teams. For instance, in many team sports there is only one team per country competing, and in individual sports the number of entries per country is limited. Stochastic advantage, because variations are larger for smaller countries. The coefficient of variation for the number of medals will be smaller when the expected value is larger. Example: if every athlete rolls a six-sided die, then the countries with the largest (but also the smallest) average roll will often be countries with a smaller number of athletes. See the simulation below, where we have a hundred countries with 50 athletes and a hundred countries with 200 athletes. Idea for the image from vondj's YouTube video: Kleine Schulen sind besser! Lügen mit der gefährlichsten Formel der Welt ("Small schools are better! Lying with the most dangerous formula in the world"). I imagine a graph of number of medals versus population might show some interesting insights. (There are several on the internet, but they are often for only a single year and with some noise; an average of several editions might give a good view of the relationship between medals and population.)
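The dice-roll simulation described above (the original figure is not reproduced here) can be sketched like this; the counts of countries and athletes follow the text, and the seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

# Every athlete rolls a six-sided die; a country's score is its average roll.
small = rng.integers(1, 7, size=(100, 50)).mean(axis=1)    # 100 countries, 50 athletes each
large = rng.integers(1, 7, size=(100, 200)).mean(axis=1)   # 100 countries, 200 athletes each

# Both groups have the same expected score (3.5), but the spread is larger
# for the small countries, so both the best AND the worst scores tend to
# belong to them.
print(small.std(), large.std())
```

The standard deviation of a country's average scales as $1/\sqrt{n_{\text{athletes}}}$, which is the whole stochastic advantage in one line.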
33,287
Olympics - Hungary Has Double Digit Lead in Gold? (Population Relative)
You are trying to find an estimate of any individual's chance to win a medal, knowing that the "data" we have is just the number by country. It's a great question, a fair solution being closer to the spirit of the Olympics. Basically, this is a statistical problem which is well approximated by your method as the average number (frequency) of medals (for each color) relative to the population. But how reliable is this method? This is pretty close to the problem of estimating the reliability of a binomial toss from different numbers of throws, which has applications, for instance, to comparing the quality of resellers on Amazon based on different feedback counts (see this thorough explanation). In this particular case, the population number is always large enough to make the approximation of the beta distribution with a normal, so that it is certainly possible to compare the significance of each estimate for each country.
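A sketch of that normal approximation; the medal counts and populations below are illustrative placeholders, not official figures:

```python
import math

def medal_rate_ci(medals, population, z=1.96):
    """Normal approximation to the binomial per-capita medal rate,
    crudely treating each inhabitant as an independent Bernoulli trial."""
    p = medals / population
    se = math.sqrt(p * (1 - p) / population)
    return p - z * se, p + z * se

# Illustrative numbers only:
lo_small, hi_small = medal_rate_ci(medals=6, population=10e6)    # small country
lo_large, hi_large = medal_rate_ci(medals=40, population=330e6)  # large country

print((hi_small - lo_small) / (6 / 10e6))    # relative CI width, small country
print((hi_large - lo_large) / (40 / 330e6))  # relative CI width, large country
```

The interval is much wider relative to the estimate for the small country, which is why the top of a per-capita league table is dominated by noisy estimates from small populations.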
33,288
How to understand degrees of freedom?
This is a subtle question. It takes a thoughtful person not to understand those quotations! Although they are suggestive, it turns out that none of them is exactly or generally correct. I haven't the time (and there isn't the space here) to give a full exposition, but I would like to share one approach and an insight that it suggests. Where does the concept of degrees of freedom (DF) arise? The contexts in which it's found in elementary treatments are: The Student t-test and its variants such as the Welch or Satterthwaite solutions to the Behrens-Fisher problem (where two populations have different variances). The Chi-squared distribution (defined as a sum of squares of independent standard Normals), which is implicated in the sampling distribution of the variance. The F-test (of ratios of estimated variances). The Chi-squared test, comprising its uses in (a) testing for independence in contingency tables and (b) testing for goodness of fit of distributional estimates. In spirit, these tests run a gamut from being exact (the Student t-test and F-test for Normal variates) to being good approximations (the Student t-test and the Welch/Satterthwaite tests for not-too-badly-skewed data) to being based on asymptotic approximations (the Chi-squared test). An interesting aspect of some of these is the appearance of non-integral "degrees of freedom" (the Welch/Satterthwaite tests and, as we will see, the Chi-squared test). This is of especial interest because it is the first hint that DF is not any of the things claimed of it. We can dispose right away of some of the claims in the question. Because "final calculation of a statistic" is not well-defined (it apparently depends on what algorithm one uses for the calculation), it can be no more than a vague suggestion and is worth no further criticism. Similarly, neither "number of independent scores that go into the estimate" nor "the number of parameters used as intermediate steps" are well-defined. 
"Independent pieces of information that go into [an] estimate" is difficult to deal with, because there are two different but intimately related senses of "independent" that can be relevant here. One is independence of random variables; the other is functional independence. As an example of the latter, suppose we collect morphometric measurements of subjects--say, for simplicity, the three side lengths $X$, $Y$, $Z$, surface areas $S=2(XY+YZ+ZX)$, and volumes $V=XYZ$ of a set of wooden blocks. The three side lengths can be considered independent random variables, but all five variables are dependent RVs. The five are also functionally dependent because the codomain (not the "domain"!) of the vector-valued random variable $(X,Y,Z,S,V)$ traces out a three-dimensional manifold in $\mathbb{R}^5$. (Thus, locally at any point $\omega\in\mathbb{R}^5$, there are two functions $f_\omega$ and $g_\omega$ for which $f_\omega(X(\psi),\ldots,V(\psi))=0$ and $g_\omega(X(\psi),\ldots,V(\psi))=0$ for points $\psi$ "near" $\omega$ and the derivatives of $f$ and $g$ evaluated at $\omega$ are linearly independent.) However--here's the kicker--for many probability measures on the blocks, subsets of the variables such as $(X,S,V)$ are dependent as random variables but functionally independent. Having been alerted by these potential ambiguities, let's hold up the Chi-squared goodness of fit test for examination, because (a) it's simple, (b) it's one of the common situations where people really do need to know about DF to get the p-value right and (c) it's often used incorrectly. Here's a brief synopsis of the least controversial application of this test: You have a collection of data values $(x_1, \ldots, x_n)$, considered as a sample of a population. You have estimated some parameters $\theta_1, \ldots, \theta_p$ of a distribution. 
For example, you estimated the mean $\theta_1$ and standard deviation $\theta_2 = \theta_p$ of a Normal distribution, hypothesizing that the population is normally distributed but not knowing (in advance of obtaining the data) what $\theta_1$ or $\theta_2$ might be. In advance, you created a set of $k$ "bins" for the data. (It may be problematic when the bins are determined by the data, even though this is often done.) Using these bins, the data are reduced to the set of counts within each bin. Anticipating what the true values of $(\theta)$ might be, you have arranged it so (hopefully) each bin will receive approximately the same count. (Equal-probability binning assures the chi-squared distribution really is a good approximation to the true distribution of the chi-squared statistic about to be described.) You have a lot of data--enough to assure that almost all bins ought to have counts of 5 or greater. (This, we hope, will enable the sampling distribution of the $\chi^2$ statistic to be approximated adequately by some $\chi^2$ distribution.) Using the parameter estimates, you can compute the expected count in each bin. The Chi-squared statistic is the sum of the ratios $$\frac{(\text{observed}-\text{expected})^2}{\text{expected}}.$$ This, many authorities tell us, should have (to a very close approximation) a Chi-squared distribution. But there's a whole family of such distributions. They are differentiated by a parameter $\nu$ often referred to as the "degrees of freedom." The standard reasoning about how to determine $\nu$ goes like this: I have $k$ counts. That's $k$ pieces of data. But there are (functional) relationships among them. To start with, I know in advance that the sum of the counts must equal $n$. That's one relationship. I estimated two (or $p$, generally) parameters from the data. That's two (or $p$) additional relationships, giving $p+1$ total relationships.
Presuming they (the parameters) are all (functionally) independent, that leaves only $k-p-1$ (functionally) independent "degrees of freedom": that's the value to use for $\nu$. The problem with this reasoning (which is the sort of calculation the quotations in the question are hinting at) is that it's wrong except when some special additional conditions hold. Moreover, those conditions have nothing to do with independence (functional or statistical), with numbers of "components" of the data, with the numbers of parameters, nor with anything else referred to in the original question. Let me show you with an example. (To make it as clear as possible, I'm using a small number of bins, but that's not essential.) Let's generate 20 independent and identically distributed (iid) standard Normal variates and estimate their mean and standard deviation with the usual formulas (mean = sum/count, etc.). To test goodness of fit, create four bins with cutpoints at the quartiles of a standard normal: -0.675, 0, +0.675, and use the bin counts to generate a Chi-squared statistic. Repeat as patience allows; I had time to do 10,000 repetitions. The standard wisdom about DF says we have 4 bins and 1+2 = 3 constraints, implying the distribution of these 10,000 Chi-squared statistics should follow a Chi-squared distribution with 1 DF. Here's the histogram: The dark blue line graphs the PDF of a $\chi^2(1)$ distribution--the one we thought would work--while the dark red line graphs that of a $\chi^2(2)$ distribution (which would be a good guess if someone were to tell you that $\nu=1$ is incorrect). Neither fits the data. You might expect the problem to be due to the small size of the data sets ($n$=20) or perhaps the small size of the number of bins. However, the problem persists even with very large datasets and larger numbers of bins: it is not merely a failure to reach an asymptotic approximation.
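That "standard wisdom" simulation can be sketched in Python as follows. This re-implements the procedure just described, not the author's own code; the seed and the erf-based normal CDF are incidental choices:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, n_sim = 20, 10_000
cuts = np.array([-0.6745, 0.0, 0.6745])   # quartiles of the standard normal

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

stats = np.empty(n_sim)
for i in range(n_sim):
    x = rng.standard_normal(n)
    m, s = x.mean(), x.std(ddof=1)        # estimated from the RAW data (the "wrong" way)
    observed = np.bincount(np.searchsorted(cuts, x), minlength=4)
    edges = [0.0] + [norm_cdf((c - m) / s) for c in cuts] + [1.0]
    expected = n * np.diff(edges)
    stats[i] = ((observed - expected) ** 2 / expected).sum()

# If the statistic really had 1 DF, the average would be about 1;
# it comes out clearly larger, matching the histogram described above.
print(stats.mean())
```

The average of the simulated statistics sits well above 1, which is the numerical counterpart of the histogram's failure to match the $\chi^2(1)$ curve.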
Things went wrong because I violated two requirements of the Chi-squared test: You must use the Maximum Likelihood estimate of the parameters. (This requirement can, in practice, be slightly violated.) You must base that estimate on the counts, not on the actual data! (This is crucial.) The red histogram depicts the chi-squared statistics for 10,000 separate iterations, following these requirements. Sure enough, it visibly follows the $\chi^2(1)$ curve (with an acceptable amount of sampling error), as we had originally hoped. The point of this comparison--which I hope you have seen coming--is that the correct DF to use for computing the p-values depends on many things other than dimensions of manifolds, counts of functional relationships, or the geometry of Normal variates. There is a subtle, delicate interaction between certain functional dependencies, as found in mathematical relationships among quantities, and distributions of the data, their statistics, and the estimators formed from them. Accordingly, it cannot be the case that DF is adequately explainable in terms of the geometry of multivariate normal distributions, or in terms of functional independence, or as counts of parameters, or anything else of this nature. We are led to see, then, that "degrees of freedom" is merely a heuristic that suggests what the sampling distribution of a (t, Chi-squared, or F) statistic ought to be, but it is not dispositive. Belief that it is dispositive leads to egregious errors. (For instance, the top hit on Google when searching "chi squared goodness of fit" is a Web page from an Ivy League university that gets most of this completely wrong! In particular, a simulation based on its instructions shows that the chi-squared value it recommends as having 7 DF actually has 9 DF.) 
With this more nuanced understanding, it's worthwhile to re-read the Wikipedia article in question: in its details it gets things right, pointing out where the DF heuristic tends to work and where it is either an approximation or does not apply at all. A good account of the phenomenon illustrated here (unexpectedly high DF in Chi-squared GOF tests) appears in Volume II of Kendall & Stuart, 5th edition. I am grateful for the opportunity afforded by this question to lead me back to this wonderful text, which is full of such useful analyses. Edit (Jan 2017) Here is R code to produce the figure following "The standard wisdom about DF..."

#
# Simulate data, one iteration per column of `x`.
#
n <- 20
n.sim <- 1e4
bins <- qnorm(seq(0, 1, 1/4))
x <- matrix(rnorm(n*n.sim), nrow=n)
#
# Compute statistics.
#
m <- colMeans(x)
s <- apply(sweep(x, 2, m), 2, sd)
counts <- apply(matrix(as.numeric(cut(x, bins)), nrow=n), 2, tabulate, nbins=4)
expectations <- mapply(function(m,s) n*diff(pnorm(bins, m, s)), m, s)
chisquared <- colSums((counts - expectations)^2 / expectations)
#
# Plot histograms of means, variances, and chi-squared stats.  The first
# two confirm all is working as expected.
#
mfrow <- par("mfrow")
par(mfrow=c(1,3))
red <- "#a04040"  # Intended to show correct distributions
blue <- "#404090" # To show the putative chi-squared distribution
hist(m, freq=FALSE)
curve(dnorm(x, sd=1/sqrt(n)), add=TRUE, col=red, lwd=2)
hist(s^2, freq=FALSE)
curve(dchisq(x*(n-1), df=n-1)*(n-1), add=TRUE, col=red, lwd=2)
hist(chisquared, freq=FALSE, breaks=seq(0, ceiling(max(chisquared)), 1/4),
     xlim=c(0, 13), ylim=c(0, 0.55), col="#c0c0ff", border="#404040")
curve(ifelse(x <= 0, Inf, dchisq(x, df=2)), add=TRUE, col=red, lwd=2)
curve(ifelse(x <= 0, Inf, dchisq(x, df=1)), add=TRUE, col=blue, lwd=2)
par(mfrow=mfrow)
How to understand degrees of freedom?
This is a subtle question. It takes a thoughtful person not to understand those quotations! Although they are suggestive, it turns out that none of them is exactly or generally correct. I haven't t
How to understand degrees of freedom? This is a subtle question. It takes a thoughtful person not to understand those quotations! Although they are suggestive, it turns out that none of them is exactly or generally correct. I haven't the time (and there isn't the space here) to give a full exposition, but I would like to share one approach and an insight that it suggests. Where does the concept of degrees of freedom (DF) arise? The contexts in which it's found in elementary treatments are: The Student t-test and its variants such as the Welch or Satterthwaite solutions to the Behrens-Fisher problem (where two populations have different variances). The Chi-squared distribution (defined as a sum of squares of independent standard Normals), which is implicated in the sampling distribution of the variance. The F-test (of ratios of estimated variances). The Chi-squared test, comprising its uses in (a) testing for independence in contingency tables and (b) testing for goodness of fit of distributional estimates. In spirit, these tests run a gamut from being exact (the Student t-test and F-test for Normal variates) to being good approximations (the Student t-test and the Welch/Satterthwaite tests for not-too-badly-skewed data) to being based on asymptotic approximations (the Chi-squared test). An interesting aspect of some of these is the appearance of non-integral "degrees of freedom" (the Welch/Satterthwaite tests and, as we will see, the Chi-squared test). This is of especial interest because it is the first hint that DF is not any of the things claimed of it. We can dispose right away of some of the claims in the question. Because "final calculation of a statistic" is not well-defined (it apparently depends on what algorithm one uses for the calculation), it can be no more than a vague suggestion and is worth no further criticism. 
Similarly, neither "number of independent scores that go into the estimate" nor "the number of parameters used as intermediate steps" are well-defined. "Independent pieces of information that go into [an] estimate" is difficult to deal with, because there are two different but intimately related senses of "independent" that can be relevant here. One is independence of random variables; the other is functional independence. As an example of the latter, suppose we collect morphometric measurements of subjects--say, for simplicity, the three side lengths $X$, $Y$, $Z$, surface areas $S=2(XY+YZ+ZX)$, and volumes $V=XYZ$ of a set of wooden blocks. The three side lengths can be considered independent random variables, but all five variables are dependent RVs. The five are also functionally dependent because the codomain (not the "domain"!) of the vector-valued random variable $(X,Y,Z,S,V)$ traces out a three-dimensional manifold in $\mathbb{R}^5$. (Thus, locally at any point $\omega\in\mathbb{R}^5$, there are two functions $f_\omega$ and $g_\omega$ for which $f_\omega(X(\psi),\ldots,V(\psi))=0$ and $g_\omega(X(\psi),\ldots,V(\psi))=0$ for points $\psi$ "near" $\omega$ and the derivatives of $f$ and $g$ evaluated at $\omega$ are linearly independent.) However--here's the kicker--for many probability measures on the blocks, subsets of the variables such as $(X,S,V)$ are dependent as random variables but functionally independent. Having been alerted by these potential ambiguities, let's hold up the Chi-squared goodness of fit test for examination, because (a) it's simple, (b) it's one of the common situations where people really do need to know about DF to get the p-value right and (c) it's often used incorrectly. Here's a brief synopsis of the least controversial application of this test: You have a collection of data values $(x_1, \ldots, x_n)$, considered as a sample of a population. You have estimated some parameters $\theta_1, \ldots, \theta_p$ of a distribution. 
For example, you estimated the mean $\theta_1$ and standard deviation $\theta_2 = \theta_p$ of a Normal distribution, hypothesizing that the population is normally distributed but not knowing (in advance of obtaining the data) what $\theta_1$ or $\theta_2$ might be.

In advance, you created a set of $k$ "bins" for the data. (It may be problematic when the bins are determined by the data, even though this is often done.) Using these bins, the data are reduced to the set of counts within each bin. Anticipating what the true values of $\theta$ might be, you have arranged it so (hopefully) each bin will receive approximately the same count. (Equal-probability binning assures the chi-squared distribution really is a good approximation to the true distribution of the chi-squared statistic about to be described.)

You have a lot of data--enough to assure that almost all bins ought to have counts of 5 or greater. (This, we hope, will enable the sampling distribution of the $\chi^2$ statistic to be approximated adequately by some $\chi^2$ distribution.)

Using the parameter estimates, you can compute the expected count in each bin. The Chi-squared statistic is the sum of the ratios $$\frac{(\text{observed}-\text{expected})^2}{\text{expected}}.$$ This, many authorities tell us, should have (to a very close approximation) a Chi-squared distribution. But there's a whole family of such distributions. They are differentiated by a parameter $\nu$ often referred to as the "degrees of freedom." The standard reasoning about how to determine $\nu$ goes like this:

I have $k$ counts. That's $k$ pieces of data. But there are (functional) relationships among them. To start with, I know in advance that the sum of the counts must equal $n$. That's one relationship. I estimated two (or $p$, generally) parameters from the data. That's two (or $p$) additional relationships, giving $p+1$ total relationships.
Presuming they (the parameters) are all (functionally) independent, that leaves only $k-p-1$ (functionally) independent "degrees of freedom": that's the value to use for $\nu$. The problem with this reasoning (which is the sort of calculation the quotations in the question are hinting at) is that it's wrong except when some special additional conditions hold. Moreover, those conditions have nothing to do with independence (functional or statistical), with numbers of "components" of the data, with the numbers of parameters, nor with anything else referred to in the original question.

Let me show you with an example. (To make it as clear as possible, I'm using a small number of bins, but that's not essential.) Let's generate 20 independent and identically distributed (iid) standard Normal variates and estimate their mean and standard deviation with the usual formulas (mean = sum/count, etc.). To test goodness of fit, create four bins with cutpoints at the quartiles of a standard normal: -0.675, 0, +0.675, and use the bin counts to generate a Chi-squared statistic. Repeat as patience allows; I had time to do 10,000 repetitions.

The standard wisdom about DF says we have 4 bins and 1+2 = 3 constraints, implying the distribution of these 10,000 Chi-squared statistics should follow a Chi-squared distribution with 1 DF. Here's the histogram:

The dark blue line graphs the PDF of a $\chi^2(1)$ distribution--the one we thought would work--while the dark red line graphs that of a $\chi^2(2)$ distribution (which would be a good guess if someone were to tell you that $\nu=1$ is incorrect). Neither fits the data.

You might expect the problem to be due to the small size of the data sets ($n$=20) or perhaps the small size of the number of bins. However, the problem persists even with very large datasets and larger numbers of bins: it is not merely a failure to reach an asymptotic approximation.
Things went wrong because I violated two requirements of the Chi-squared test:

You must use the Maximum Likelihood estimate of the parameters. (This requirement can, in practice, be slightly violated.)

You must base that estimate on the counts, not on the actual data! (This is crucial.)

The red histogram depicts the chi-squared statistics for 10,000 separate iterations, following these requirements. Sure enough, it visibly follows the $\chi^2(1)$ curve (with an acceptable amount of sampling error), as we had originally hoped.

The point of this comparison--which I hope you have seen coming--is that the correct DF to use for computing the p-values depends on many things other than dimensions of manifolds, counts of functional relationships, or the geometry of Normal variates. There is a subtle, delicate interaction between certain functional dependencies, as found in mathematical relationships among quantities, and distributions of the data, their statistics, and the estimators formed from them. Accordingly, it cannot be the case that DF is adequately explainable in terms of the geometry of multivariate normal distributions, or in terms of functional independence, or as counts of parameters, or anything else of this nature.

We are led to see, then, that "degrees of freedom" is merely a heuristic that suggests what the sampling distribution of a (t, Chi-squared, or F) statistic ought to be, but it is not dispositive. Belief that it is dispositive leads to egregious errors. (For instance, the top hit on Google when searching "chi squared goodness of fit" is a Web page from an Ivy League university that gets most of this completely wrong! In particular, a simulation based on its instructions shows that the chi-squared value it recommends as having 7 DF actually has 9 DF.)
With this more nuanced understanding, it's worthwhile to re-read the Wikipedia article in question: in its details it gets things right, pointing out where the DF heuristic tends to work and where it is either an approximation or does not apply at all. A good account of the phenomenon illustrated here (unexpectedly high DF in Chi-squared GOF tests) appears in Volume II of Kendall & Stuart, 5th edition. I am grateful for the opportunity afforded by this question to lead me back to this wonderful text, which is full of such useful analyses.

Edit (Jan 2017)

Here is R code to produce the figure following "The standard wisdom about DF...":

#
# Simulate data, one iteration per column of `x`.
#
n <- 20
n.sim <- 1e4
bins <- qnorm(seq(0, 1, 1/4))
x <- matrix(rnorm(n*n.sim), nrow=n)
#
# Compute statistics.
#
m <- colMeans(x)
s <- apply(sweep(x, 2, m), 2, sd)
counts <- apply(matrix(as.numeric(cut(x, bins)), nrow=n), 2, tabulate, nbins=4)
expectations <- mapply(function(m,s) n*diff(pnorm(bins, m, s)), m, s)
chisquared <- colSums((counts - expectations)^2 / expectations)
#
# Plot histograms of means, variances, and chi-squared stats.  The first
# two confirm all is working as expected.
#
mfrow <- par("mfrow")
par(mfrow=c(1,3))
red <- "#a04040"   # Intended to show correct distributions
blue <- "#404090"  # To show the putative chi-squared distribution
hist(m, freq=FALSE)
curve(dnorm(x, sd=1/sqrt(n)), add=TRUE, col=red, lwd=2)
hist(s^2, freq=FALSE)
curve(dchisq(x*(n-1), df=n-1)*(n-1), add=TRUE, col=red, lwd=2)
hist(chisquared, freq=FALSE, breaks=seq(0, ceiling(max(chisquared)), 1/4),
     xlim=c(0, 13), ylim=c(0, 0.55), col="#c0c0ff", border="#404040")
curve(ifelse(x <= 0, Inf, dchisq(x, df=2)), add=TRUE, col=red, lwd=2)
curve(ifelse(x <= 0, Inf, dchisq(x, df=1)), add=TRUE, col=blue, lwd=2)
par(mfrow=mfrow)
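(A footnote for non-R readers: the statistic itself reduces to a couple of lines. Here is a minimal Python sketch of the same computation in isolation; the bin counts are made up for illustration, not taken from the simulation above.)

```python
# Chi-squared goodness-of-fit statistic: sum of (observed - expected)^2 / expected.
observed = [4, 7, 6, 3]          # illustrative bin counts from n = 20 draws
expected = [20 / 4] * 4          # equal-probability bins: 5 expected per bin

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi2)   # 2.0
```

The open question in the answer above is not how to compute this number, but which chi-squared distribution (how many DF) to compare it against.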
How to understand degrees of freedom?
Or simply: the number of elements in a numerical array that you're allowed to change so that the value of the statistic remains unchanged.

For instance, if x + y + z = 10, you can change, say, x and y at random, but you cannot change z (you can, but not at random, therefore you're not free to change it - see Harvey's comment), because you'll change the value of the statistic (Σ = 10). So, in this case df = 2.
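As a tiny Python sketch of the same point (the values here are arbitrary):

```python
import random

# x and y are free choices, but z is then forced by the constraint
# x + y + z = 10, so only two coordinates can vary at random: df = 2.
x = random.uniform(0, 5)
y = random.uniform(0, 5)
z = 10 - x - y                     # no freedom left for z
print(abs(x + y + z - 10) < 1e-9)  # True
```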
How to understand degrees of freedom?
The concept is not at all difficult to make mathematically precise given a bit of general knowledge of $n$-dimensional Euclidean geometry, subspaces and orthogonal projections. If $P$ is an orthogonal projection from $\mathbb{R}^n$ to a $p$-dimensional subspace $L$ and $x$ is an arbitrary $n$-vector then $Px$ is in $L$, $x - Px$ and $Px$ are orthogonal and $x - Px \in L^{\perp}$ is in the orthogonal complement of $L$. The dimension of this orthogonal complement, $L^{\perp}$, is $n-p$. If $x$ is free to vary in an $n$-dimensional space then $x - Px$ is free to vary in an $n-p$ dimensional space. For this reason we say that $x - Px$ has $n-p$ degrees of freedom.

These considerations are important to statistics because if $X$ is an $n$-dimensional random vector and $L$ is a model of its mean, that is, the mean vector $E(X)$ is in $L$, then we call $X-PX$ the vector of residuals, and we use the residuals to estimate the variance. The vector of residuals has $n-p$ degrees of freedom, that is, it is constrained to a subspace of dimension $n-p$.

If the coordinates of $X$ are independent and normally distributed with the same variance $\sigma^2$ then

The vectors $PX$ and $X - PX$ are independent.

If $E(X) \in L$ the distribution of the squared norm of the vector of residuals $||X - PX||^2$ is a $\chi^2$-distribution with scale parameter $\sigma^2$ and another parameter that happens to be the degrees of freedom $n-p$.

The sketch of proof of these facts is given below. The two results are central for the further development of the statistical theory based on the normal distribution. Note also that this is why the $\chi^2$-distribution has the parametrization it has. It is also a $\Gamma$-distribution with scale parameter $2\sigma^2$ and shape parameter $(n-p)/2$, but in the context above it is natural to parametrize in terms of the degrees of freedom.
I must admit that I don't find any of the paragraphs cited from the Wikipedia article particularly enlightening, but they are not really wrong or contradictory either. They say, in an imprecise and generally loose sense, that when we compute the estimate of the variance parameter, but do so based on residuals, we base the computation on a vector that is only free to vary in a space of dimension $n-p$.

Beyond the theory of linear normal models the use of the concept of degrees of freedom can be confusing. It is, for instance, used in the parametrization of the $\chi^2$-distribution whether or not there is a reference to anything that could have any degrees of freedom. When we consider statistical analysis of categorical data there can be some confusion about whether the "independent pieces" should be counted before or after a tabulation. Furthermore, for constraints, even for normal models, that are not subspace constraints, it is not obvious how to extend the concept of degrees of freedom. Various suggestions exist, typically under the name of effective degrees of freedom.

Before any other usages and meanings of degrees of freedom are considered, I strongly recommend becoming confident with it in the context of linear normal models. A reference dealing with this model class is A First Course in Linear Model Theory, and there are additional references in the preface of the book to other classical books on linear models.

Proof of the results above: Let $\xi = E(X)$, note that the variance matrix is $\sigma^2 I$ and choose an orthonormal basis $z_1, \ldots, z_p$ of $L$ and an orthonormal basis $z_{p+1}, \ldots, z_n$ of $L^{\perp}$. Then $z_1, \ldots, z_n$ is an orthonormal basis of $\mathbb{R}^n$. Let $\tilde{X}$ denote the $n$-vector of the coefficients of $X$ in this basis, that is $$\tilde{X}_i = z_i^T X.$$ This can also be written as $\tilde{X} = Z^T X$ where $Z$ is the orthogonal matrix with the $z_i$'s in the columns.
Then we have to use that $\tilde{X}$ has a normal distribution with mean $Z^T \xi$ and, because $Z$ is orthogonal, variance matrix $\sigma^2 I$. This follows from general linear transformation results of the normal distribution. The basis was chosen so that the coefficients of $PX$ are $\tilde{X}_i$ for $i= 1, \ldots, p$, and the coefficients of $X - PX$ are $\tilde{X}_i$ for $i= p+1, \ldots, n$. Since the coefficients are uncorrelated and jointly normal, they are independent, and this implies that $$PX = \sum_{i=1}^p \tilde{X}_i z_i$$ and $$X - PX = \sum_{i=p+1}^n \tilde{X}_i z_i$$ are independent. Moreover, $$||X - PX||^2 = \sum_{i=p+1}^n \tilde{X}_i^2.$$ If $\xi \in L$ then $E(\tilde{X}_i) = z_i^T \xi = 0$ for $i = p +1, \ldots, n$ because then $z_i \in L^{\perp}$ and hence $z_i \perp \xi$. In this case $||X - PX||^2$ is the sum of $n-p$ independent $N(0, \sigma^2)$-distributed random variables, whose distribution, by definition, is a $\chi^2$-distribution with scale parameter $\sigma^2$ and $n-p$ degrees of freedom.
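The projection geometry above is easy to check numerically. Below is a minimal Python sketch (not part of the original answer; the choices $n = 8$, mean $5$, $\sigma = 2$ are illustrative) taking $L$ to be the span of the all-ones vector, so $p = 1$ and the residual vector should carry $n - 1 = 7$ degrees of freedom:

```python
import random

random.seed(0)
n, sigma = 8, 2.0   # illustrative: p = 1, so n - p = 7

def residual_ss(xs):
    # With L = span{(1, ..., 1)}, PX equals the sample mean in every
    # coordinate, so X - PX is the familiar vector of residuals.
    xbar = sum(xs) / len(xs)
    return sum((x - xbar) ** 2 for x in xs)

# Monte Carlo check: ||X - PX||^2 / sigma^2 should behave like a
# chi-squared variable with n - 1 = 7 degrees of freedom (mean 7).
sims = [residual_ss([random.gauss(5.0, sigma) for _ in range(n)]) / sigma**2
        for _ in range(20000)]
print(sum(sims) / len(sims))   # close to 7
```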
How to understand degrees of freedom?
It's really no different from the way the term "degrees of freedom" works in any other field. For example, suppose you have four variables: the length, the width, the area, and the perimeter of a rectangle. Do you really know four things? No, because there are only two degrees of freedom. If you know the length and the width, you can derive the area and the perimeter. If you know the length and the area, you can derive the width and the perimeter. If you know the area and the perimeter you can derive the length and the width (up to rotation). If you have all four, you can either say that the system is consistent (all of the variables agree with each other), or inconsistent (no rectangle could actually satisfy all of the conditions). A square is a rectangle with a degree of freedom removed; if you know any side of a square or its perimeter or its area, you can derive all of the others because there's only one degree of freedom.

In statistics, things get more fuzzy, but the idea is still the same. If all of the data that you're using as the input for a function are independent variables, then you have as many degrees of freedom as you have inputs. But if they have dependence in some way, such that if you had n - k inputs you could figure out the remaining k, then you've actually only got n - k degrees of freedom. And sometimes you need to take that into account, lest you convince yourself that the data are more reliable or have more predictive power than they really do, by counting more data points than you really have independent bits of data.

(Taken from a post at http://www.reddit.com/r/math/comments/9qbut/could_someone_explain_to_me_what_degrees_of/c0dxtbq?context=3.)

Moreover, all three definitions are almost trying to give the same message.
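The rectangle example can be made concrete in a few lines of Python (a sketch added for illustration; the helper names are made up): two free quantities determine the other two, and going from area and perimeter back to the sides is just solving a quadratic.

```python
import math

def from_length_width(length, width):
    # two free quantities determine the other two
    return length * width, 2 * (length + width)

def sides_from_area_perimeter(area, perimeter):
    # length and width are the roots of t^2 - (perimeter/2) t + area = 0,
    # since they have sum perimeter/2 and product area
    half = perimeter / 2
    disc = math.sqrt(half ** 2 - 4 * area)
    return (half - disc) / 2, (half + disc) / 2

area, perim = from_length_width(3, 5)          # 15, 16
print(sides_from_area_perimeter(area, perim))  # (3.0, 5.0)
```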
How to understand degrees of freedom?
I really like the first sentence from the Degrees of Freedom chapter of The Little Handbook of Statistical Practice:

One of the questions an instructor dreads most from a mathematically unsophisticated audience is, "What exactly is degrees of freedom?"

I think you can get a really good understanding of degrees of freedom from reading this chapter.
How to understand degrees of freedom?
Wikipedia asserts that degrees of freedom of a random vector can be interpreted as the dimensions of the vector subspace. I want to go step-by-step, very basically through this as a partial answer and elaboration on the Wikipedia entry. The example proposed is that of a random vector corresponding to the measurements of a continuous variable for different subjects, expressed as a vector extending from the origin $[a\,b\,c]^T$. Its orthogonal projection on the vector $[1\,1\,1]^T$ results in a vector equal to the projection of the vector of measurement means ($\bar{x}=1/3(a+b+c)$), i.e. $[\bar x \, \bar x \, \bar x]^T$, dotted with the $\vec{1}$ vector, $[1\,1\,1]^T $ This projection onto the subspace spanned by the vector of ones has $1\,\text{degree of freedom}$. The residual vector (distance from the mean) is the least-squares projection onto the $(n − 1)$-dimensional orthogonal complement of this subspace, and has $n − 1\,\text{degrees of freedom}$, $n$ being the total number of components of the vector (in our case $3$ since we are in $\mathbb{R}^3$ in the example).This can be simply proven by obtaining the dot product of $[\bar{x}\,\bar{x}\,\bar{x}]^T$ with the difference between $[a\,b\,c]^T$ and $[\bar{x}\,\bar{x}\,\bar{x}]^T$: $$ [\bar{x}\, \bar{x}\,\bar{x}]\, \begin{bmatrix} a-\bar{x}\\b-\bar{x}\\c-\bar{x}\end{bmatrix}=$$ $$= \bigg[\tiny\frac{(a+b+c)}{3}\, \bigg(a-\frac{(a+b+c)}{3}\bigg)\bigg]+ \bigg[\tiny\frac{(a+b+c)}{3} \,\bigg(b-\frac{(a+b+c)}{3}\bigg)\bigg]+ \bigg[\tiny\frac{(a+b+c)}{3} \,\bigg(c-\frac{(a+b+c)}{3}\bigg)\bigg]$$ $$=\tiny \frac{(a+b+c)}{3}\bigg[ \bigg(\tiny a-\frac{(a+b+c)}{3}\bigg)+ \bigg(b-\frac{(a+b+c)}{3}\bigg)+ \bigg(c-\frac{(a+b+c)}{3}\bigg)\bigg]$$ $$= \tiny \frac{(a+b+c)}{3}\bigg[\tiny \frac{1}{3} \bigg(\tiny 3a-(a+b+c)+ 3b-(a+b+c)+3c-(a+b+c)\bigg)\bigg]$$ $$=\tiny\frac{(a+b+c)}{3}\bigg[\tiny\frac{1}{3} (3a-3a+ 3b-3b+3c-3c)\bigg]\large= 0$$. 
And this relationship extends to any point in a plane orthogonal to $[\bar{x}\,\bar{x}\,\bar{x}]^T$. This concept is important in understanding why $\frac 1 {\sigma^2} \Big((X_1-\bar X)^2 + \cdots + (X_n - \bar X)^2 \Big) \sim \chi^2_{n-1}$, a step in the derivation of the t-distribution (here and here).

Let's take the point $[35\,50\,80]^T$, corresponding to three observations. The mean is $55$, and the vector $[55\,\,55\,\,55]^T$ is normal (orthogonal) to the plane $55x + 55y + 55z = D$. Plugging the point coordinates into the plane equation gives $D = 9075$. Now we can choose any other point in this plane, and the mean of its coordinates is going to be $55$, geometrically corresponding to its projection onto the vector $[1\,\,1\,\,1]^T$. Hence for every mean value (in our example, $55$) we can choose an infinite number of pairs of coordinates in $\mathbb{R}^2$ without restriction ($2\,\text{degrees of freedom}$); yet, since the plane is in $\mathbb{R}^3$, the third coordinate will come determined by the equation of the plane (or, geometrically, by the orthogonal projection of the point onto $[55\,\,55\,\,55]^T$).

Here is a representation of three points (in white) lying on the plane (cerulean blue) orthogonal to $[55\,\,55\,\,55]^T$ (arrow): $[35\,\,50\,\,80]^T$, $[80\,\,80\,\,5]^T$ and $[90\,\,15\,\,60]^T$, all of them on the plane (subspace with $2\,\text{df}$), each with a mean of its components of $55$, and an orthogonal projection onto $[1\,\,1\,\,1]^T$ (subspace with $1\,\text{df}$) equal to $[55\,\,55\,\,55]^T$:
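A quick numerical check of these claims (a small Python sketch, using the same three points from the example above):

```python
# The three points claimed to lie on the plane orthogonal to (55, 55, 55).
points = [(35, 50, 80), (80, 80, 5), (90, 15, 60)]

ones = (1, 1, 1)
for p in points:
    mean = sum(p) / 3
    # projection onto span{(1,1,1)} is (mean, mean, mean) = (55, 55, 55)
    assert mean == 55
    # the residual is orthogonal to (1,1,1): it lives in the 2-df plane
    resid = tuple(c - mean for c in p)
    assert abs(sum(r * o for r, o in zip(resid, ones))) < 1e-9
```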
How to understand degrees of freedom?
Wikipedia asserts that degrees of freedom of a random vector can be interpreted as the dimensions of the vector subspace. I want to go step-by-step, very basically through this as a partial answer and
How to understand degrees of freedom? Wikipedia asserts that degrees of freedom of a random vector can be interpreted as the dimensions of the vector subspace. I want to go step-by-step, very basically through this as a partial answer and elaboration on the Wikipedia entry. The example proposed is that of a random vector corresponding to the measurements of a continuous variable for different subjects, expressed as a vector extending from the origin $[a\,b\,c]^T$. Its orthogonal projection on the vector $[1\,1\,1]^T$ results in a vector equal to the projection of the vector of measurement means ($\bar{x}=1/3(a+b+c)$), i.e. $[\bar x \, \bar x \, \bar x]^T$, dotted with the $\vec{1}$ vector, $[1\,1\,1]^T $ This projection onto the subspace spanned by the vector of ones has $1\,\text{degree of freedom}$. The residual vector (distance from the mean) is the least-squares projection onto the $(n − 1)$-dimensional orthogonal complement of this subspace, and has $n − 1\,\text{degrees of freedom}$, $n$ being the total number of components of the vector (in our case $3$ since we are in $\mathbb{R}^3$ in the example).This can be simply proven by obtaining the dot product of $[\bar{x}\,\bar{x}\,\bar{x}]^T$ with the difference between $[a\,b\,c]^T$ and $[\bar{x}\,\bar{x}\,\bar{x}]^T$: $$ [\bar{x}\, \bar{x}\,\bar{x}]\, \begin{bmatrix} a-\bar{x}\\b-\bar{x}\\c-\bar{x}\end{bmatrix}=$$ $$= \bigg[\tiny\frac{(a+b+c)}{3}\, \bigg(a-\frac{(a+b+c)}{3}\bigg)\bigg]+ \bigg[\tiny\frac{(a+b+c)}{3} \,\bigg(b-\frac{(a+b+c)}{3}\bigg)\bigg]+ \bigg[\tiny\frac{(a+b+c)}{3} \,\bigg(c-\frac{(a+b+c)}{3}\bigg)\bigg]$$ $$=\tiny \frac{(a+b+c)}{3}\bigg[ \bigg(\tiny a-\frac{(a+b+c)}{3}\bigg)+ \bigg(b-\frac{(a+b+c)}{3}\bigg)+ \bigg(c-\frac{(a+b+c)}{3}\bigg)\bigg]$$ $$= \tiny \frac{(a+b+c)}{3}\bigg[\tiny \frac{1}{3} \bigg(\tiny 3a-(a+b+c)+ 3b-(a+b+c)+3c-(a+b+c)\bigg)\bigg]$$ $$=\tiny\frac{(a+b+c)}{3}\bigg[\tiny\frac{1}{3} (3a-3a+ 3b-3b+3c-3c)\bigg]\large= 0$$. 
And this relationship extends to any point in a plane orthogonal to $[\bar{x}\,\bar{x}\,\bar{x}]^T$. This concept is important in understanding why $\frac 1 {\sigma^2} \Big((X_1-\bar X)^2 + \cdots + (X_n - \bar X)^2 \Big) \sim \chi^2_{n-1}$, a step in the derivation of the t-distribution(here and here). Let's take the point $[35\,50\,80]^T$, corresponding to three observations. The mean is $55$, and the vector $[55\,\,55\,\,55]^T$ is the normal (orthogonal) to a plane, $55x + 55y + 55z = D$. Plugging in the point coordinates into the plane equation, $D = -9075$. Now we can choose any other point in this plane, and the mean of its coordinates is going to be $55$, geometrically corresponding to its projection onto the vector $[1\,\,1\,\,1]^T$. Hence for every mean value (in our example, $55$) we can choose an infinite number of pairs of coordinates in $\mathbb{R}^2$ without restriction ($2\,\text{degrees of freedom}$); yet, since the plane is in $\mathbb{R}^3$, the third coordinate will come determined by the equation of the plane (or, geometrically the orthogonal projection of the point onto $[55\,\,55\,\,55]^T$. Here is representation of three points (in white) lying on the plane (cerulean blue) orthogonal to $[55\,\,55\,\,55]^T$ (arrow): $[35\,\,50\,\,80]^T$, $[80\,\,80\,\,5]$ and $[90\,\,15\,\,60]$ all of them on the plane (subspace with $2\,\text{df}$), and then with a mean of their components of $55$, and an orthogonal projection to $[1\,\,1\,\,1]^T$ (subspace with $1\,\text{df}$) equal to $[55\,\,55\,\,55]^T$:
How to understand degrees of freedom?
In my classes, I use one "simple" situation that might help you wonder and perhaps develop a gut feeling for what a degree of freedom may mean. It is kind of a "Forrest Gump" approach to the subject, but it is worth the try. Consider you have 10 independent observations $X_1, X_2, \ldots, X_{10}\sim N(\mu,\sigma^2)$ that came right from a normal population whose mean $\mu$ and variance $\sigma^2$ are unknown. Your observations collectively bring you information both about $\mu$ and about $\sigma^2$. After all, your observations tend to be spread around one central value, which ought to be close to the actual and unknown value of $\mu$; likewise, if $\mu$ is very high or very low, then you can expect to see your observations gather around a very high or very low value, respectively. One good "substitute" for $\mu$ (in the absence of knowledge of its actual value) is $\bar X$, the average of your observations. Also, if your observations are very close to one another, that is an indication that you can expect $\sigma^2$ to be small; likewise, if $\sigma^2$ is very large, then you can expect to see wildly different values for $X_1$ to $X_{10}$. If you were to bet your week's wage on the actual values of $\mu$ and $\sigma^2$, you would need to choose a pair of values on which to bet your money. Let's not think of anything as dramatic as losing your paycheck unless you guess $\mu$ correctly down to its 200th decimal position. Nope. Let's think of some sort of prize system in which the closer you guess $\mu$ and $\sigma^2$, the more you get rewarded. In some sense, your best, more informed, and more polite guess for $\mu$'s value would be $\bar X$. In that sense, you estimate that $\mu$ must be some value around $\bar X$. Similarly, one good "substitute" for $\sigma^2$ (not required for now) is $S^2$, your sample variance, which makes a good estimate for $\sigma^2$.
If you were to believe that those substitutes are the actual values of $\mu$ and $\sigma^2$, you would probably be wrong, because very slim are the chances that you were so lucky that your observations coordinated themselves to give you the gift of $\bar X$ being equal to $\mu$ and $S^2$ equal to $\sigma^2$. Nah, probably it didn't happen. But you could be at different levels of wrong, varying from a bit wrong to really, really, really miserably wrong (a.k.a., "Bye-bye, paycheck; see you next week!"). Ok, let's say that you took $\bar X$ as your guess for $\mu$. Consider just two scenarios: $S^2=2$ and $S^2=20{,}000{,}000$. In the first, your observations sit pretty and close to one another. In the latter, your observations vary wildly. In which scenario should you be more concerned about your potential losses? If you thought of the second one, you're right. Having an estimate of $\sigma^2$ changes your confidence in your bet very reasonably, for the larger $\sigma^2$ is, the more widely you can expect $\bar X$ to vary. But, beyond information about $\mu$ and $\sigma^2$, your observations also carry some amount of pure random fluctuation that is informative neither about $\mu$ nor about $\sigma^2$. How can you notice it? Well, let's assume, for the sake of argument, that there is a God and that He has enough spare time to give Himself the frivolity of telling you specifically the real (and so far unknown) values of both $\mu$ and $\sigma^2$. And here is the annoying plot twist of this lysergic tale: He tells it to you after you placed your bet. Perhaps to enlighten you, perhaps to prepare you, perhaps to mock you. How could you know? Well, that makes the information about $\mu$ and $\sigma^2$ contained in your observations quite useless now. Your observations' central position $\bar X$ and variance $S^2$ are no longer of any help in getting closer to the actual values of $\mu$ and $\sigma^2$, for you already know them.
One of the benefits of your good acquaintance with God is that you actually know by how much you failed to guess $\mu$ correctly by using $\bar X$, that is, $(\bar X - \mu)$, your estimation error. Well, since $X_i\sim N(\mu,\sigma^2)$, then $\bar X\sim N(\mu,\sigma^2/10)$ (trust me on that if you will), also $(\bar X - \mu)\sim N(0,\sigma^2/10)$ (ok, trust me on that one too) and, finally, $$ \frac{\bar X - \mu}{\sigma/\sqrt{10}} \sim N(0,1) $$ (guess what? trust me on that one as well), which carries absolutely no information about $\mu$ or $\sigma^2$. You know what? If you took any one of your individual observations as a guess for $\mu$, your estimation error $(X_i-\mu)$ would be distributed as $N(0,\sigma^2)$. Well, between estimating $\mu$ with $\bar X$ and with any $X_i$, choosing $\bar X$ would be better business, because $Var(\bar X) = \sigma^2/10 < \sigma^2 = Var(X_i)$, so $\bar X$ is less prone to stray from $\mu$ than an individual $X_i$. Anyway, $(X_i-\mu)/\sigma\sim N(0,1)$ is also absolutely uninformative about both $\mu$ and $\sigma^2$. "Will this tale ever end?" you may be thinking. You also may be thinking "Is there any more random fluctuation that is non-informative about $\mu$ and $\sigma^2$?". [I prefer to think that you are thinking of the latter.] Yes, there is! The square of your estimation error for $\mu$ with $X_i$, divided by $\sigma^2$, $$ \frac{(X_i-\mu)^2}{\sigma^2} = \left(\frac{X_i-\mu}{\sigma}\right)^2 \sim \chi^2 $$ has a Chi-squared distribution, which is the distribution of the square $Z^2$ of a standard Normal $Z\sim N(0,1)$, which, I am sure you noticed, has absolutely no information about either $\mu$ or $\sigma^2$, but conveys information about the variability you should expect to face.
That is a very well known distribution that arises naturally from the very scenario of your gambling problem, for every single one of your ten observations and also for your mean: $$ \frac{(\bar X-\mu)^2}{\sigma^2/10} = \left(\frac{\bar X-\mu}{\sigma/\sqrt{10}}\right)^2 = \left(N(0,1)\right)^2 \sim\chi^2 $$ and also from the gathering of your ten observations' variation: $$ \sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2} =\sum_{i=1}^{10} \left(\frac{X_i-\mu}{\sigma}\right)^2 =\sum_{i=1}^{10} \left(N(0,1)\right)^2 =\sum_{i=1}^{10} \chi^2. $$ Now that last guy doesn't have a Chi-squared distribution, because it is the sum of ten of those Chi-squared distributions, all of them independent from one another (because so are $X_1, \ldots, X_{10}$). Each one of those single Chi-squared terms is one contribution to the amount of random variability you should expect to face, with roughly the same amount of contribution to the sum. The value of each contribution is not mathematically equal to the other nine, but all of them have the same expected behavior in distribution. In that sense, they are somehow symmetric. Each one of those Chi-squares is one contribution to the amount of pure, random variability you should expect in that sum. If you had 100 observations, the sum above would be expected to be bigger just because it would have more sources of contributions. Each of those "sources of contributions" with the same behavior can be called a degree of freedom. Now take one or two steps back, and re-read the previous paragraphs if needed to accommodate the sudden arrival of your quested-for degree of freedom. Yep, each degree of freedom can be thought of as one unit of variability that is obligatorily expected to occur and that brings nothing to the improvement of the guessing of $\mu$ or $\sigma^2$. The thing is, you start to count on the behavior of those 10 equivalent sources of variability.
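To make the "10 sources of variability" picture concrete, here is a small simulation sketch of mine (not part of the original answer), using an assumed 100,000 draws: the sum of 10 squared standard normals averages out to 10, its number of degrees of freedom.

```python
import random

random.seed(0)

def chi2_10_draw():
    # sum of 10 independent squared N(0,1) draws: one chi-squared_10 draw
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(10))

draws = [chi2_10_draw() for _ in range(100_000)]
mean_draw = sum(draws) / len(draws)
print(round(mean_draw, 2))  # close to 10: each "source" contributes about 1
```

Each of the ten squared terms contributes, on average, one unit to the sum, which is exactly the "one unit of variability per degree of freedom" reading.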
If you had 100 observations, you would have 100 independent, equally-behaved sources of strictly random fluctuation in that sum. That sum of 10 Chi-squares gets called a Chi-squared distribution with 10 degrees of freedom from now on, and is written $\chi^2_{10}$. We can describe what to expect from it starting from its probability density function, which can be mathematically derived from the density of that single Chi-squared distribution (from now on called the Chi-squared distribution with one degree of freedom and written $\chi^2_1$), which in turn can be mathematically derived from the density of the normal distribution. "So what?" --- you might be thinking --- "That is of any good only if God took the time to tell me the values of $\mu$ and $\sigma^2$, of all the things He could tell me!" Indeed, if God Almighty were too busy to tell you the values of $\mu$ and $\sigma^2$, you would still have those 10 sources, those 10 degrees of freedom. Things start to get weird (Hahahaha; only now!) when you rebel against God and try to get along all by yourself, without expecting Him to patronize you. You have $\bar X$ and $S^2$, estimators for $\mu$ and $\sigma^2$. You can find your way to a safer bet. You could consider calculating the sum above with $\bar X$ and $S^2$ in the places of $\mu$ and $\sigma^2$: $$ \sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{S^2} =\sum_{i=1}^{10} \left(\frac{X_i-\bar X}{S}\right)^2, $$ but that is not the same as the original sum. "Why not?" The terms inside the squares of the two sums are very different. For instance, it is unlikely but possible that all your observations end up being larger than $\mu$, in which case $(X_i-\mu) > 0$, which implies $\sum_{i=1}^{10}(X_i-\mu) > 0$; but, in its turn, $\sum_{i=1}^{10}(X_i-\bar X) = 0$ always, because $\sum_{i=1}^{10}X_i-10 \bar X =10 \bar X - 10 \bar X = 0$. Worse, you can prove easily (Hahahaha; right!)
that $\sum_{i=1}^{10}(X_i-\bar X)^2 \le \sum_{i=1}^{10}(X_i-\mu)^2$, with strict inequality when at least two observations are different (which is not unusual). "But wait! There's more!" $$ \frac{X_i-\bar X}{S} $$ doesn't have a standard normal distribution, $$ \frac{(X_i-\bar X)^2}{S^2} $$ doesn't have a Chi-squared distribution with one degree of freedom, $$ \sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{S^2} $$ doesn't have a Chi-squared distribution with 10 degrees of freedom, and $$ \frac{\bar X-\mu}{S/\sqrt{10}} $$ doesn't have a standard normal distribution. "Was it all for nothing?" No way. Now comes the magic! Note that $$\begin{aligned} \sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{\sigma^2} &=\sum_{i=1}^{10} \frac{[(X_i-\mu)-(\bar X-\mu)]^2}{\sigma^2} \\[6pt] &=\sum_{i=1}^{10} \frac{(X_i-\mu)^2-2(X_i-\mu)(\bar X-\mu)+(\bar X-\mu)^2}{\sigma^2} \\[6pt] &=\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2} -\frac{2(\bar X-\mu)}{\sigma^2}\sum_{i=1}^{10}(X_i-\mu) +10\,\frac{(\bar X-\mu)^2}{\sigma^2} \\[6pt] &=\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2} -20\,\frac{(\bar X-\mu)^2}{\sigma^2} +10\,\frac{(\bar X-\mu)^2}{\sigma^2} \quad\text{(since } \textstyle\sum_{i=1}^{10}(X_i-\mu)=10(\bar X-\mu)\text{)} \\[6pt] &=\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2}-10\,\frac{(\bar X-\mu)^2}{\sigma^2} \\[6pt] &=\sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2}-\frac{(\bar X-\mu)^2}{\sigma^2/10} \end{aligned}$$ or, equivalently, $$ \sum_{i=1}^{10} \frac{(X_i-\mu)^2}{\sigma^2} =\sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{\sigma^2} +\frac{(\bar X-\mu)^2}{\sigma^2/10}. $$ Now we get back to those known faces. The first term has a Chi-squared distribution with 10 degrees of freedom and the last term has a Chi-squared distribution with one degree of freedom(!). We simply split a Chi-square with 10 independent equally-behaved sources of variability into two parts, both positive: one part is a Chi-square with one source of variability, and the other we can prove (leap of faith? win by W.O.?) to be also a Chi-square with 9 (= 10 - 1) independent equally-behaved sources of variability, with the two parts independent from one another. This is already good news, since now we have its distribution.
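The decomposition is pure algebra, so it holds exactly for any sample; here is a quick numerical check of mine (with arbitrary illustration values $\mu = 5$, $\sigma = 2$, not from the original answer):

```python
import random

random.seed(1)

mu, sigma, n = 5.0, 2.0, 10          # arbitrary illustration values
x = [random.gauss(mu, sigma) for _ in range(n)]
xbar = sum(x) / n

# chi^2_10 total on the left; chi^2_9 part + chi^2_1 part on the right
lhs = sum((xi - mu) ** 2 for xi in x) / sigma**2
rhs = (sum((xi - xbar) ** 2 for xi in x) / sigma**2
       + (xbar - mu) ** 2 / (sigma**2 / n))
print(abs(lhs - rhs) < 1e-9)  # True: the identity holds sample by sample
```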
Alas, it uses $\sigma^2$, to which we have no access (recall that God is amusing Himself watching our struggle). Well, $$ S^2=\frac{1}{10-1}\sum_{i=1}^{10} (X_i-\bar X)^2, $$ so $$ \sum_{i=1}^{10} \frac{(X_i-\bar X)^2}{\sigma^2} =\frac{\sum_{i=1}^{10} (X_i-\bar X)^2}{\sigma^2} =\frac{(10-1)S^2}{\sigma^2} \sim\chi^2_{(10-1)} $$ therefore $$ \frac{\bar X-\mu}{S/\sqrt{10}} =\frac{\frac{\bar X-\mu}{\sigma/\sqrt{10}}}{\frac{S}{\sigma}} =\frac{\frac{\bar X-\mu}{\sigma/\sqrt{10}}}{\sqrt{\frac{S^2}{\sigma^2}}} =\frac{\frac{\bar X-\mu}{\sigma/\sqrt{10}}}{\sqrt{\frac{\frac{(10-1)S^2}{\sigma^2}}{(10-1)}}} =\frac{N(0,1)}{\sqrt{\frac{\chi^2_{(10-1)}}{(10-1)}}}, $$ which is a distribution that is not the standard normal, but whose density can be derived from the densities of the standard normal and the Chi-squared with $(10-1)$ degrees of freedom. One very, very smart guy did that math[^1] at the beginning of the 20th century and, as an unintended consequence, he made his boss the absolute world leader in the industry of stout beer. I am talking about William Sealy Gosset (a.k.a. Student; yes, that Student, of the $t$ distribution) and Saint James's Gate Brewery (a.k.a. Guinness Brewery), of which I am a devotee. [^1]: @whuber told in the comments below that Gosset did not do the math, but guessed instead! I really don't know which feat is more surprising for that time. That, my dear friend, is the origin of the $t$ distribution with $(10-1)$ degrees of freedom: the ratio of a standard normal and the square root of an independent Chi-square divided by its degrees of freedom, which, in an unpredictable turn of the tides, winds up describing the expected behavior of the estimation error you incur when using the sample average $\bar X$ to estimate $\mu$ and using $S^2$ to estimate the variability of $\bar X$. There you go. With an awful lot of technical details grossly swept under the rug, but not depending solely on God's intervention to dangerously bet your whole paycheck.
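As an illustrative simulation of my own (not Gosset's derivation): the statistic $(\bar X - \mu)/(S/\sqrt{10})$ really is more spread out than a standard normal. Its variance comes out near $9/7 \approx 1.29$, the variance of a $t$ distribution with 9 degrees of freedom, rather than 1.

```python
import random

random.seed(2)

mu, sigma, n = 0.0, 1.0, 10

def t_stat():
    # the studentized mean: the unknown sigma is replaced by the sample S
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(x) / n
    s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    return (xbar - mu) / (s2 / n) ** 0.5

draws = [t_stat() for _ in range(100_000)]
var = sum(t * t for t in draws) / len(draws)
print(round(var, 3))  # near 9/7 ~ 1.286 (a t_9), noticeably above 1 (a N(0,1))
```

The extra spread is the price of estimating $\sigma^2$ with $S^2$, which is exactly what the heavier tails of the $t$ distribution encode.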
How to understand degrees of freedom?
This particular issue is quite frustrating for students in statistics courses, since they often cannot get a straight answer on exactly what a degree-of-freedom is defined to be. I will try to clear that up here. Suppose we have a random vector $\mathbf{x} \in \mathbb{R}^n$ and we form a new random vector $\mathbf{t} = T(\mathbf{x})$ via the linear function $T$. Formally, the degrees-of-freedom of $\mathbf{t}$ is the dimension of the space of allowable values for this vector, which is: $$DF \equiv \dim \mathscr{T} \equiv \dim \{ \mathbf{t} = T(\mathbf{x}) | \mathbf{x} \in \mathbb{R}^n \}.$$ The initial random vector $\mathbf{x}$ has an allowable space of dimension $n$, so it has $n$ degrees of freedom. Often the function $T$ will reduce the dimension of the allowable space of outcomes, and so $\mathbf{t}$ may have a lower degrees-of-freedom than $\mathbf{x}$. For example, in an answer to a related question you can see this formal definition of the degrees-of-freedom being used to explain Bessel's correction in the sample variance formula. In that particular case, transforming an initial sample to obtain its deviations from the sample mean leads to a deviation vector that has $n-1$ degrees-of-freedom (i.e., it is a vector in an allowable space with dimension $n-1$). When you apply this formal definition to statistical problems, you will usually find that the imposition of a single "constraint" on the random vector (via a linear equation on that vector) reduces the dimension of its allowable values by one, and thus reduces the degrees-of-freedom by one. As such, you will find that the above formal definition corresponds with the informal explanations you have been given. In undergraduate courses on statistics, you will generally find a lot of hand-waving and informal explanation of degrees-of-freedom, often via analogies or examples. 
The reason for this is that the formal definition requires an understanding of vector algebra and the geometry of vector spaces, which may be lacking in introductory statistics courses at an undergraduate level.
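As a small illustration of the formal definition (my own sketch, not part of the original answer): the deviations-from-the-mean map $T(\mathbf{x}) = \mathbf{x} - \bar{x}\mathbf{1}$ has matrix $\mathbf{C} = \mathbf{I} - \mathbf{J}/n$, which is symmetric and idempotent, so its rank, i.e. the dimension of the allowable space and hence the degrees-of-freedom, equals its trace, $n - 1$.

```python
# Build the centering matrix C = I - J/n for n = 10, check idempotency
# (C @ C == C), and read off rank C = trace C = n - 1.
n = 10
C = [[(1.0 if i == j else 0.0) - 1.0 / n for j in range(n)] for i in range(n)]

CC = [[sum(C[i][k] * C[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
idempotent = all(abs(CC[i][j] - C[i][j]) < 1e-12
                 for i in range(n) for j in range(n))

trace = sum(C[i][i] for i in range(n))
print(idempotent, round(trace))  # True 9 -> the deviation vector has 9 df
```

The trace shortcut works precisely because the eigenvalues of an idempotent matrix are all 0 or 1, so trace = rank.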
How to understand degrees of freedom?
This particular issue is quite frustrating for students in statistics courses, since they often cannot get a straight answer on exactly what a degree-of-freedom is defined to be. I will try to clear
How to understand degrees of freedom? This particular issue is quite frustrating for students in statistics courses, since they often cannot get a straight answer on exactly what a degree-of-freedom is defined to be. I will try to clear that up here. Suppose we have a random vector $\mathbf{x} \in \mathbb{R}^n$ and we form a new random vector $\mathbf{t} = T(\mathbf{x})$ via the linear function $T$. Formally, the degrees-of-freedom of $\mathbf{t}$ is the dimension of the space of allowable values for this vector, which is: $$DF \equiv \dim \mathscr{T} \equiv \dim \{ \mathbf{t} = T(\mathbf{x}) | \mathbf{x} \in \mathbb{R}^n \}.$$ The initial random vector $\mathbf{x}$ has an allowable space of dimension $n$, so it has $n$ degrees of freedom. Often the function $T$ will reduce the dimension of the allowable space of outcomes, and so $\mathbf{t}$ may have a lower degrees-of-freedom than $\mathbf{x}$. For example, in an answer to a related question you can see this formal definition of the degrees-of-freedom being used to explain Bessel's correction in the sample variance formula. In that particular case, transforming an initial sample to obtain its deviations from the sample mean leads to a deviation vector that has $n-1$ degrees-of-freedom (i.e., it is a vector in an allowable space with dimension $n-1$). When you apply this formal definition to statistical problems, you will usually find that the imposition of a single "constraint" on the random vector (via a linear equation on that vector) reduces the dimension of its allowable values by one, and thus reduces the degrees-of-freedom by one. As such, you will find that the above formal definition corresponds with the informal explanations you have been given. In undergraduate courses on statistics, you will generally find a lot of hand-waving and informal explanation of degrees-of-freedom, often via analogies or examples. 
The reason for this is that the formal definition requires an understanding of vector algebra and the geometry of vector spaces, which may be lacking in introductory statistics courses at an undergraduate level.
33,296
How to understand degrees of freedom?
You can see the degrees of freedom as the number of observations minus the number of necessary relations among these observations. For example, if you have a sample of $n$ independent normally distributed observations $X_1,\dots,X_n$, the random variable $\sum_{i=1}^n (X_i-\overline{X}_n)^2\sim \chi^2_{n-1}$, where $\overline{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$. The degrees of freedom here are $n-1$ because there is one necessary relation among these observations $(\overline{X}_n = \frac{1}{n}\sum_{i=1}^n X_i)$. For more information see this
33,297
How to understand degrees of freedom?
The clearest "formal" definition of degrees-of-freedom is that it is the dimension of the space of allowable values for a random vector. This generally arises in a context where we have a sample vector $\mathbf{x} \in \mathbb{R}^n$ and we form a new random vector $\mathbf{t} = T(\mathbf{x})$ via the linear function $T$. Formally, the degrees-of-freedom of $\mathbf{t}$ is the dimension of the space of allowable values for this vector, which is: $$DF \equiv \dim \mathscr{T} \equiv \dim \{ \mathbf{t} = T(\mathbf{x}) | \mathbf{x} \in \mathbb{R}^n \}.$$ If we represent this linear transformation by the matrix transformation $T(\mathbf{x}) = \mathbf{T} \mathbf{x}$ then we have: $$\begin{aligned} DF &= \dim \{ \mathbf{t} = T(\mathbf{x}) | \mathbf{x} \in \mathbb{R}^n \} \\[6pt] &= \dim \{ \mathbf{T} \mathbf{x} | \mathbf{x} \in \mathbb{R}^n \} \\[6pt] &= \text{rank} \ \mathbf{T} \\[6pt] &= n - \dim \text{Ker} \ \mathbf{T}, \\[6pt] \end{aligned}$$ where the last step follows from the rank-nullity theorem. This means that when we transform $\mathbf{x}$ by the linear transformation $T$ we lose degrees-of-freedom equal to the dimension of the kernel (nullspace) of $\mathbf{T}$. In statistical problems, there is a close relationship between the eigenvalues of $\mathbf{T}$ and the loss of degrees-of-freedom from the transformation. Often the loss of degrees-of-freedom is equivalent to the number of zero eigenvalues in the transformation matrix $\mathbf{T}$. For example, in this answer we see that Bessel's correction to the sample variance, adjusting for the degrees-of-freedom of the vector of deviations from the mean, is closely related to the eigenvalues of the centering matrix. An identical result occurs in higher dimensions in linear regression analysis. In other statistical problems, similar relationships occur between the eigenvalues of the transformation matrix and the loss of degrees-of-freedom.
The above result also formalises the notion that one loses a degree-of-freedom for each "constraint" imposed on the observable vector of interest. Thus, in simple univariate sampling problems, when looking at the sample variance, one loses a degree-of-freedom from estimating the mean. In linear regression models, when looking at the MSE, one loses a degree-of-freedom for each model coefficient that was estimated.
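The rank/eigenvalue claim can be verified directly for the centering example mentioned in the answer. The map $\mathbf{x} \mapsto \mathbf{x} - \bar{x}\mathbf{1}$ is the centering matrix $\mathbf{C} = \mathbf{I} - \frac{1}{n}\mathbf{J}$, and the sketch below (my own) checks that its rank is $n-1$ with exactly one zero eigenvalue:

```python
import numpy as np

# The centering matrix C = I - (1/n) J implements x -> x - xbar.
n = 6
C = np.eye(n) - np.ones((n, n)) / n

# Its rank (the degrees-of-freedom of the deviation vector) is n - 1,
# and it has exactly one zero eigenvalue -- the one lost degree of freedom.
eigvals = np.linalg.eigvalsh(C)
print(np.linalg.matrix_rank(C))        # n - 1 = 5
print(np.isclose(eigvals, 0.0).sum())  # 1 zero eigenvalue
```

This is the rank-nullity theorem in action: rank $n-1$ plus a one-dimensional kernel (the constant vectors, which centering annihilates) sums to $n$.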
33,298
How to understand degrees of freedom?
An intuitive explanation of degrees of freedom is that they represent the number of independent pieces of information available in the data for estimating a parameter (i.e., unknown quantity) of interest. As an example, in a simple linear regression model of the form: $$ Y_i=\beta_0 + \beta_1\cdot X_i + \epsilon_i,\quad i=1,\ldots, n $$ where the $\epsilon_i$'s represent independent normally distributed error terms with mean 0 and standard deviation $\sigma$, we use 1 degree of freedom to estimate the intercept $\beta_0$ and 1 degree of freedom to estimate the slope $\beta_1$. Since we started out with $n$ observations and used up 2 degrees of freedom (i.e., two independent pieces of information), we are left with $n-2$ degrees of freedom (i.e., $n-2$ independent pieces of information) available for estimating the error standard deviation $\sigma$.
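The $n-2$ divisor can be checked by simulation. The sketch below (my own made-up data and coefficient values) fits the regression repeatedly and shows that dividing the residual sum of squares by the residual degrees of freedom $n-2$ gives an unbiased estimate of $\sigma^2$:

```python
import numpy as np

# Simulate the model y = 1 + 0.5 x + eps, eps ~ N(0, sigma^2), many times.
rng = np.random.default_rng(2)
n, sigma = 30, 2.0
x = rng.uniform(0, 10, size=n)
X = np.column_stack([np.ones(n), x])  # design matrix: [1, x]

est = np.empty(20_000)
for i in range(20_000):
    y = 1.0 + 0.5 * x + rng.normal(scale=sigma, size=n)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    est[i] = resid @ resid / (n - 2)  # divide by residual df, not n

print(est.mean())  # close to sigma**2 = 4.0
```

Dividing by $n$ instead would systematically underestimate $\sigma^2$, because two degrees of freedom were already spent estimating $\beta_0$ and $\beta_1$.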
33,299
How to understand degrees of freedom?
For me the first explanation I understood was: if you know some statistical value like the mean or the variance, how many of the data values do you need to know before you can determine the value of every data point? This is the same as aL3xa said, but without giving any data point a special role, and close to the third case given in that answer. In this way the same example would be: if you know the mean of the data, you need to know the values of all but one data point to know the values of all data points.
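This "all but one" observation is trivially computable. A tiny sketch (with made-up numbers) showing that the mean pins down the last value, so only $n-1$ values are free to vary:

```python
# Knowing the mean of n values pins down the last one.
data = [4.0, 7.0, 1.0, 8.0]
n = len(data)
mean = sum(data) / n

known = data[:-1]             # suppose we only see the first n - 1 values
last = n * mean - sum(known)  # the known mean forces the remaining value
print(last)  # 8.0
```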
33,300
How to understand degrees of freedom?
Think of it this way. Variances are additive when independent. For example, suppose we are throwing darts at a board and we measure the standard deviations of the $x$ and $y$ displacements from the exact center of the board. Then $V_{x,y}=V_x+V_y$. But $V_x=SD_x^2$, so if we take the square root of the $V_{x,y}$ formula, we get the distance formula for orthogonal coordinates, $SD_{x,y}=\sqrt{SD_x^2+SD_y^2}$. Now all we have to show is that standard deviation is a representative measure of displacement away from the center of the dart board. Since $SD_x=\sqrt{\dfrac{\sum_{i=1}^n(x_i-\bar{x})^2}{n-1}}$, we have a ready means of discussing df. Note that when $n=1$, then $x_1-\bar{x}=0$ and the ratio $\dfrac{\sum_{i=1}^n(x_i-\bar{x})^2}{n-1}\rightarrow \dfrac{0}{0}$ is undefined. In other words, there is no deviation to be had between one dart's $x$-coordinate and itself. The first time we have a deviation is for $n=2$, and there is only one of them, counted twice. That duplicated deviation is the squared distance between $x_1$ or $x_2$ and $\bar{x}=\dfrac{x_1+x_2}{2}$, because $\bar{x}$ is the midpoint between (i.e., the average of) $x_1$ and $x_2$. In general, for $n$ distances we remove 1 because $\bar{x}$ is dependent on all $n$ of those distances. Now, $n-1$ represents the degrees of freedom because, when divided into the sum of those square distances, it normalizes for the number of unique outcomes to yield an expected square distance.
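The bias that the $n-1$ divisor corrects is easy to see by simulation. The following sketch (my own, with standard normal "dart" coordinates so the true variance is 1) compares the two divisors:

```python
import numpy as np

# Many samples of size n from N(0, 1); sum squared deviations from each
# sample mean. With only n - 1 independent deviations, dividing by n
# underestimates the variance; dividing by n - 1 corrects the bias.
rng = np.random.default_rng(3)
n = 5
x = rng.normal(size=(100_000, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

print(ss.mean() / n)        # biased: close to (n - 1)/n = 0.8
print(ss.mean() / (n - 1))  # unbiased: close to the true variance, 1.0
```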