Is the idea of a bias-variance "tradeoff" a false construct?
First of all, the bias-variance tradeoff (BVT) can be viewed not only in terms of parameter estimators but also in terms of prediction. In machine learning, the BVT usually appears on the prediction side, specifically in the minimization of the Expected Prediction Error (EPE). It is in this sense that the BVT was treated and derived in the discussion you linked above. Now you ask: Who's to say that if you have an estimator $\hat{f}$ of $Y = f(X) + \epsilon$ that you couldn't find an estimator $\hat{g}$ that not only has lower expected squared error, but has lower bias and variance than $\hat{f}$ as well? The BVT does not exclude this possibility. Classical statistics and econometrics textbooks focus mainly on unbiased estimators (or consistent ones, but the difference is not crucial here). What the BVT tells you is that even if you find the efficient estimator among all unbiased ones, it remains possible that some biased estimator achieves a lower $MSE$. I discussed this possibility here (Mean squared error of OLS smaller than Ridge?), even though that answer was not much appreciated. In general, if your goal is prediction, EPE minimization is the core task, while in explanatory models the core task is bias reduction. In mathematical terms, you have to minimize two related but different loss functions, and the tradeoff comes from that. This discussion is about exactly that point: What is the relationship between minimizing prediction error versus parameter estimation error? Moreover, what I said above relates mainly to linear models. In the machine learning literature, it seems to me that the BVT, in the form that made it famous, is primarily related to the interpretability-vs-flexibility tradeoff. In general, more flexible models have lower bias but higher variance; for less flexible models the opposite is true (lower variance and higher bias). Among the more flexible alternatives are neural networks; among the less flexible are linear regressions.
Doesn't this depend completely on the expected squared error being constant? No. Across alternative specifications (flexibility levels), the test MSE (= EPE) is far from constant. Depending on the true model (the true functional form) and the amount of data we have for training, we can find the flexibility level (specification) that minimizes the EPE. The graph taken from An Introduction to Statistical Learning with Applications in R by James, Witten, Hastie, and Tibshirani (p. 36) gives three examples. In Section 2.1.3 you can find a more exhaustive explanation of this last point.
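To see concretely that the test MSE is far from constant across flexibility levels, here is a small simulation in the spirit of that graph (a sketch of my own: the true function $\sin(2x)$, the noise level, and the polynomial degrees are all arbitrary choices, not from the book). The training MSE keeps falling as the degree grows, while the test MSE traces the familiar U shape:

```python
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: np.sin(2 * x)          # assumed true functional form
x_train = rng.uniform(0, 3, 50)
y_train = true_f(x_train) + rng.normal(0, 0.3, 50)
x_test = rng.uniform(0, 3, 2000)
y_test = true_f(x_test) + rng.normal(0, 0.3, 2000)

results = {}
for degree in (1, 3, 5, 10, 20):          # increasing flexibility
    coef = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coef, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coef, x_test) - y_test) ** 2)
    results[degree] = (train_mse, test_mse)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The training MSE is non-increasing in the degree (the models are nested), whereas the test MSE improves only up to the flexibility level matched to the true function and the noise.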
Is the idea of a bias-variance "tradeoff" a false construct?
Who's to say that if you have an estimator $\hat{f}$ of $Y = f(X) + \epsilon$ that you couldn't find an estimator $\hat{g}$ that not only has lower expected squared error, but has lower bias and variance than $\hat{f}$ as well? A similar question was Bias / variance tradeoff math. In that question, it was asked whether bias and variance could not be decreased simultaneously. Often the starting point is zero bias, and you cannot lower the bias further. So that is normally the trade-off: whether some alternative biased estimator will have lower variance, and lower overall error, than an unbiased one. Sure, if you have some bad estimator with high bias and high variance, then there is no trade-off and you can improve both. But that is not the typical situation you find in practice. Normally you are considering a range of bias values, and for each bias value the estimator has the lowest variance possible for that bias (at least the lowest that you know of, or the lowest that is practical to consider). Below is the image from the linked question. It shows the bias-variance tradeoff for scaling the sample mean (as a predictor for the population mean). In the right panel, the image is split in two. If you scale with a factor above 1, then you have both increased variance and increased bias. That would indeed be silly, and with such a bad estimator there is no trade-off, because you can decrease bias and variance at the same time. If you scale with a factor below 1, then you do have a trade-off: decreasing bias means increasing variance and vice versa. Within this particular set of biased estimators, you can say that you can't find an estimator that lowers not only the variance but also the bias. (Sure, maybe you can find an even better estimator with a different type of bias. Indeed, it may be difficult to prove that a particular biased estimator is the lowest-variance estimator. Often, nobody is to say that it can't be improved.)
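The trade-off for the scaled sample mean can be written down in closed form. In this sketch (the values $\mu = 1$, $\sigma = 1$, $n = 10$ are arbitrary assumptions of mine), the estimator $T_c = c\,\bar{x}$ has squared bias $(c-1)^2\mu^2$ and variance $c^2\sigma^2/n$, and the MSE-minimizing scaling factor comes out below 1:

```python
import numpy as np

mu, sigma, n = 1.0, 1.0, 10   # illustrative assumptions

def bias2(c):                 # squared bias of T_c = c * xbar
    return ((c - 1) * mu) ** 2

def var(c):                   # variance of T_c = c * xbar
    return c ** 2 * sigma ** 2 / n

def mse(c):
    return bias2(c) + var(c)

c_star = mu**2 / (mu**2 + sigma**2 / n)   # closed-form MSE minimizer
print(f"optimal c = {c_star:.4f}")        # below 1: a biased estimator wins
print(f"MSE at c = 1 (unbiased): {mse(1.0):.4f}")
print(f"MSE at c = c*:           {mse(c_star):.4f}")

# For c > 1, both bias and variance exceed their values at c = 1 (no trade-off):
assert bias2(1.2) > bias2(1.0) and var(1.2) > var(1.0)
```

This matches the split in the image: above $c = 1$ both components worsen, while below $c = 1$ moving $c$ trades one against the other.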
Empirical Risk Minimization: Rewriting the expected loss using Bayes' rule and the definition of expectation
I'll assume continuous distributions here but, if any variable is discrete, simply replace the corresponding integral with a sum.

Recall that the expectation of a function $f$ with respect to a continuous distribution $p(z)$ is:
$$E_{z \sim p(z)}\big[f(z)\big] = \int_\mathcal{Z} p(z) f(z) dz$$

The objective function in equation 2.1 can therefore be written as an integral:
$$E_{(x,y) \sim P_t^{X,Y}} \big[ \ell(x, y, \theta_t) \big] = \int_\mathcal{X} \int_\mathcal{Y} P_t(x,y) \ell(x,y,\theta_t) \, dx \, dy$$

We can multiply by one without changing anything:
$$= \int_\mathcal{X} \int_\mathcal{Y} \frac{P_s(x,y)}{P_s(x,y)} P_t(x,y) \ell(x,y,\theta_t) \, dx \, dy$$

Using the definition of expectation again, the above integral can be seen as an expectation w.r.t. $P_s(x,y)$:
$$= E_{(x,y) \sim P_s^{X,Y}} \left[ \frac{P_t(x,y)}{P_s(x,y)} \ell(x,y,\theta_t) \right]$$

This is the objective function in equation 2.2, so the optimization problems in equations 2.1 and 2.2 are equivalent. Note that Bayes' rule wasn't needed here. But, based on the text you quoted, it sounds like they might be about to use it to move to equation 2.3.
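The equivalence can also be sanity-checked by Monte Carlo. In this illustration (my own example: the Gaussian choices for $P_t$ and $P_s$ and the quadratic stand-in for $\ell$ are assumptions, and $P_s$ must be positive wherever $P_t$ is), both sides estimate the same expectation:

```python
import numpy as np

rng = np.random.default_rng(1)

def normal_pdf(x, mu, sd):
    return np.exp(-(x - mu) ** 2 / (2 * sd ** 2)) / (sd * np.sqrt(2 * np.pi))

loss = lambda x: x ** 2            # stand-in for l(x, y, theta_t)
N = 200_000

# Direct expectation under the target P_t = N(1, 1): E[X^2] = 1^2 + 1 = 2
direct = loss(rng.normal(1.0, 1.0, N)).mean()

# Same expectation rewritten under the source P_s = N(0, 2), with the
# density ratio P_t / P_s as an importance weight
x = rng.normal(0.0, 2.0, N)
weighted = (normal_pdf(x, 1.0, 1.0) / normal_pdf(x, 0.0, 2.0) * loss(x)).mean()

print(direct, weighted)            # both close to 2
```

The heavier-tailed source here keeps the weights bounded; with a source lighter-tailed than the target, the weighted estimator's variance can blow up even though the identity still holds.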
Deriving posterior update equation in a Variational Bayes inference
Joint distribution. Using the graphical model you provided, we get the following joint distribution over all variables of interest, conditioning on the model parameters: $$p(\Theta, \mathbf{v} | a_0, b_0, c_0, d_0, \left\{e_0^s, f_0^s \right\}_{s = 0,1}, \left \{ e_0^{s0}, f_0^{s0}, e_0^{s1}, f_0^{s1} \right\}_{s=2:L})$$ In more detail, this gives the provisional joint distribution: $$\begin{align} p(\Theta, \mathbf{v}) = p(\alpha_0 | a_0, b_0) p(\mathbf{v} | \alpha_0, \mathbf{w}, \mathbf{z}) \prod^L_{s=0} \left \{ p(\alpha_s | c_0, d_0) p(\pi^s | e_0^s, f_0^s) p(\pi^{s0} | e_0^{s0}, f_0^{s0}) p(\pi^{s1} | e_0^{s1}, f_0^{s1}) \prod^{I(s)}_{i=1} \prod^K_{k=1} p(z_{k, s, i} | \pi^s, \pi^{s0}, \pi^{s1}, {z_{pa(k, s, i)}}) p(w_{k,s,i} | \alpha_s) \right\} \tag{1} \\ \end{align}$$ Now the above is rough: you will need to tweak the $s$-indexing on $\pi^s, \pi^{s0}, \pi^{s1}$ over which $\prod^L_{s}$ operates to better comply with the written details, and I'm not entirely sure what is going on with $z_{pa(k, s, i)}$, which is dotted in the plate notation. Variational distribution. 
Now the variational distribution on all latent variables of interest is specified by the following, with conditioning on the variational parameters: $$q \left(\boldsymbol{\pi}, \alpha_0, \{ \alpha_s \}_{s=0:L}, \mathbf{w} , \mathbf{z} \space | \space a, b, \{c_s, d_s \}_{s=0:L},\{e^s, f^s \}_{s = 0,1}, \{ e^{s0}, f^{s0}, e^{s1}, f^{s1} \}_{s=2:L}, \{ \mu_{k, s, i}, \sigma^2_{k,s,i}, p_{k,s,i} \}_{s=0:L, i=1:I(s), k=1:K} \right) \\ $$ We then have: $$q(\boldsymbol{\pi}, \alpha_0, \left\{ \alpha_s \right\}_{s=0:L}, \mathbf{w} , \mathbf{z}) = q(\alpha_0 | a, b)q \left(\boldsymbol{\pi} | \left\{e^s, f^s \right\}_{s = 0,1},\left \{e^{s0}, f^{s0}, e^{s1}, f^{s1} \right\}_{s=2:L} \right) \prod^L_{s=0} \left \{ q(\alpha_s | c_s, d_s) \prod^{I(s)}_{i=1} \prod^K_{k=1} q(w_{k,s,i} | \mu_{k,s,i}, \sigma^2_{k, s, i}) q(z_{k, s, i} | p_{k, s, i}) \right\} \tag{2} $$ The mean-field approximation is what allows us to specify the variational distribution $q(\boldsymbol{\pi}, ..., \mathbf{z})$ as a product of individual variational distributions, as we have done above; it can be thought of as an independence assumption between the latent variables. Evidence Lower Bound (ELBO). Consider computing the log-marginal likelihood of the observed data $\mathbf{v}$, that is, $\ln p(\mathbf{v} | a_0, ..., \left \{ e_0^{s0}, f_0^{s0}, e_0^{s1}, f_0^{s1} \right\}_{s=2:L})$. As computing this exactly is likely intractable (hence the need for approximate inference methods such as MCMC or variational inference), we instead consider a lower bound on the log-marginal likelihood, known as the evidence lower bound (ELBO). Writing the log-marginal likelihood as the log of the joint likelihood with all the latent variables $\Theta$ integrated/summed out, we have: $$\begin{align}\ln p(\mathbf{v} | ...) &= \ln \int p(\Theta, \mathbf{v} | ...) \, d\Theta \\ &= \ln \int q(\boldsymbol{\pi}, ..., \mathbf{z}) \cdot \frac{p(\Theta, \mathbf{v})}{q(\boldsymbol{\pi}, ..., \mathbf{z})} d \Theta \\ &= \ln \mathbb{E}_{q(\boldsymbol{\pi}, ..., \mathbf{z})}\left[\frac{p(\Theta, \mathbf{v})}{q(\boldsymbol{\pi}, ..., \mathbf{z})} \right] \\ &\geq \mathbb{E}_q \left[\ln \frac{p(\Theta, \mathbf{v})}{q(\boldsymbol{\pi}, ..., \mathbf{z})} \right] \\ &= \mathbb{E}_q[ \ln p(\Theta, \mathbf{v})] - \mathbb{E}_q[\ln q(\boldsymbol{\pi}, ..., \mathbf{z})] \end{align}$$ Here I have used the notation $\int d\Theta$ as shorthand for the appropriate combination of multiple integrals/summations associated with the continuous/discrete latent variables in $\Theta$. The reasoning in going from the 3rd to the 4th line is that $g(u) = \ln(u)$ is a concave function, so by Jensen's inequality we have $\ln \mathbb{E}_q[U] \geq \mathbb{E}_q[\ln (U)]$. Hence the ELBO, which I denote $L$ from here on, will be the following: $$L = \mathbb{E}_{q}[\ln p(\Theta, \mathbf{v})] - \mathbb{E}_q[\ln q(\boldsymbol{\pi}, \alpha_0, \left\{ \alpha_s \right\}_{s=0:L}, \mathbf{w} , \mathbf{z})] \tag{3} $$ where both expectations are taken with respect to $q(\boldsymbol{\pi}, \alpha_0, \left\{ \alpha_s \right\}_{s=0:L}, \mathbf{w} , \mathbf{z})$. The entire ELBO is computed by substituting $(1)$ and $(2)$ into $(3)$. You will additionally need to insert the specific functional forms of the model probability distributions, e.g. $p(\alpha_0 | a_0, b_0) = \text{Gamma}(\alpha_0 | a_0, b_0) = \frac{1}{b_0^{a_0} \Gamma(a_0)} \alpha_0^{a_0 - 1} e^{-\alpha_0 / b_0}$, and of the variational distributions, e.g. $q(w_{k, s, i} | \mu_{k, s, i}, \sigma^2_{k, s, i}) = \mathcal{N}(w_{k, s, i} | \mu_{k, s, i}, \sigma^2_{k, s, i})$. This is something that will require at least a few pages of algebra to fully derive. 
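As a toy numeric illustration of the Jensen step (my own example, unrelated to the paper's model): in a Beta-Bernoulli model with a flat prior, the log-marginal likelihood is available in closed form, so one can check that the ELBO sits strictly below it for a generic $q$ and attains it when $q$ is the exact posterior:

```python
import numpy as np
from math import lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

k, m = 7, 10                                  # 7 successes in 10 Bernoulli trials
log_evidence = log_beta(1 + k, 1 + m - k) - log_beta(1, 1)   # flat Beta(1,1) prior

rng = np.random.default_rng(0)

def elbo(a, b, n=200_000):
    """Monte Carlo ELBO for the variational choice q(theta) = Beta(a, b)."""
    th = rng.beta(a, b, n)                                   # theta ~ q
    log_joint = k * np.log(th) + (m - k) * np.log(1 - th)    # log p(x, theta), flat prior
    log_q = (a - 1) * np.log(th) + (b - 1) * np.log(1 - th) - log_beta(a, b)
    return float(np.mean(log_joint - log_q))

print(log_evidence)
print(elbo(2.0, 2.0))               # generic q: strictly below the evidence
print(elbo(1 + k, 1 + m - k))       # exact posterior Beta(8, 4): attains it
```

The gap between the two quantities is exactly $\text{KL}(q \,\|\, p(\theta|x))$, which is why maximising the ELBO over the variational parameters pushes $q$ toward the posterior.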
If you want to derive all the update equations, you will need to plug in the functional forms of all the distributions specified by the model, and of all the variational distributions. Towards updates for $p_{k,s,i}$ (details of $(*)$ still need work). In order to get update equations, we need to maximise the ELBO $L$ with respect to the variational parameters of interest, one of which is $p_{k,s,i}$. As the ELBO is cumbersome to write out in full, and we are currently only interested in updates on $p_{k, s, i}$, we selectively specify the parts of the ELBO we need. Now, the only way the variational parameter $p_{k, s, i}$ of the variational Bernoulli on $z_{k, s, i}$ can appear in the ELBO is through terms of the form $\mathbb{E}_q[\ln f(z_{k, s, i})]$, that is, expectations of log terms containing $z_{k,s,i}$ as arguments (where $\ln f(z_{k,s,i})$ may be a log-model-probability of the form $\ln p(\cdot)$ or a log-variational-probability of the form $\ln q(\cdot)$). To that end, we need to isolate the terms in $(3)$ containing $z_{k,s,i}$. The only such terms are $\mathbb{E}_q[\ln p(\mathbf{v} | \alpha_0 , \mathbf{w}, \mathbf{z})]$, $\mathbb{E}_q[\sum_s \sum_i \sum_k \ln p(z_{k,s,i} | \pi^s, ..., z_{pa(k,s,i)})]$, and $\mathbb{E}_q[\sum_s \sum_i \sum_k \ln q(z_{k,s,i} | p_{k,s,i})]$. We denote the part of the ELBO in which the variational parameter $p_{k, s, i}$ appears as $L_{[p_{k, s, i}]}$, yielding: $$L_{[p_{k, s, i}]} = \underbrace{\mathbb{E}_q[\ln p(\mathbf{v} | \alpha_0, \mathbf{w}, \mathbf{z})]}_{(*)} + \mathbb{E}_q \left[\sum^L_{s=0} \sum^{I(s)}_{i=1} \sum^K_{k=1} \ln p(z_{k, s, i} | \pi^s, \pi^{s0}, \pi^{s1}, z_{pa(k, s, i)})\right] - \mathbb{E}_q \left[\sum^L_{s=0} \sum^{I(s)}_{i=1} \sum^K_{k=1} \ln q(z_{k, s, i} | p_{k, s, i}) \right]$$ Now, I have managed to squeeze out something vaguely sensible resembling the contribution of the 1st expectation $(*)$ to the update, but it's not quite there yet. 
But here is what you do for the 2nd and 3rd expectations. For the 2nd expectation, we have: $$\begin{align} \mathbb{E}_q[\ln p(z_{k, s, i} | \pi_{k, s, i})] = &\space \mathbb{E}_q[\ln ({\pi_{k, s, i}}^{z_{k, s, i}} (1 - \pi_{k, s, i})^{1 - z_{k, s, i}}) ]\\ = &\space \mathbb{E}_q[z_{k, s, i} \ln \pi_{k, s, i} + (1 - z_{k,s,i}) \ln (1 - \pi_{k, s, i})] \\ = &\space \mathbb{E}_{q(z_{k, s, i} | p_{k, s, i})}[z_{k, s, i} | p_{k,s, i}] \cdot \mathbb{E}_{q(\pi_{k, s, i})}[\ln \pi_{k, s, i} ] \\ &+ \mathbb{E}_{q(z_{k, s, i} | p_{k, s, i})}[1- z_{k, s, i} | p_{k,s, i}] \cdot \mathbb{E}_{q(\pi_{k, s, i})}[\ln (1- \pi_{k, s, i}) ] \\ = &\space p_{k,s,i} \langle\ln \pi_{k, s, i} \rangle + (1 - p_{k, s, i}) \langle\ln (1 - \pi_{k, s, i}) \rangle \end{align}$$ In the 3rd equality we have used the fact that the expectation of the product of $z_{k, s, i}$ and $\ln \pi_{k, s, i}$ is the product of the expectations. This is because of the mean-field assumption: the two are independent once we have conditioned on the variational parameters, $p_{k, s, i}$ for $z_{k, s, i}$ and $e^s, f^s, e^{s0}, f^{s0}, e^{s1}, f^{s1}$ for $\pi_{k,s,i}$. 
For the 3rd expectation, we have: $$\begin{align} \mathbb{E}_q[\ln q(z_{k, s, i} | p_{k, s, i})] &= \mathbb{E}_q[z_{k, s, i} \ln p_{k, s, i} + (1 - z_{k,s,i}) \ln (1 - p_{k, s, i})] \\ &= \mathbb{E}_{q(z_{k, s, i} | p_{k, s, i})}[z_{k, s, i} | p_{k,s, i}] \ln p_{k, s, i} + \mathbb{E}_{q(z_{k, s, i} | p_{k, s, i})}[1- z_{k, s, i} | p_{k,s, i}] \ln (1- p_{k, s, i}) \\ &= p_{k, s, i} \ln p_{k, s, i} + (1 - p_{k, s, i}) \ln (1 - p_{k, s,i}) \end{align}$$ Putting this all together, with $(*)$ still to be completed, we have: \begin{align}L_{[p_{k,s,i}]} =& \underbrace{\mathbb{E}_q[\ln p(\mathbf{v} | \alpha_0, \mathbf{w}, \mathbf{z})]}_{(*)} \\ &+ p_{k,s,i} \langle\ln \pi_{k, s, i} \rangle + (1 - p_{k, s, i}) \langle\ln (1 - \pi_{k, s, i}) \rangle \\ &- p_{k, s, i} \ln p_{k, s, i} - (1 - p_{k, s, i}) \ln (1 - p_{k, s,i}) \end{align} After simplifying $(*)$, which I can't quite get exactly, you then maximise w.r.t. $p_{k, s, i}$ by computing partial derivatives. After collecting terms, you rearrange for $p_{k, s, i}$ to get your update equation. You should now be able to see how we get the $\exp$ and $\langle \ln \pi_{k, s, i} \rangle$ terms in the update equation for $p_{k, s, i}$. To derive the rest of the update equations, you need to follow a similar strategy for each variational parameter. Addressing the 1st expectation $(*)$. Here is the working for the 1st expectation, which I can't quite complete, as I don't have the context-specific knowledge (of the paper) to appropriately deal with the dimensionality of $\mathbf{z}$ and $\mathbf{w}$. I will go as far as I can before stating the specifics that prevent me from proceeding further. Perhaps you can assist on that front. 
$$\begin{align} \mathbb{E}_q[\ln p(\mathbf{v} | \alpha_0, \mathbf{w}, \mathbf{z})] = &\space \mathbb{E}_q \left[\ln \frac{1}{(2 \pi)^{n/2} | \alpha_0^{-1} \mathbf{I} |^{1/2}} \exp \left( -\frac{1}{2}(\mathbf{v} - \boldsymbol{\Phi} \boldsymbol{\theta})^T(\alpha_0^{-1} \mathbf{I})^{-1}(\mathbf{v} - \boldsymbol{\Phi} \boldsymbol{\theta}) \right) \right] \\ = &\space \mathbb{E}_q\left[-\frac{\alpha_0}{2} (\mathbf{v}^T\mathbf{v} - 2 \mathbf{v}^T \boldsymbol{\Phi} \boldsymbol{\theta} + \boldsymbol{\theta}^T \boldsymbol{\Phi}^T \boldsymbol{\Phi} \boldsymbol{\theta}) - \frac{1}{2} \ln((2 \pi)^n |\alpha_0^{-1} \mathbf{I}|) \ \right] \\ = &\space \mathbb{E}_q\left[-\frac{\alpha_0}{2} (\mathbf{v}^T\mathbf{v} - 2 \mathbf{v}^T \boldsymbol{\Phi} (\mathbf{w} \odot \mathbf{z}) + (\mathbf{w} \odot \mathbf{z})^T \boldsymbol{\Phi}^T \boldsymbol{\Phi} (\mathbf{w} \odot \mathbf{z})) - \frac{n}{2} \ln(2 \pi) + \frac{n}{2} \ln (\alpha_0) \right] \\ = &\space -\frac{\langle \alpha_0 \rangle}{2} \left(\mathbf{v}^T\mathbf{v} -2 \mathbf{v}^T \boldsymbol{\Phi} (\langle \mathbf{w} \rangle \odot \langle \mathbf{z} \rangle) + (\langle \mathbf{w} \rangle^T \odot \langle \mathbf{z} \rangle^T) \boldsymbol{\Phi}^T \boldsymbol{\Phi} (\langle \mathbf{w} \rangle \odot \langle \mathbf{z} \rangle)\right) \\ &\space- \frac{n}{2} \ln(2 \pi) + \frac{n}{2} \langle \ln(\alpha_0)\rangle \\ = &\space -\frac{\langle \alpha_0 \rangle}{2} \left(\mathbf{v}^T\mathbf{v} -2 \mathbf{v}^T \boldsymbol{\Phi} (\langle \mathbf{w} \rangle \odot \mathbf{p}) + (\langle \mathbf{w} \rangle^T \odot \mathbf{p}^T) \boldsymbol{\Phi}^T \boldsymbol{\Phi} (\langle \mathbf{w} \rangle \odot \mathbf{p} )\right) \\ &\space- \frac{n}{2} \ln(2 \pi) + \frac{n}{2} \langle \ln(\alpha_0)\rangle \\ \end{align}$$ In going from the 2nd equality to the 3rd equality I have used the fact that the determinant of a diagonal matrix is the product of its entries, so $|\alpha_0^{-1} \mathbf{I}| = \alpha_0^{-n}$. 
In going from the 3rd equality to the 4th equality, I have treated $\mathbf{v}$ and $\mathbf{\Phi}$ as constants; they do not appear in the variational distribution over which you are taking expectations. The expectations $\langle \alpha_0 \rangle$ and $\langle \ln( \alpha_0 ) \rangle$ are with respect to the variational distribution $q(\alpha_0 | a, b)$; the expectations $\langle \mathbf{w} \rangle$ and $\langle \mathbf{z} \rangle$ are with respect to $\prod_s \prod_i \prod_k q (w_{k, s, i} | \mu_{k,s,i}, \sigma^2_{k, s, i})$ and $\prod_s \prod_i \prod_k q (z_{k, s, i} | p_{k,s,i})$ respectively. And I have again used the mean-field approximation to justify treating the expectation of a product of these latent variables under the variational distribution as the product of expectations. For the purposes of computing the update equations with respect to $p_{k, s, i}$, this final equality should be viewed with a focus on all terms containing $p_{k, s, i}$. Now, what I've isolated as preventing me from going further are details concerning the dimensionality of $\mathbf{w}$, $\mathbf{z}$, and what I've denoted as $\mathbf{p}$. In going from the 4th to the 5th equality, I have written $\langle \mathbf{z} \rangle = \mathbf{p}$ to convey the fact that $\langle z_{k, s, i} \rangle = p_{k, s, i}$, which allows me to defer specifying details about how the dimensionality/indexing works. You will notice that the update equation for $p_{k, s, i}$ in the paper is much more precise about how this dimensionality is treated, and that is something I will not be able to assist on without more intimate knowledge of the paper. In particular, the paper states under "B. Tree-Structured Bayesian Compressive Sensing" that "$\mathbf{w} \in \mathbb{R}^N$ and $\mathbf{z} \in \mathbb{R}^N$", and that "$w_i \sim \mathcal{N}(0, \alpha_i^{-1})$ and $z_i \sim \text{Bernoulli}(\pi_i)$". 
However, the model and variational distribution specify that $w_{k, s, i}$, $z_{k, s, i}$ and $p_{k, s, i}$ are indexed by $k, s, i$, and not just by $i$. Hence, what needs to be accounted for to get the precise update equation for $p_{k,s,i}$ is how the "block" indexing $k = 1, ..., K$ and "level" indexing $s = 1, ..., L$ are treated.
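A side note on evaluating the $\langle \ln \pi \rangle$ terms that appear in the updates: when the variational factor is $\pi \sim \text{Beta}(e, f)$, the standard identity is $\langle \ln \pi \rangle = \psi(e) - \psi(e+f)$, with $\psi$ the digamma function. A quick Monte Carlo check (the values $e = 2$, $f = 5$ are arbitrary):

```python
import numpy as np
from scipy.special import digamma

e, f = 2.0, 5.0                     # arbitrary Beta variational parameters
analytic = digamma(e) - digamma(e + f)   # <ln pi> under Beta(e, f)

rng = np.random.default_rng(0)
mc = float(np.log(rng.beta(e, f, 1_000_000)).mean())
print(analytic, mc)                 # agree to about 3 decimal places
```

The same identity with $e$ and $f$ swapped gives $\langle \ln (1 - \pi) \rangle = \psi(f) - \psi(e+f)$, which is how both bracketed terms in the $p_{k,s,i}$ update are evaluated in closed form.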
Deriving posterior update equation in a Variational Bayes inference
Joint distribution. Using the graphical model you provided, we get the following joint distribution over all variables of interest, conditioning on model parameters. $$p(\Theta, \mathbf{v} | a_0, b_0,
Deriving posterior update equation in a Variational Bayes inference Joint distribution. Using the graphical model you provided, we get the following joint distribution over all variables of interest, conditioning on model parameters. $$p(\Theta, \mathbf{v} | a_0, b_0, c_0, d_0, \left\{e_0^s, f_0^s \right\}_{s = 0,1}, \left \{ e_0^{s0}, f_0^{s0}, e_0^{s1}, f_0^{s1} \right\}_{s=2:L})$$ In more detail, this gives the provisional joint distribution: $$\begin{align} p(\Theta, \mathbf{v}) = p(\alpha_0 | a_0, b_0) p(\mathbf{v} | \alpha_0 \mathbf{w}, \mathbf{z}) \prod^L_{s=0} \left \{ p(a_s | c_0, d_0) p(\pi^s | e_0^s, f_0^s) p(\pi^{s0} | e_0^{s0}, f_0^{s0}) p(\pi^{s1} | e_0^{s1}, f_0^{s1}) \prod^{I(s)}_{i=1} \prod^K_{k=1} p(z_{k, s, i} | \pi^s, \pi^{s0}, \pi^{s1}, {z_{pa(k, s, i)}}) p(w_{k,s,i} | \alpha_s) \right\} \tag{1} \\ \end{align}$$ Now the above is rough - you will need to tweak the $s$-indexing on $\pi^s, \pi^{s0}, \pi^{s1}$ over which $\prod^L_{s}$ operates to better comply with the written details, and I'm not entirely sure about what is going on with $z_{pa(k, s, i)}$, which is dotted in the plate notation. Variational distribution. 
Now the variational distribution on all latent variables of interest is specified by the following, with conditioning on the variational parameters: $$q \left(\boldsymbol{\pi}, \alpha_0, \{ \alpha_s \}_{s=0:L}, \mathbf{w} , \mathbf{z} \space | \space a, b, \{c_s, d_s \}_{s=0:L},\{e^s, f^s \}_{s = 0,1}, \{ e^{s0}, f^{s0}, e^{s1}, f^{s1} \}_{s=2:L}, \{ \mu_{k, s, i}, \sigma^2_{k,s,i}, p_{k,s,i} \}_{s=0:L, i=1:I(s), k=1:K} \right) \\ $$ We then have: $$q(\boldsymbol{\pi}, \alpha_0, \left\{ \alpha_s \right\}_{s=0:L}, \mathbf{w} , \mathbf{z}) = q(\alpha_0 | a, b)q \left(\boldsymbol{\pi} | \left\{e^s, f^s \right\}_{s = 0,1},\left \{e^{s0}, f^{s0}, e^{s1}, f^{s1} \right\}_{s=2:L} \right) \prod^L_{s=0} \left \{ q(\alpha_s | c_s, d_s) \prod^{I(s)}_{i=1} \prod^K_{k=1} q(w_{k,s,i} | \mu_{k,s,i}, \sigma^2_{k, s, i}) q(z_{k, s, i} | p_{k, s, i}) \right\} \tag{2} $$ The mean-field approximation implies that we can specify the variational distribution $q(\boldsymbol{\pi}, ..., \mathbf{z})$ as a product of individual variational distributions as we have done so above - and it can be thought of as an independence assumption between latent variables specified. Evidence Lower Bound (ELBO). Consider computing the log-marginal likelihood of the observed data $\mathbf{v}$, that is, $\ln p(\mathbf{v} | a_0, ..., \left \{ e_0^{s0}, f_0^{s0}, e_0^{s1}, f_0^{s1} \right\}_{s=2:L})$. As computing this exactly is likely intractable (hence the need for approximate inference methods such as MCMC/variational inference), we instead consider a lower bound on the log-marginal likelihood, known as the evidence lower bound (ELBO). Writing the log-marginal likelihood as the log of the joint likelihood having integrated/summed out all the latent variables $\Theta$, we have: $$\begin{align}\ln p(\mathbf{v} | ...) &= \ln \int p(\Theta, \mathbf{v}) | ...) 
d\Theta \\ &= \ln \int q(\boldsymbol{\pi}, ..., \mathbf{z}) \cdot \frac{p(\Theta, \mathbf{v})}{q(\boldsymbol{\pi}, ..., \mathbf{z})} d \Theta \\ &= \ln \mathbb{E}_{q(\boldsymbol{\pi}, ..., \mathbf{z})}\left[\frac{p(\Theta, \mathbf{v})}{q(\boldsymbol{\pi}, ..., \mathbf{z})} \right] \\ &\geq \mathbb{E}_q \left[\ln \frac{p(\Theta, \mathbf{v})}{q(\boldsymbol{\pi}, ..., \mathbf{z})} \right] \\ &= \mathbb{E}_q[ \ln p(\Theta, \mathbf{v})] - \mathbb{E}_q[\ln q(\boldsymbol{\pi}, ..., \mathbf{z})] \end{align}$$ Where I have used the notation $\int d\Theta$ as shorthand for the appropriate combination of multiple integrals/summations associated with the continuous/discrete latent variables in $\Theta$. The reasoning in going from the 3rd to the 4th line is that $g(u) = \ln(u)$ is a concave function, so by Jensen's inequality, we have $\ln \mathbb{E}_q[U] \geq \mathbb{E}_q[\ln (U)]$. Hence the ELBO, which I from hereon denote $L$, will be the following: $$L = \mathbb{E}_{q}[\ln p(\Theta, \mathbf{v})] - \mathbb{E}_q[\ln q(\boldsymbol{\pi}, \alpha_0, \left\{ \alpha_s \right\}_{s=0:L}, \mathbf{w} , \mathbf{z})] \tag{3} $$ Where both expectations are taken with respect to $q(\boldsymbol{\pi}, \alpha_0, \left\{ \alpha_s \right\}_{s=0:L}, \mathbf{w} , \mathbf{z})$. The entire ELBO is computed by substituting $(1)$ and $(2)$ into $(3)$. You will additionally need to insert the specific functional forms of the model probability distributions e.g. $p(\alpha_0 | a_0, b_0) = \text{Gamma}(\alpha_0 | a_0, b_0) = \frac{1}{b_0^{a_0} \Gamma(a_0)} \alpha_0^{a_0 - 1} e^{-\alpha_0 / b_0}$ and variational distributions e.g. $q(w_{k, s, i} | \mu_{k, s, i}, \sigma^2_{k, s, i}) = \mathcal{N}(w_{k, s, i} | \mu_{k, s, i}, \sigma^2_{k, s, i})$ etc. This is something that will require at least a few pages of algebra to fully derive. 
If you want to derive all the update equations, you will need to plug in the functional forms of all the distributions specified by the model, and of all the variational distributions. Towards updates for $p_{k,s,i}$. (needs work on details of $(*)$). In order to get update equations we need to maximise the ELBO $L$ with respect to the variational parameters of interest, one of which is $p_{k,s,i}$. As the ELBO is cumbersome to write out in full, and we are currently only interested in updates on $p_{k, s, i}$, we selectively specify the parts of the ELBO we need. Now the only way in which the variational parameter $p_{k, s, i}$ of the variational Bernoulli on $z_{k, s, i}$ can appear in the ELBO is when we compute $\mathbb{E}_q[\ln f(z_{k, s, i})]$, that is, expectations of log terms containing $z_{k,s,i}$ as arguments (where $\ln f(z_{k,s,i})$ may be log-model-probabilities of the form $\ln p(\cdot)$ or log-variational probabilities of the form $\ln q(\cdot)$). To that end, we need to isolate terms in $(3)$ containing $z_{k,s,i}$. The only terms that will contain $z_{k,s,i}$ in $(3)$ are $\mathbb{E}_q[\ln p(\mathbf{v} | \alpha_0 , \mathbf{w}, \mathbf{z})]$, $\mathbb{E}_q[\sum_s \sum_i \sum_k \ln p(z_{k,s,i} | \pi^s, ..., z_{pa(k,s,i)})]$, and $\mathbb{E}_q[\sum_s \sum_i \sum_k \ln q(z_{k,s,i} | p_{k,s,i})]$. We denote those parts of the ELBO in which the variational parameter $p_{k, s, i}$ will appear as $L_{[p_{k, s, i}]}$. Yielding: $$L_{[p_{k, s, i}]} = \underbrace{\mathbb{E}_q[\ln p(\mathbf{v} | \alpha_0, \mathbf{w}, \mathbf{z})]}_{(*)} + \mathbb{E}_q \left[\sum^L_{s=0} \sum^{I(s)}_{i=1} \sum^K_{k=1} \ln p(z_{k, s, i} | \pi^s, \pi^{s0}, \pi^{s1}, z_{pa(k, s, i)})\right] - \mathbb{E}_q \left[\sum^L_{s=0} \sum^{I(s)}_{i=1} \sum^K_{k=1} \ln q(z_{k, s, i} | p_{k, s, i}) \right]$$ Now I have managed to squeeze out something vaguely sensible resembling the contribution of the 1st expectation $(*)$ to the update but it's not quite there yet.
But here is what you do for the 2nd and 3rd expectation. For the 2nd expectation, we have: $$\begin{align} \mathbb{E}_q[\ln p(z_{k, s, i} | \pi_{k, s, i})] = &\space \mathbb{E}_q[\ln ({\pi_{k, s, i}}^{z_{k, s, i}} (1 - \pi_{k, s, i})^{1 - z_{k, s, i}}) ]\\ = &\space \mathbb{E}_q[z_{k, s, i} \ln \pi_{k, s, i} + (1 - z_{k,s,i}) \ln (1 - \pi_{k, s, i})] \\ = &\space \mathbb{E}_{q(z_{k, s, i} | p_{k, s, i})}[z_{k, s, i} | p_{k,s, i}] \cdot \mathbb{E}_{q(\pi_{k, s, i})}[\ln \pi_{k, s, i} ] \\ &+ \mathbb{E}_{q(z_{k, s, i} | p_{k, s, i})}[1- z_{k, s, i} | p_{k,s, i}] \cdot \mathbb{E}_{q(\pi_{k, s, i})}[\ln (1- \pi_{k, s, i}) ] \\ = &\space p_{k,s,i} \langle\ln \pi_{k, s, i} \rangle + (1 - p_{k, s, i}) \langle\ln (1 - \pi_{k, s, i}) \rangle \end{align}$$ Where in the 3rd equality we have used the fact that the expectation of the product of $z_{k, s, i}$ and $\ln \pi_{k, s, i}$ is the product of expectations - this follows from the mean-field assumption: the two are independent once we have conditioned on the variational parameters ($p_{k, s, i}$ for $z_{k, s, i}$, and $e^s, f^s, e^{s0}, f^{s0}, e^{s1}, f^{s1}$ for $\pi_{k,s,i}$).
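This product-of-expectations step can be sanity-checked numerically. Below is a Monte Carlo sketch in Python (the Bernoulli parameter $p$ and Beta parameters $e, f$ are invented for illustration; in the model, $q(\pi_{k,s,i})$ would be the appropriate Beta factor):

```python
import numpy as np

# Monte Carlo check (made-up parameter values) of the identity
#   E_q[z ln(pi) + (1 - z) ln(1 - pi)] = p <ln pi> + (1 - p) <ln(1 - pi)>,
# which holds because z and pi are independent under the mean-field q.
rng = np.random.default_rng(0)
n = 1_000_000
p = 0.7                      # variational Bernoulli parameter on z
e, f = 2.0, 5.0              # variational Beta parameters on pi
z = rng.binomial(1, p, n)
pi = rng.beta(e, f, n)

# Left side: joint expectation estimated directly from paired samples.
lhs = np.mean(z * np.log(pi) + (1 - z) * np.log(1 - pi))
# Right side: product of marginal expectations.
rhs = p * np.mean(np.log(pi)) + (1 - p) * np.mean(np.log(1 - pi))
print(abs(lhs - rhs) < 0.01)  # True
```

The same factorisation is what licenses replacing $z_{k,s,i}$ by $p_{k,s,i}$ and $\ln \pi_{k,s,i}$ by its variational expectation in the derivation above.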
For the 3rd expectation, we have: $$\begin{align} \mathbb{E}_q[\ln q(z_{k, s, i} | p_{k, s, i})] &= \mathbb{E}_q[z_{k, s, i} \ln p_{k, s, i} + (1 - z_{k,s,i}) \ln (1 - p_{k, s, i})] \\ &= \mathbb{E}_{q(z_{k, s, i} | p_{k, s, i})}[z_{k, s, i} | p_{k,s, i}] \ln p_{k, s, i} + \mathbb{E}_{q(z_{k, s, i} | p_{k, s, i})}[1- z_{k, s, i} | p_{k,s, i}] \ln (1- p_{k, s, i}) \\ &= p_{k, s, i} \ln p_{k, s, i} + (1 - p_{k, s, i}) \ln (1 - p_{k, s,i}) \end{align}$$ Putting this all together, with $(*)$ to be completed, we have: \begin{align}L_{[p_{k,s,i}]} =& \underbrace{\mathbb{E}_q[\ln p(\mathbf{v} | \alpha_0, \mathbf{w}, \mathbf{z})]}_{(*)} \\ &+ p_{k,s,i} \langle\ln \pi_{k, s, i} \rangle + (1 - p_{k, s, i}) \langle\ln (1 - \pi_{k, s, i}) \rangle \\ &- p_{k, s, i} \ln p_{k, s, i} - (1 - p_{k, s, i}) \ln (1 - p_{k, s,i}) \end{align} After simplifying $(*)$, which I can't quite get exactly, you then maximise w.r.t. $p_{k, s, i}$ by computing partial derivatives. After collecting terms, you then rearrange for $p_{k, s, i}$ to get your update equation. You should now be able to see how it is we get the $\exp$ and $\langle \ln \pi_{k, s, i} \rangle$ terms in the update equation for $p_{k, s, i}$. To derive the rest of the update equations, you need to follow a similar strategy for each variational parameter. Addressing the 1st expectation $(*)$. Here is the working for the 1st expectation which I can't quite complete, as I don't have the context-specific knowledge (of the paper) to appropriately deal with the dimensionality of $\mathbf{z}$ and $\mathbf{w}$. I will go as far as I can before stating the specifics that prevent me from proceeding further. Perhaps you can assist on that front.
$$\begin{align} \mathbb{E}_q[\ln p(\mathbf{v} | \alpha_0, \mathbf{w}, \mathbf{z})] = &\space \mathbb{E}_q \left[\ln \frac{1}{(2 \pi)^{n/2} | \alpha_0^{-1} \mathbf{I} |^{1/2}} \exp \left( -\frac{1}{2}(\mathbf{v} - \boldsymbol{\Phi} \boldsymbol{\theta})^T(\alpha_0^{-1} \mathbf{I})^{-1}(\mathbf{v} - \boldsymbol{\Phi} \boldsymbol{\theta}) \right) \right] \\ = &\space \mathbb{E}_q\left[-\frac{\alpha_0}{2} (\mathbf{v}^T\mathbf{v} - 2 \mathbf{v}^T \boldsymbol{\Phi} \boldsymbol{\theta} + \boldsymbol{\theta}^T \boldsymbol{\Phi}^T \boldsymbol{\Phi} \boldsymbol{\theta}) - \frac{1}{2} \ln((2 \pi)^n |\alpha_0^{-1} \mathbf{I}|) \ \right] \\ = &\space \mathbb{E}_q\left[-\frac{\alpha_0}{2} (\mathbf{v}^T\mathbf{v} - 2 \mathbf{v}^T \boldsymbol{\Phi} (\mathbf{w} \odot \mathbf{z}) + (\mathbf{w} \odot \mathbf{z})^T \boldsymbol{\Phi}^T \boldsymbol{\Phi} (\mathbf{w} \odot \mathbf{z})) - \frac{n}{2} \ln(2 \pi) - \frac{n}{2} \ln (\alpha_0) \right] \\ = &\space -\frac{\langle \alpha_0 \rangle}{2} \left(\mathbf{v}^T\mathbf{v} -2 \mathbf{v}^T \boldsymbol{\Phi} (\langle \mathbf{w} \rangle \odot \langle \mathbf{z} \rangle) + (\langle \mathbf{w} \rangle^T \odot \langle \mathbf{z} \rangle^T) \boldsymbol{\Phi}^T \boldsymbol{\Phi} (\langle \mathbf{w} \rangle \odot \langle \mathbf{z} \rangle)\right) \\ &\space- \frac{n}{2} \ln(2 \pi) - \frac{n}{2} \langle \ln(\alpha_0)\rangle \\ = &\space -\frac{\langle \alpha_0 \rangle}{2} \left(\mathbf{v}^T\mathbf{v} -2 \mathbf{v}^T \boldsymbol{\Phi} (\langle \mathbf{w} \rangle \odot \mathbf{p}) + (\langle \mathbf{w} \rangle^T \odot \mathbf{p}^T) \boldsymbol{\Phi}^T \boldsymbol{\Phi} (\langle \mathbf{w} \rangle \odot \mathbf{p} )\right) \\ &\space- \frac{n}{2} \ln(2 \pi) - \frac{n}{2} \langle \ln(\alpha_0)\rangle \\ \end{align}$$ Where in going from the 2nd equality to the 3rd equality I have used the fact that the determinant of a diagonal matrix is a product of its entries.
In going from the 3rd equality to the 4th equality, I have treated $\mathbf{v}$ and $\mathbf{\Phi}$ as constants - they do not appear in the variational distribution over which you are taking expectations. The expectations $\langle \alpha_0 \rangle$ and $\langle \ln( \alpha_0 ) \rangle$ are taken with respect to the variational distribution $q(\alpha_0 | a, b)$; the expectations $\langle \mathbf{w} \rangle$ and $\langle \mathbf{z} \rangle$ are with respect to $\prod_s \prod_i \prod_k q (w_{k, s, i} | \mu_{k,s,i}, \sigma^2_{k, s, i})$ and $\prod_s \prod_i \prod_k q (z_{k, s, i} | p_{k,s,i})$. And I have again used the mean-field approximation to justify the expectation of a product of these latent variables under the variational distribution as being the product of expectations. (One caveat: for the diagonal terms of the quadratic form this is not exact, since $\mathbb{E}_q[w_{k,s,i}^2 z_{k,s,i}^2] = (\mu_{k,s,i}^2 + \sigma^2_{k,s,i}) p_{k,s,i}$ rather than $\langle w_{k,s,i} \rangle^2 p_{k,s,i}^2$, using $z^2 = z$ for a Bernoulli variable; this second-moment correction may be part of why $(*)$ does not yet match the paper's update.) For the purposes of computing update equations with respect to $p_{k, s, i}$, this final equality should be viewed with a focus on all terms containing $p_{k, s, i}$. Now what I've isolated as preventing me from going further with this is details concerning the dimensionality of $\mathbf{w}$, $\mathbf{z}$, and what I've denoted as $\mathbf{p}$. In going from the 4th to 5th equality, I have written $\langle \mathbf{z} \rangle = \mathbf{p}$ to convey the fact that $\langle z_{k, s, i} \rangle = p_{k, s, i}$, which allows me to defer specifying details about how the dimensionality/indexing of this works. You will notice that the update equation on $p_{k, s, i}$ in the paper is much more precise concerning how this dimensionality is treated, and that is something I will not be able to assist on without more intimate knowledge of the paper. In particular, the paper states under "B. Tree-Structured Bayesian Compressive Sensing" that "$\mathbf{w} \in \mathbb{R}^N$ and $\mathbf{z} \in \mathbb{R}^N$", and that "$w_i \sim \mathcal{N}(0, \alpha_i^{-1})$ and $z_i \sim \text{Bernoulli}(\pi_i)$".
However, the model and variational distribution specifies that $w_{k, s, i}$, $z_{k, s, i}$ and $p_{k, s, i}$ are indexed by $k, s, i$, and not just $i$. Hence what needs to be accounted for to get the precise update equation on $p_{k,s,i}$, is how the "block" indexing $k = 1, ..., K$ and "level" indexing $s = 1,..., L$ are treated.
48,305
Why do we not interpret main effects if interaction terms are significant in ANOVA?
Suppose that we have the following regression relationship: $y=\beta_0 + \beta_1 X + \beta_2 Z + \beta_3 X \times Z + \varepsilon$. If there is no interaction term, i.e., $y=\beta_0 + \beta_1 X + \beta_2 Z + \varepsilon$, we can interpret the main effect as usual: "Keeping the other variable fixed, a one-unit change in $X$ is associated with a change of $\beta_1$ units in $Y$". That is not true if there is an interaction term, because the effect of $X$ then depends on the value of $Z$ (through the interaction). Indeed, we can re-write the first formula as follows: $y=\beta_0 + \beta_2 Z + (\beta_1 + \beta_3 Z) X + \varepsilon$. Now we see that the coefficient of $X$ is $(\beta_1 + \beta_3 Z)$. After fixing $Z$ at a known value, we can interpret the effect of $X$ as usual. For example, with $Z=1$, the effect of $X$ is represented by $\beta_1 + \beta_3$. Please note that significance of $\beta_1$ and $\beta_3$ individually does not guarantee a significant effect of $X$ (with $Z=1$); we need to test the sum of those coefficients in this case.
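As a numeric illustration of the rewritten formula, here is a small simulation (invented coefficients, plain NumPy least squares rather than any particular regression package) showing that the slope in $X$ at a fixed $Z$ is $\beta_1 + \beta_3 Z$, not $\beta_1$ alone:

```python
import numpy as np

# Simulate y = 1 + 2*X - 1*Z + 0.5*X*Z + noise (coefficients invented).
rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=n)
Z = rng.normal(size=n)
y = 1.0 + 2.0 * X - 1.0 * Z + 0.5 * X * Z + rng.normal(scale=0.1, size=n)

# OLS fit of y on [1, X, Z, X*Z].
A = np.column_stack([np.ones(n), X, Z, X * Z])
b0, b1, b2, b3 = np.linalg.lstsq(A, y, rcond=None)[0]

# The effect of X at a fixed Z is b1 + b3*Z.
effect_at_Z1 = b1 + b3 * 1.0          # close to 2.0 + 0.5 = 2.5
effect_at_Z_minus2 = b1 + b3 * (-2.0) # close to 2.0 - 1.0 = 1.0
print(round(effect_at_Z1, 2), round(effect_at_Z_minus2, 2))
```

The same point carries over to testing: a confidence interval for the effect of $X$ at $Z=1$ must be built for the linear combination $\beta_1 + \beta_3$, not read off either coefficient alone.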
48,306
Why do we not interpret main effects if interaction terms are significant in ANOVA?
This is something which is pretty well discussed in chapter 8 of John Fox's book, Applied Regression Analysis and Generalized Linear Models, or Weisberg's Applied Linear Regression. Both emphasize that your question is related to Nelder's (1977) principle of marginality. From this last book for example: The approach to testing we adopt in this book follows from the marginality principle suggested by Nelder (1977). A lower-order term, such as the A main effect, is never tested in models that include any of its higher-order relatives like A:B, A:C, or A:B:C. [...] An analysis of variance table derived under the marginality principle has the unfortunate name of Type II analysis of variance. [...] Type III analysis of variance violates the marginality principle. It computes the test for every regressor adjusted for every other regressor; so, for example, the test for the A main effect would include the interactions A:B, A:C, and A:B:C. The key point is that, with "type II" ANOVA, the $F$-tests based on the sums of squares used in this decomposition are valid (i.e., really do test main effects) only when interaction is absent. Type III ANOVA allows for testing main effects in all cases, but asks a different research question and should not be used carelessly. As an intuitive answer, the rationale for not interpreting main effects when interaction terms are significant could be the following: if A:B is significant, then both A and B play an important role in the process. Furthermore, in many instances where we observe complex interaction patterns, asking for main effects of A and B can be simply meaningless, since the expression of A depends too much on the expression of B. (For example, let's imagine a fertilizer that would increase yields only on very wet soils, but that would drastically decrease yields on dry soils.
There would be a strong interaction fertilizer:irrigation, but it would be tricky to talk about the "main effect" of this fertilizer: this simply depends too much on watering.)
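To make the fertilizer example concrete, here is a tiny numeric sketch (the yield numbers are invented) showing how the two simple effects cancel, leaving an averaged "main effect" of zero that describes neither soil type:

```python
# Hypothetical cell means: yield by (soil condition, fertilizer).
# Fertilizer helps on wet soil and hurts on dry soil.
yields = {
    ("wet", "no"): 50, ("wet", "yes"): 70,   # +20 on wet soil
    ("dry", "no"): 40, ("dry", "yes"): 20,   # -20 on dry soil
}

effect_wet = yields[("wet", "yes")] - yields[("wet", "no")]
effect_dry = yields[("dry", "yes")] - yields[("dry", "no")]
main_effect = (effect_wet + effect_dry) / 2  # averaged over soil types

print(effect_wet, effect_dry, main_effect)  # 20 -20 0.0
```

The averaged effect of 0 is not wrong arithmetically, but it answers a question nobody growing crops on either soil type would ask.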
48,307
Bias and variance in KNN and decision trees
Fewer neighbors usually mean closer neighbors (unless there are multiple close neighbors with equal distance from the point of interest $x_0$). Modelling $x_0$ as a function of only the few closest neighbors, i.e. the most similar data points, allows for high flexibility (utilizing the features of the closest data points but not the ones farther apart) and thus low bias but high variance. Including more neighbors results in less flexibility (higher smoothness, utilizing the features of not only the closest data points but also the ones farther apart) and thus higher bias but lower variance. Take an extreme example: I can model you as equalling your twin brother or the person that is most similar to you in the whole world ($k=1$). This is highly flexible (low bias), but relying on a single data point is very risky (high variance). Or I can model you as an average (in regression) or mode (in classification) of all the people on the planet ($k=N$). This is highly inflexible (high bias) but very robust (low variance).
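The two extremes can be seen in a minimal k-NN regression sketch (invented one-dimensional data, and a hand-rolled predictor rather than any particular library): $k=1$ reproduces each noisy training label exactly, while $k=n$ predicts the same global mean everywhere.

```python
import numpy as np

# Noisy sine data (invented) to illustrate the two extremes of k.
rng = np.random.default_rng(1)
x_train = np.sort(rng.uniform(0, 10, 50))
y_train = np.sin(x_train) + rng.normal(scale=0.3, size=50)

def knn_predict(x0, k):
    # Average the labels of the k nearest training points to x0.
    idx = np.argsort(np.abs(x_train - x0))[:k]
    return y_train[idx].mean()

pred_k1 = knn_predict(x_train[10], 1)   # equals that point's noisy label
pred_kn = knn_predict(x_train[10], 50)  # equals the global mean of y_train
print(pred_k1, pred_kn)
```

With $k=1$, refitting on a fresh noisy sample would move every prediction (high variance); with $k=n$, the prediction barely moves but ignores the sine structure entirely (high bias).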
48,308
probability calibration and Brier score
The short answer is that it only makes sense to calculate the Brier score for the conditional probabilities, $\hat y = P(y=1|X)$, where $y$ is the outcome, $\hat y$ is your prediction, and $X$ are your predictors. In other words, $\hat y$ is the probability that $y=1$, conditional on this particular value of the predictors, $X$. The Brier score in this case is just $$ \frac{1}{N}\sum_i^N (\hat y_i - y_i)^2 $$ What other kinds of probability could there be? The only other option here is the marginal probability, $P(y=1)$. We can estimate this by simply counting the proportion of times $y=1$ in the data. Clearly, it doesn't make sense to use this value when calculating the Brier score! Would the classifier with smaller Brier score provide a rather better reliability curve? Yes. If your classifier predicts $\hat y = 1$ in all cases where $y = 1$ and $\hat y = 0$ where $y = 0$, it has a Brier score of $0$. If it does the opposite, it has a score of $(\pm 1)^2 = 1$. In most cases, such perfect predictions won't be possible, but a good classifier can still be well calibrated, for instance by predicting $\hat y = 0.5$ in cases where $y = 1$ half the time and $y=0$ the rest. A classifier that does this will have the lowest Brier score achievable given the information available.
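A minimal implementation of the formula above (plain Python, with made-up outcome and prediction vectors) makes the two extreme cases explicit:

```python
def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probabilities and outcomes."""
    n = len(y_true)
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / n

y = [1, 0, 1, 1, 0]
print(brier_score(y, [1, 0, 1, 1, 0]))           # perfect predictions: 0.0
print(brier_score(y, [0, 1, 0, 0, 1]))           # exactly wrong: 1.0
print(brier_score(y, [0.8, 0.2, 0.7, 0.9, 0.1])) # well-calibrated, small score
```

Note that the score is computed against the observed 0/1 outcomes, not against any estimated probability; the predictions supply the probabilities, the data supply the outcomes.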
48,309
How to interpret GLMM results?
There are 2 main problems here: As with other linear models, there is no requirement for the outcome variable to be normally distributed in a linear mixed effects model. So shapiro.test(x = Incidence$Inc.) is a waste of time, and so is any procedure that tries to find the distribution of the outcome, such as the fit.cont that you use - such things might be of interest to theoreticians but they are of very limited value to applied research. We would, however, like the residuals to be, at least approximately, normally distributed. You have fitted a Poisson model. Poisson models are for data with a count (integer) outcome. You have a numeric variable, so the first model to fit is a standard linear mixed effects model. You have only 3 levels of Season. This should probably be a fixed effect. So, with your data we can fit:
> m0 <- lmer(Inc. ~ Habitat + (1|Season) + (1|Site),
+            data = Incidence)
> summary(m0)
Linear mixed model fit by REML ['lmerMod']
Formula: Inc. ~ Habitat + (1 | Season) + (1 | Site)
   Data: Incidence
REML criterion at convergence: -78.9
Scaled residuals:
     Min       1Q   Median       3Q      Max
-1.45229 -0.30319 -0.01575  0.20558  2.53994
Random effects:
 Groups   Name        Variance  Std.Dev.
 Site     (Intercept) 0.0031294 0.05594
 Season   (Intercept) 0.0005702 0.02388
 Residual             0.0008246 0.02872
Number of obs: 31, groups:  Site, 16; Season, 3
Fixed effects:
                 Estimate Std. Error t value
(Intercept)       0.35450    0.03607   9.827
HabitatEdge      -0.32669    0.04475  -7.301
HabitatOakwood   -0.31616    0.04637  -6.818
HabitatWasteland -0.33973    0.04637  -7.326
and then we can inspect the residuals histogram:
hist(residuals(m0))
which looks fine. There is no need to perform a statistical test for normality. Note that you should probably model Season as a fixed effect, not random.
48,310
On the difference between the main effect in a one-factor and a two-factor regression
$b_2$ here corresponds to the conditional effect of $X_2$ when $X_1=0$. A common mistake is to understand $b_2$ as being the main effect of $X_2$, i.e. the average effect of $X_2$ over all possible values of $X_1$. Indeed. I typically answer at least one question per week where this mistake is made. It is also worth pointing out for completeness that $b_1$ here corresponds to the conditional effect of $X_1$ when $X_2=0$ and not the main effect of $X_1$, which is easily seen by rearranging the formula: $$Y=(b_0+b_2X_2)+(b_1+b_3X_2)X_1$$ In practice, it seems that $b_2$ and $B_2$ are reasonably close to each other. I think this is false in general for this model and will only be true when the interaction term $b_3$ is very small. Are there any "common knowledge" examples of situations where $B_2$ and $b_2$ are remarkably far from each other? Yes, when $b_3$ is meaningfully large then $B_2$ and $b_2$ will be meaningfully apart. I am thinking of how to show this algebraically and graphically but I don't have much time now, so I will resort to a simple simulation. First with no interaction:
> set.seed(25)
> N <- 100
> dt <- data.frame(X1 = rnorm(N, 0, 1), X2 = rnorm(N, 5, 1))
> X <- model.matrix(~ X1 + X2 + X1:X2, dt)
> betas <- c(10, -2, 2, 0)
> dt$Y <- X %*% betas + rnorm(N, 0, 1)
> (m1 <- lm(Y ~ X1*X2, data = dt))$coefficients[3]
  X2
2.06
> (m2 <- lm(Y ~ X2, data = dt))$coefficients[2]
  X2
1.96
as expected. And now with an interaction:
> set.seed(25)
> N <- 100
> dt <- data.frame(X1 = rnorm(N, 0, 1), X2 = rnorm(N, 5, 1))
> X <- model.matrix(~ X1 + X2 + X1:X2, dt)
> betas <- c(10, -2, 2, 10)
> dt$Y <- X %*% betas + rnorm(N, 0, 1)
> (m1 <- lm(Y ~ X1*X2, data = dt))$coefficients[3]
  X2
2.06
> (m2 <- lm(Y ~ X2, data = dt))$coefficients[2]
  X2
3.29
Are there any known upper bounds to $|b_2-B_2|$? I don't think so. As you increase $|b_3|$ then $|b_2-B_2|$ should increase.
48,311
On the difference between the main effect in a one-factor and a two-factor regression
Adding to @RobertLong's answer, there is a slight conceptual mistake in the way $b_2$ is described in the question in the case where $X_1$ was centered. It is indeed true that $b_2$ becomes the average effect of $X_2$ over all possible values of $X_1$, in the sense that $\overline{b_2+b_3X_1}=b_2$, but it should be emphasized that this is an average of simple effects. It may have nothing to do with the main effect of $X_2$ on the DV, which means that $b_2$ may be really far from $B_2$ even without interaction. Here is an example where there is no interaction, and $b_2$ and $B_2$ have nothing in common: the vertical axis is the DV $Y$, the horizontal axis is for the covariate $X_2$, and the colors stand for levels of the covariate $X_1$. For any value of $X_1$, the simple effect $b_2+b_3X_1$ is around $-1$, while the main effect $B_2$ is clearly positive.
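The picture described above can be reproduced numerically. The following sketch (invented data, NumPy least squares rather than any specific regression package) builds five levels of $X_1$ whose within-level slope of $Y$ on $X_2$ is about $-1$, yet whose marginal slope $B_2$ is clearly positive, because higher levels of $X_1$ shift both $X_2$ and $Y$ upwards:

```python
import numpy as np

# Invented data: within each level of X1 the slope of Y on X2 is -1,
# but levels with larger X1 have both larger X2 and larger Y.
rng = np.random.default_rng(2)
blocks = []
for level in range(5):                       # levels of the covariate X1
    x2 = rng.normal(loc=3 * level, size=200)
    y = 10 * level - 1.0 * x2 + rng.normal(scale=0.5, size=200)
    blocks.append((np.full(200, float(level)), x2, y))
x1 = np.concatenate([b[0] for b in blocks])
x2 = np.concatenate([b[1] for b in blocks])
y = np.concatenate([b[2] for b in blocks])

# Two-factor fit with interaction: b2 is the simple effect of X2 at X1 = 0.
A2 = np.column_stack([np.ones_like(y), x1, x2, x1 * x2])
b0, b1, b2, b3 = np.linalg.lstsq(A2, y, rcond=None)[0]

# One-factor fit: B2 is the marginal slope of Y on X2 alone.
A1 = np.column_stack([np.ones_like(y), x2])
B0, B2 = np.linalg.lstsq(A1, y, rcond=None)[0]

print(round(b2, 1), round(B2, 1))  # b2 near -1, B2 clearly positive
```

Here there is essentially no interaction ($b_3 \approx 0$), so $b_2$ is also the average simple effect over levels of $X_1$; the divergence from $B_2$ comes entirely from the confounding between $X_1$ and $X_2$.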
On the difference between the main effect in a one-factor and a two-factor regression
Adding to @RobertLong's answer, there is a slight conceptual mistake in the way $b_2$ is described in the question in the case where $X_1$ was centered. It is indeed true that $b_2$ becomes the averag
On the difference between the main effect in a one-factor and a two-factor regression Adding to @RobertLong's answer, there is a slight conceptual mistake in the way $b_2$ is described in the question in the case where $X_1$ was centered. It is indeed true that $b_2$ becomes the average effect of $X_2$ over all possible values of $X_1$, in the sense that $\overline{b_2+b_3X_1}=b_2$, but it should be emphasized that this is an average of simple effects. It may have nothing to do with the main effect of $X_2$ on the DV, which means that $b_2$ may be really far from $B_2$ even without interaction. Here is an example where there is no interaction, and $b_2$ and $B_2$ have nothing in common: the vertical axis is the DV $Y$, the horizontal axis is for the covariate $X_2$, and the colors stand for levels of the covariate $X_1$. For any value of $X_1$, the simple effect $b_2+b_3X_1$ is around $-1$, while the main effect $B_2$ is clearly positive.
On the difference between the main effect in a one-factor and a two-factor regression Adding to @RobertLong's answer, there is a slight conceptual mistake in the way $b_2$ is described in the question in the case where $X_1$ was centered. It is indeed true that $b_2$ becomes the averag
48,312
Are Brier and log-loss proper or strictly proper scoring rules?
Both are strictly proper. See Selten ("Axiomatic Characterization of the Quadratic Scoring Rule", Experimental Economics, 1998), who uses the term "incentive compatible" in place of "strictly proper". His proofs work with distributions with finite support, but apply to the continuous case as well, with the necessary modifications. See also Gneiting & Raftery ("Strictly Proper Scoring Rules, Prediction, and Estimation", JASA, 2007), who give the Brier and the log scores as examples of strictly proper scoring rules in Examples 1 & 3 in section 3.1 without further comment or explanation.
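A quick numerical check of strict propriety for the binary case (my own toy sketch, not taken from either paper): for a true event probability $q$, the expected Brier loss $q(1-p)^2 + (1-q)p^2$ and the expected log loss $-q\log p - (1-q)\log(1-p)$ are each minimized uniquely at the honest report $p = q$.

```python
import numpy as np

q = 0.3                                # true probability of the event
p = np.linspace(0.001, 0.999, 999)     # grid of candidate forecasts

# Expected losses under the true distribution (lower is better)
brier = q * (1 - p) ** 2 + (1 - q) * p ** 2
logloss = -q * np.log(p) - (1 - q) * np.log(1 - p)

p_star_brier = p[np.argmin(brier)]     # minimizer of expected Brier loss
p_star_log = p[np.argmin(logloss)]     # minimizer of expected log loss
```

Both minimizers land on $p = 0.3 = q$, which is exactly what strict propriety requires.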
48,313
Intuition behind m-out-of-n bootstrap
I would argue that it's not so much that the $m$ of $n$ bootstrap does smoothing as that it makes smoothing unnecessary. There are two components to the $m$ of $n$ bootstrap. The first is sampling just $m$ observations; the second is knowing the convergence rate. A big part of the advantage of the subsampling is being able to handle the correct rate. If a statistic is $\sqrt{n}$-consistent and based on iid observations, the ordinary bootstrap pretty much has to work (Chapter 3.6 of van der Vaart & Wellner does this). So, if you are looking to bootstrap the maximum, you need to know that it converges faster than $\sqrt{n}$ when you have a hard maximum. For example, with $U[0,\theta]$ you have $n(X_{(n)}-\theta)=O_p(1)$. That means you need to scale the variance by $m^2/n^2$, not $m/n$. Another big part is reducing ties. Again, if you're going for the maximum, the ordinary bootstrap has the same maximum as the sample 0.632 of the time, whereas the sample never has the same maximum as the generating distribution. Subsampling means that the bootstrap sample doesn't have the same maximum as the original sample, and so you get a useful distribution over the bootstrap replicates. You don't need the smoothness in the statistic, because the distribution of replicates is less discrete.
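The tie problem for the maximum is easy to see by simulation. In this sketch of mine ($m = \sqrt{n}$ is just one conventional choice, not something prescribed here), the full bootstrap reproduces the sample maximum in roughly 63.2% of replicates, while the $m$ of $n$ bootstrap almost never does:

```python
import numpy as np

rng = np.random.default_rng(1)
n, B = 1000, 4000
m = int(np.sqrt(n))                  # subsample size, a common choice

x = rng.uniform(0, 1, n)             # sample from U[0, theta], theta = 1
xmax = x.max()

# Fraction of bootstrap replicates whose maximum ties the sample maximum
ties_n = np.mean([rng.choice(x, n, replace=True).max() == xmax
                  for _ in range(B)])
ties_m = np.mean([rng.choice(x, m, replace=True).max() == xmax
                  for _ in range(B)])
```

`ties_n` comes out near $1-(1-1/n)^n \approx 0.632$, while `ties_m` is near $m/n \approx 0.03$, so the subsampled replicates give a much less discrete distribution for the maximum.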
48,314
Optimization as sampling for stochastic functions
To expand upon the solution which is hinted at in the answer of @Xi'an: Assume that $f$ is represented as $$f(x) = \mathbf{E}_{\rho(\xi)} \left[ F(x, \xi) \right]$$ where $\xi$ is some auxiliary source of randomness, and $0 \leqslant F(x, \xi) \leqslant 1$ for all $(x, \xi)$. One can then develop \begin{align} \exp(-\beta f(x)) &= \exp \left( -\beta \right) \cdot \exp \left(\beta \left\{1 - f(x) \right\} \right) \\ &= \sum_{n \geqslant 0} \frac{\beta^n e^{-\beta}}{n!} \left\{1 - f(x) \right\}^n \\ &= \mathbf{E}_{N \sim \text{Po}(\beta)} \left[ \left\{1 - f(x) \right\}^N \right] \\ &= \mathbf{E}_{N \sim \text{Po}(\beta)} \left[ \prod_{a = 1}^N \mathbf{E}_{\rho(\xi^a)} \left[ 1 - F \left(x, \xi^a \right) \right] \right]. \end{align} This implies that if we write down the joint distribution $$ \Pi \left( x, N, \{ \xi^a \}_{a = 1}^N \right) \propto \frac{\beta^N e^{-\beta}}{N!} \cdot \prod_{a = 1}^N \left\{ \rho(\xi^a) \left[ 1 - F \left(x, \xi^a \right) \right] \right\},$$ then the $x$-marginal is given by $\mu_\beta (x) \propto \exp(-\beta f(x))$. This enables the application of a Pseudo-Marginal Metropolis-Hastings MCMC algorithm. Consider the proposal $$Q \left( (x, N, \Xi) \to (x', N', \Xi') \right) = q ( x \to x' ) \cdot \text{Po} ( N' | \beta ) \cdot \prod_{b = 1}^{N'} \rho ( \xi'^b ).$$ Working through the details, one can compute that the Metropolis-Hastings ratio simplifies to $$r \left( (x, N, \Xi) \to (x', N', \Xi') \right) = \frac{q ( x' \to x )}{q ( x \to x' )} \cdot \frac{ \prod_{b = 1}^{N'} \left[ 1 - F \left(x', \xi'^b \right) \right] }{ \prod_{a = 1}^N \left[ 1 - F \left(x, \xi^a \right) \right]}$$ which can be computed exactly, allowing for a tractable Metropolis-Hastings correction. This means that one can generate a Markov chain with $\Pi \left( x, N, \Xi \right)$ as its invariant measure, and hence the $x$-marginal of the chain will converge to $\mu_\beta$ as desired.
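A minimal runnable sketch of this construction, under toy choices of my own (none of which come from the answer): $F(x, \xi) = 1 - e^{-(x-\xi)^2}$ with $\xi \sim N(0,1)$, so that $f$ is minimized at $x = 0$; a flat prior restricting $x$ to $[-3, 3]$; and a Gaussian random walk for $q$. Each proposal draws a fresh $N' \sim \text{Po}(\beta)$ and fresh $\xi'^b$'s, and the move is accepted with the exact ratio, computed in log space for numerical safety.

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 5.0

def refresh(x):
    # Draw N ~ Po(beta) and xi^1..xi^N ~ rho = N(0,1); return the log of
    # the product term prod_a [1 - F(x, xi^a)], where
    # 1 - F(x, xi) = exp(-(x - xi)^2)
    N = rng.poisson(beta)
    xi = rng.normal(0.0, 1.0, N)
    return np.sum(-(x - xi) ** 2)

x = 0.5
lw = refresh(x)                            # log-weight of the current state
chain = []
for _ in range(20000):
    x_prop = x + rng.normal(0.0, 0.5)      # symmetric walk: q-ratio is 1
    if abs(x_prop) <= 3.0:                 # flat prior on [-3, 3]
        lw_prop = refresh(x_prop)          # fresh N' and xi'^b's
        if np.log(rng.uniform()) < lw_prop - lw:
            x, lw = x_prop, lw_prop
    chain.append(x)
chain = np.asarray(chain)
```

Because this target is symmetric about $0$ and $f$ is smallest there, the chain should center on $0$ and concentrate more mass near the minimizer than a uniform draw over $[-3,3]$ would.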
48,315
Optimization as sampling for stochastic functions
This is a very interesting question for which there is no clear-cut answer. It all depends on the computing budget, and the output of a realistic answer will depend on this computing budget. My suggestion would be to mix (i) simulated annealing, that is, simulating from a target like $$h_t(x)\propto e^{-T_t \cdot \mathbb E[f(x)]}\qquad T_t \uparrow \infty$$ where the temperature $T_t$ is slowly increasing with $t$, (ii) pseudo-marginal Metropolis-Hastings, where the value of the target is replaced with an unbiased estimate at each iteration, and (iii) debiasing à la Glynn and Rhee, as in Russian roulette estimators, where a converging sequence of biased estimators $\hat\eta_n$ is turned into an unbiased estimator $$\sum_{n=1}^G \{\hat\eta_{n+1}-\hat\eta_n\}/\mathbb P(G\ge n)$$ $G$ being an integer-valued random variable (like a Poisson). This last step involves computing a random number $G$ of realisations of $f(x)$. An alternative is to use stochastic optimisation, by considering the sequence $(X_n)_n$ such that $$X_{n+1}=X_n-\epsilon_n \nabla f(X_n)\qquad \epsilon_n\downarrow 0$$ where $\nabla f$ denotes a realisation of the gradient of $f$, i.e. $$\mathbb E[\nabla f(X_n)] = \nabla\, \mathbb E[f(X_n)]$$ If this is impossible to obtain, a finite-difference approach is the Kiefer-Wolfowitz algorithm $$X_{n+1}=X_n-\epsilon_n \dfrac{f(X_n+\upsilon_n)-f(X_n-\upsilon_n)}{2\upsilon_n}\qquad \epsilon_n,\upsilon_n\downarrow 0$$
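The Kiefer-Wolfowitz recursion in the last display can be sketched in a few lines. This is my own toy setup (objective, noise level, and step-size schedules are all invented for illustration): minimize $\mathbb E[f(x)]$ with noisy evaluations $f(x) = (x-2)^2 + \text{noise}$, so the recursion should drift toward $x = 2$.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    # Noisy evaluation of the objective; E[f(x)] = (x - 2)^2
    return (x - 2.0) ** 2 + rng.normal(0.0, 0.1)

x = 0.0
for n in range(1, 20001):
    eps = 1.0 / n              # step sizes: eps_n -> 0, sum eps_n = inf
    ups = 0.5 / n ** 0.25      # finite-difference width, shrinking slower
    grad_hat = (f(x + ups) - f(x - ups)) / (2.0 * ups)
    x = x - eps * grad_hat     # Kiefer-Wolfowitz update
```

After a few thousand iterations the iterate settles close to the minimizer $x = 2$, despite never observing the gradient directly.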
48,316
Test if function "raises faster then linear"
Saying that a function “raises faster then linear” essentially means that its derivative increases; that is, its second derivative is positive. The way you approximate the second derivative of a function is with a parabola. This is true for Taylor decomposition, when you want to approximate a function starting from a point evaluation of the function and its derivatives, but it works also for least squares. When fitting a straight line to your data, you are imposing a model with constant first derivative; this can be amended by adding a quadratic term, so that the second derivative is constant, and you can allow it to vary by adding a cubic term, and so on. But don't worry about how that (second) derivative varies; just settle for a mean estimate, it's the best thing you can use for testing. When you consider a null model, that's the average $y$ value. When you have a linear model, the slope measures the average increment; when you include a quadratic term, that's the average second derivative. Simply test that for being positive.
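To make this concrete, here is a small sketch of my own (simulated data with $y = x^{1.5} + \varepsilon$, which grows faster than linear): fit $y = c + bx + ax^2$ by ordinary least squares and test whether the quadratic coefficient $a$ (half the average second derivative) is positive.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = np.linspace(0, 10, n)
y = x ** 1.5 + rng.normal(0.0, 1.0, n)   # grows faster than linear

# OLS fit of y on [1, x, x^2]
X = np.column_stack([np.ones(n), x, x ** 2])
coef = np.linalg.lstsq(X, y, rcond=None)[0]
a = coef[2]                              # average curvature term

# t-statistic for the quadratic coefficient
resid = y - X @ coef
sigma2 = resid @ resid / (n - 3)
cov = sigma2 * np.linalg.inv(X.T @ X)
t_a = a / np.sqrt(cov[2, 2])
```

A clearly positive `a` with a large `t_a` supports super-linear growth; for a linear or sub-linear series, `a` would hover around or below zero.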
48,317
Test if function "raises faster then linear"
Assuming you already know that $f$ is increasing, we can further posit that it increases super-linearly if its first derivative is monotone increasing in $x$ (this also makes it a convex function). Since we’re working with a discrete, countable set of observations $$\{ (x_1, f_1) , (x_2, f_2), \dots, (x_n, f_n) \}$$ we can’t observe derivatives. But we can take a look at some form of discrete derivative, such as the forward difference of the series $$\Delta f_i = f_{i+1} - f_i$$ for $i \in \{1, \dots, n-1\}$ (in this case, you'll have to discard the last observation $x_n$). Fitting a polynomial or a particular function by regressing $\Delta f_i$ on $x_i$ and checking the significance of the coefficients is not a robust solution, since the functional form of the derivative can really take any non-polynomial shape. Also, p-values of the regression coefficients aren’t accurate if there are significant departures from normality. This is why I would instead recommend checking something like the rank correlation between $\Delta f$ and $x$. Namely, Spearman correlation $\rho$ is a non-parametric correlation based on ranks which assesses the monotonicity between two variables. And its statistical distribution is known both in small samples and large samples. Thus, the one-sided test $$H_0: \rho( \Delta f , x) = 0$$ $$H_A: \rho( \Delta f , x) > 0$$ if rejected, would lend credence to the claim that $f$ is indeed super-linear in $x$. Numerical example. Here, I'll generate two functions $f_0$ and $f_1$ with exponents $p$ of .8 and 1.2, respectively. Then I'll show that Spearman correlation can distinguish which one is super-linear.

import numpy as np
from scipy.stats import spearmanr as sp          # this is spearman correlation

delta = lambda series: series[1:] - series[:-1]  # forward diff operator

n = 100                                          # size of sample
x = np.linspace(0, 100, n)                       # x series
e = np.random.normal(0, 1, n)                    # noise term
f0 = x**.8 + e                                   # sub-linear function of x
f1 = x**1.2 + e                                  # super-linear function of x

sp(delta(f0), x[:-1])
correlation=-0.034, pvalue=0.735
sp(delta(f1), x[:-1])
correlation=0.309, pvalue=0.002

While it doesn't invalidate the results of this experiment, keep in mind that this p-value (from scipy) is for a two-sided test; to get an accurate Type I error rate in your case, you are looking for a one-sided test.
48,318
Are Neural Networks Mixture Models?
They both fall into the general domain of graphical models. As you've pointed out, they are very similar to each other, for they both have hidden layers and both require iterative methods to perform inference tasks. But they were proposed on different initial ideas. "Neural network" was originally proposed by the connectionists and is now very active in the machine learning community, while "mixture model", or more generally "latent variable model", is a category of classical models in the statistics community. Neural networks (in machine learning) focus mainly on minimizing the prediction error; as long as the prediction error is minimized, it doesn't matter how you interpret the mathematical equations, or how many hidden layers/nodes you used in the model. On the other hand, mixture models (in statistics) focus mainly on maximizing the marginal likelihood, and every hidden layer and node matters because each hidden node or layer must have a corresponding real-world explanation. The difference in initial purpose leads to some minor differences in the mathematical equations and terms. For example, the "activation function" in neural networks plays the same role as the "conditional probability distribution function" in mixture models. Nowadays there's a tendency to unify the terms of different communities with graphical-model language. For example, from the graphical-model perspective, whether it's an "activation function" or a "conditional probability distribution function", they are all called "factors".
48,319
Are Neural Networks Mixture Models?
No, neural networks per se are not mixture models, though ideas from mixture models have influenced some neural network modules, like softmax attention. Mixture models require a mixing in the probability density function, corresponding to the logsumexp() function. Some NNs use logsumexp, on pdf quantities and on non-pdf quantities. Thus NNs are not necessarily MMs, but may use MM subcomponents, and vice versa.
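To illustrate the logsumexp point (a toy sketch of my own, with made-up weights and means): the log-density of a two-component Gaussian mixture is exactly a logsumexp over the component log-densities plus log-weights.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

w = np.array([0.3, 0.7])       # mixing weights
mu = np.array([-1.0, 2.0])     # component means
x = 0.5                        # evaluation point

# Mixture log-density via logsumexp: log sum_k w_k N(x; mu_k, 1)
log_p = logsumexp(np.log(w) + norm.logpdf(x, loc=mu, scale=1.0))

# Direct computation of the mixture density, for comparison
p_direct = np.sum(w * norm.pdf(x, loc=mu, scale=1.0))
```

Working in log space via logsumexp is the standard way to evaluate mixture likelihoods stably, which is why the operation keeps appearing in both MM and NN code.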
48,320
Are Neural Networks Mixture Models?
Also to my understanding, in a neural network classifier with 1 hidden layer, you have a mixture of functions (sigmoids, relus, etc) that are aggregated ... So my question is: do neural networks fall in the general domain of mixture models? If so, why are they never referred to as such?

You can consider a single-layer model as a mixture model. And it is not true that they are never referred to as such. The link between neural networks and mixture models is not an unmentioned topic in the literature: Why is the posterior of a neural network gaussian process equal to the posterior of a neural network in the limit of infinite width layers? and, for special cases (the neural network Gaussian process), deep networks are also in the limit equivalent to a mixture model.

But neural networks can be more complex and compound several layers together. Also, interactions might be non-linear. So while there are probably historical reasons for the term neural network, neural networks are also a lot different from mixture models, and in practice they are different techniques. Neural networks are more about creating networks with multiple complex links, whereas mixture models are more straightforward single-layer models with predefined elements to be mixed.

To take your example with the 5-layer model (which I believe can be expressed in 3 layers as well), the neural network is about blending the layers together. I have tried to re-express it in the image below (this is my own brain doing the links in the neural network):

In the last layer you have two nodes E and F that give an output between 0 and 1 depending on the values of the nodes before that, C and D, but effectively they are functions of the original inputs X and Y, and in some sense you can see it as a model that adds up a mixture of functions with extremely many parameters. The power of network models is that:

- With every layer you exponentially grow the number of elements in the mixture.
- You don't define the functions that are being mixed. Instead you let the network define these functions out of a much larger, infinite space of potential functions.
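The "mixture of functions" view of a single hidden layer can be written out directly. In this sketch of my own (random toy weights, a 4-unit hidden layer over inputs $X$ and $Y$), the network output is literally a weighted sum of sigmoid basis functions of the input:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(5)
W1 = rng.normal(0, 1, (4, 2))   # hidden layer: 4 units over inputs (X, Y)
b1 = rng.normal(0, 1, 4)
w2 = rng.normal(0, 1, 4)        # output weights: the "mixing" coefficients
b2 = 0.1

xy = np.array([0.3, -0.7])      # a single input point (X, Y)

# Standard forward pass
hidden = sigmoid(W1 @ xy + b1)
out = w2 @ hidden + b2

# The same output, written explicitly as a sum over "mixture components"
out_as_mixture = sum(w2[k] * sigmoid(W1[k] @ xy + b1[k])
                     for k in range(4)) + b2
```

The two expressions agree exactly; stacking a second hidden layer would make each "component" itself a blend of the first layer's functions, which is the exponential growth described above.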
48,321
Relation between Pi and the Median
To answer your specific question about the median, there's a passage in your second link, in the section on sample median: Sampling distribution ... The distribution of the sample median from a population with a density function $f(x)$ is asymptotically normal with mean $m$ and variance $$\frac{1}{4nf(m)^2}$$ where $m$ is the median of $f(x)$ and $n$ is the sample size. The example pulled from your first link assumes a normal population with mean $m$ and unit variance. In this case, $$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(\frac{-(x-m)^2}{2\sigma^2}\right)=\frac{1}{\sqrt{2\pi}}\exp\left(\frac{-(x-m)^2}{2}\right)$$ The normal distribution's mean is equal to its median, so $f(m)=\frac{1}{\sqrt{2\pi}}$. Putting it together then, we have that the sampling distribution of the median is asymptotically normal with mean $m$, and variance given by $$\frac{1}{4n\left(\frac{1}{\sqrt{2\pi}}\right)^2}=\frac{2\pi}{4n}=\frac{\pi}{2n}$$ That's all to say that the reason $\pi$ shows up here is that the formula for the asymptotic variance includes the density function of the population, evaluated at the median. When the population is normally distributed, this brings $\pi$ into the term for the variance, because of the $\pi$ constant in the normal density. As for why $\pi$ shows up in the normal density, @jld's comment and @whuber's link to his answer offer great insight.
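A small simulation makes the $\pi/(2n)$ result concrete; this is just an illustrative Python sketch (sample size and seed are arbitrary choices):

```python
import numpy as np

# Check the asymptotic result: for standard-normal samples of size n,
# Var(sample median) ~ pi / (2 n).
rng = np.random.default_rng(0)
n, n_sim = 101, 20000

# Draw n_sim samples of size n and take the median of each.
medians = np.median(rng.standard_normal((n_sim, n)), axis=1)

simulated = medians.var()
theoretical = np.pi / (2 * n)
# The two should agree to within simulation noise.
```

For moderate odd $n$ the simulated variance lands close to $\pi/(2n)$, which is the $\pi$ appearing "out of nowhere" in the question.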
Wilcoxon Rank Sum Test vs $t$-test Power Simulation
You say "It is well known that the Wilcoxon rank sum test is more powerful for detecting shifts in location when the data is non-normal" but stated so generally, this is not actually the case. It is not well known at all (for all that it might be widely believed), because it's not true. For some non-normal distributions, sure, but not for all of them. For example, with a beta(2,2) distribution, the relative power for the Wilcoxon vs t is even worse than it is at the normal (specifically, the asymptotic relative efficiency is about 86% vs about 95% at the normal). Generally speaking the Wilcoxon will beat the t on power with shift alternatives on heavier-tailed symmetric distributions, but outside that it's sometimes less powerful. What makes it a good choice, however, is that it's not much less powerful (while the t can sometimes be much less powerful than the Wilcoxon). I should point out that your code does not compare a location shift alternative; you have a change of scale there. A location shift could be obtained by adding something to the y-values instead. (However, with strictly positive random variables, changes of scale may well make more sense to investigate, since those may more typically be the sort of thing you would see.) However, one thing to note when comparing under such situations is that the type I error rate (actual significance level) of the t can be impacted - which will move the whole power curve up or down. To my mind it would make sense to consider separately the effect on $α$ and the effect on power at the same true significance level. This involves figuring out the effect on the significance level first and then adjusting the nominal significance level to compare power at the same true significance level, so that you're correctly interpreting the cause (e.g. is a lower rejection rate mainly due to conservatism or is it lower curvature of the power function?). You can do slightly better in your simulation by comparing on the same samples. 
This reduces the variation. For the power comparison you were using in your question, you could do something like this:

    lam1 <- 1/1.0; n1 <- 25
    lam2 <- 1/1.5; n2 <- 25
    nsim <- 10000
    ps <- replicate(nsim, {
      x <- rexp(n1, lam1)
      y <- rexp(n2, lam2)
      c(wp = wilcox.test(x, y)$p.value, tp = t.test(x, y)$p.value)
    })
    (power <- rowMeans(ps <= 0.05))
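A quick way to see the level distortion the answer warns about is to simulate under the null (both samples from the same exponential distribution) and record each test's rejection rate at nominal 0.05; the 5th percentile of the null p-values then gives the cutoff that makes the true level 0.05. A sketch of that idea, in Python here purely for illustration (the simulation above is in R):

```python
import numpy as np
from scipy import stats

# Under the NULL (both samples exponential with the same rate), how often does
# each test reject at nominal alpha = 0.05? This is the actual significance level.
rng = np.random.default_rng(1)
n1 = n2 = 25
n_sim = 5000

t_p = np.empty(n_sim)
w_p = np.empty(n_sim)
for i in range(n_sim):
    x = rng.exponential(1.0, n1)
    y = rng.exponential(1.0, n2)
    t_p[i] = stats.ttest_ind(x, y).pvalue
    w_p[i] = stats.mannwhitneyu(x, y).pvalue

t_level = (t_p <= 0.05).mean()
w_level = (w_p <= 0.05).mean()

# If t_level is far from 0.05, power comparisons at nominal 0.05 conflate level
# distortion with genuine power differences. One remedy: use the nominal cutoff
# whose simulated null rejection rate is 0.05 (the 5th percentile of null p-values).
adj_alpha = np.quantile(t_p, 0.05)
```

Power curves recomputed at `adj_alpha` for the t-test are then comparable at the same true significance level, which is the comparison the answer recommends.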
Sufficient statistics are not unique?
No; if they were unique, a transformation of a sufficient statistic wouldn't itself be a sufficient statistic (unless it were the identity transformation). For example, let $T(\mathbf{y})=\bar y$, i.e. the sample mean. A 1-1 transformation here would be scaling, e.g. $T_1(\mathbf{y})=2\bar y$. This means any information that can be provided by the sample mean can also be provided by $2$ times the sample mean.
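One way to see why any 1-1 transformation preserves sufficiency is that a statistic matters only through the partition it induces on the sample space (which samples get lumped together), and a 1-1 transformation induces exactly the same partition. An illustrative Python sketch on a tiny discrete sample space (the setup is my own toy example, not from the question):

```python
from itertools import product

def partition(samples, stat):
    """Group samples by the value of the statistic; return the induced partition."""
    groups = {}
    for s in samples:
        groups.setdefault(stat(s), []).append(s)
    return sorted(sorted(g) for g in groups.values())

# Tiny discrete sample space: all samples of size 3 with entries in {0, 1, 2}.
samples = list(product(range(3), repeat=3))

T  = lambda y: sum(y) / len(y)        # sample mean
T1 = lambda y: 2 * sum(y) / len(y)    # 2 * sample mean (a 1-1 transform)

# Different numerical values, but exactly the same grouping of samples.
same_partition = partition(samples, T) == partition(samples, T1)
```

Since sufficiency depends only on this grouping, $T$ and $T_1$ are sufficient for exactly the same models.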
Confusion about Karush-Kuhn-Tucker conditions in SVM derivation
I've got the answer, thanks to DanielTheRocketMan for providing half of it! For $x^{(i)}$ that is a support vector, the following equality holds: $$ y^{(i)} (w^Tx^{(i)} + b) = 1 $$ This satisfies the constraint irrespective of $\alpha_i$. For $x^{(i)}$ that is not a support vector, the following inequality holds: $$ y^{(i)} (w^Tx^{(i)} + b) \gt 1 $$ Then in the Lagrangian below (which is being maximised in the dual formulation) it can be seen that, for such $x^{(i)}$, increasing the value of $\alpha_i$ would lower the value of the objective function we are trying to maximise; therefore the smallest possible value is chosen, thus $\alpha_i = 0$ (there is a non-negativity constraint on $\alpha_i$, see the question above). And so the constraint is satisfied in this case too. $$ \mathcal{L}(w,b,\alpha) = \frac{1}{2} ||w||^2 - \sum_{i=1}^m \alpha_i [y^{(i)}(w^Tx^{(i)} + b) - 1] $$ Therefore, the constraint holds implicitly for all $i = 1,\ldots,m$ and need not be included in the problem definition.
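The two cases are exactly the KKT complementary slackness condition, which ties them into a single equation (this is standard KKT, not anything specific to this derivation):

```latex
\alpha_i \left[ \, y^{(i)} \left( w^T x^{(i)} + b \right) - 1 \, \right] = 0,
\qquad i = 1, \ldots, m
```

For each $i$, either $\alpha_i = 0$ (a non-support vector) or the constraint is active, $y^{(i)}(w^Tx^{(i)}+b)=1$ (a support vector).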
Confusion about Karush-Kuhn-Tucker conditions in SVM derivation
I believe that your problem is that you are not seeing the geometry of the problem. See the figure in the Wikipedia article. Since your points are separable, you can always find a vector $w$ and a scalar $b$ that ensure this constraint. Note that $y_i=1$ or $y_i=-1$. You basically must choose the separating hyperplane $w^Tx + b=0\;\;\; (1)$ and the scaling $1/||w||=\text{margin}\;\;\; (2)$. See figure. This ensures the validity of the constraint. Note that equation (1) is about choosing the correct slope and equation (2) is about choosing the correct size of the parameters.
lme4 - correct formula for a crossed factor nested mixed model
It makes very little sense to me to treat Month as fixed while also having Light and Nutrients nested within it. Either you treat Month and Light as crossed random factors, in which case the random part of the formula will be:

    (1 | Month) + (1 | Light) ...

Or you treat Month as random with Light nested within Month, which would imply:

    (1 | Month) + (1 | Month:Light)

and you also have Nutrients nested within Month, which would imply:

    (1 | Month) + (1 | Month:Nutrients)

So combining these we arrive at:

    (1 | Month) + (1 | Month:Light) + (1 | Month:Nutrients)

Edit: To address the concern in the comment: The singular fit implies that at least one of those random factors may not be needed. As for the extra interaction term between Light and Nutrients, you could add a further random factor for that. However, whether that would be useful depends on what exactly you mean by "a key interest". Modelling it as random will only result in an estimate of its variance. If you are interested in the fixed effect of that interaction, that implies they should be fixed factors and not random at all. This would then leave only Month as the random factor in the model, which could very well be appropriate. In conjunction with this, and on re-reading the question, where you say that Light and Nutrients are "treatments", this usually means you should model them as fixed. Just because these factors can be thought of as a sample from a larger population does not mean they have to be random. Moreover, I don't know how many levels of these factors you used. If there are a large number then modelling them as random may indeed be better. But if they are few, perhaps a more appropriate model is:

    lmer(growth.day ~ Light * Nutrients + (1 | Month), ...)
What is the disadvantage of repeated cross-validation?
There is no disadvantage in doing repeated CV in comparison with a single run of CV. If anything, repeated CV should decrease the variance of our estimate. An excellent and highly-cited overview of cross-validation procedures can be found in Arlot & Celisse (2010) A survey of cross-validation procedures for model selection. The paper is admittedly a bit long (it is a survey after all) but even reading the last Section on "Conclusion: which cross-validation method for which problem?" is very enlightening. It discusses how no single CV procedure is universally better but we should focus on the particular setting (e.g. variable selection vs. choosing the best among two learning procedures).
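The variance-reduction claim can be illustrated with a toy simulation: compare the spread of single k-fold CV estimates with the spread of estimates averaged over several repeats. A Python sketch with a deliberately trivial "predict the training mean" model (all settings here are made-up illustration, not from any paper):

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, 60)          # one fixed dataset
k, n_repeats, n_trials = 5, 10, 200

def kfold_mse(y, k, rng):
    """One k-fold CV estimate of MSE for the 'predict the training mean' model."""
    idx = rng.permutation(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        pred = y[train].mean()
        errs.append(np.mean((y[fold] - pred) ** 2))
    return np.mean(errs)

# Spread of single-run estimates vs. spread of repeated-CV (averaged) estimates,
# on the SAME data -- the only randomness is the fold assignment.
single = [kfold_mse(y, k, rng) for _ in range(n_trials)]
repeated = [np.mean([kfold_mse(y, k, rng) for _ in range(n_repeats)])
            for _ in range(n_trials)]
```

Averaging over repeats shrinks the variability due to the arbitrary random split, which is exactly the advantage of repeated CV.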
What is the disadvantage of repeated cross-validation?
With regards to the question of disadvantage, I think that needs refining. Disadvantage compared to k-fold CV? For large samples, it's computational time (as you noted). For small samples, there is no apparent disadvantage for repeated k-fold CV. Disadvantage compared to bootstrapping? For small samples, bootstrapping can be better at choosing between models because it will pick up issues with important variables being dropped. At larger samples, bootstrapping can be problematic because of overfitting. This article looks at bias and variance between bootstrapping and repeated 10-fold CV, with a note that they are surprised by the low variance of the repeated 10-fold CV resampling compared to bootstrapping.
Difference between Wald test and Chi-squared test
The relationship between the Wald test and the Pearson $\chi^2$ is a particular example of the relationship between Wald tests and score tests. The Wald test statistic for the difference between a value of a parameter $\hat \theta$ estimated from a data sample and a null-hypothesis value $\theta_0$ is: $$W = \frac{ ( \widehat{ \theta}-\theta_0 )^2 }{\operatorname{var}(\hat \theta )}$$ The Pearson $\chi^2$ test statistic for differences in a contingency table between a set of observed counts ($O_i$) and those expected based on a null hypothesis like independence, $E_i$, is: $$\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}$$ where the sum is over all cells $i$ of the contingency table. In both Wald and Pearson tests, the numerator terms represent squared differences between values found from the data sample and those expected under the null hypothesis. The relationship you intuit between the Pearson statistic and the Wald statistic becomes clear if you think about a contingency table as representing a sample from a set of Poisson-distributed count variables, one for each cell. (That's the basis of Poisson regression, also called log-linear modeling, for contingency-table analysis.) For a Poisson distribution the variance equals the mean, so the denominator terms can be considered the estimated variances of counts in each cell of the table under the null hypothesis. In contrast, the denominator in the single-parameter Wald statistic is the variance around the estimated value of the parameter. For Wald tests of more complicated hypotheses the denominator is related to the covariance matrix of the parameter estimates, still evaluated around the estimated parameter values. So in both tests the denominator involves a variance estimate, one evaluated at the null hypothesis and the other around the estimated parameter values. 
This is as explained by @gung in this answer about likelihood-ratio, Wald, and score tests, as the Pearson test is a particular example of a score test.* Wald tests are evaluated at the parameter values estimated by maximum likelihood while score tests are evaluated at the null hypothesis. In practice, statistical software will report all 3 tests for a full multiple-regression model fit by maximum likelihood but usually only Wald tests for individual coefficients. That's mostly because the parameter covariance estimates for the Wald test can be obtained directly from the numerical approximations made in fitting the model, while there has to be re-evaluation of the model with respect to each parameter to get p-values and confidence intervals based on the other tests. That doesn't mean that Wald tests are "better" in any sense other than convenience. With small samples Wald tests are often considered the least reliable. *This paper by Gordon Smyth provides a proof of how a more general Pearson goodness-of-fit test is equivalent to a score test. Score tests are sometimes called Lagrange-multiplier tests.
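The binomial proportion is the simplest setting in which to see the statistics side by side: the Pearson statistic on the 2-cell table equals the score statistic exactly (variance evaluated at the null), while the Wald statistic differs (variance evaluated at the estimate). A small numerical check in Python (the counts are made-up example data):

```python
import numpy as np

# Hypothetical data: k successes out of n, null hypothesis p0.
n, k, p0 = 100, 62, 0.5
p_hat = k / n

# Pearson chi-squared on the 2-cell table (observed vs expected counts):
O = np.array([k, n - k])
E = np.array([n * p0, n * (1 - p0)])
pearson = np.sum((O - E) ** 2 / E)

# Score test: variance evaluated at the NULL value p0 -- identical to Pearson.
score = (p_hat - p0) ** 2 / (p0 * (1 - p0) / n)

# Wald test: variance evaluated at the ESTIMATE p_hat -- generally different.
wald = (p_hat - p0) ** 2 / (p_hat * (1 - p_hat) / n)
```

The algebra behind the exact match: $\sum_i (O_i-E_i)^2/E_i = n(\hat p - p_0)^2\left[\frac{1}{p_0}+\frac{1}{1-p_0}\right]/n \cdot n = \frac{(\hat p - p_0)^2}{p_0(1-p_0)/n}$, while the Wald denominator swaps $p_0(1-p_0)$ for $\hat p(1-\hat p)$.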
Is it possible to simultaneously call multiple regression coefficients significant?
It's true that with 40 variables, you would expect two to be significant (when using the conventional alpha of .05) by chance alone. That's why we shouldn't interpret the individual t-tests for a multiple regression model until after assessing the F-test for the model taken as a whole. (It may help you to read my answer here: Significance contradiction in linear regression: significant t-test for a coefficient vs non-significant overall F-statistic.) If I had a model with a non-significant F-test, but 2 out of 40 variables were individually significant, I would interpret those results as not actually meaningful. To be clear, I would not quite say, 'these effects are not real', because that isn't a valid interpretation of a non-significant result (see: Why do statisticians say a non-significant result means “you can't reject the null” as opposed to accepting the null hypothesis?) I would say that you don't have sufficient evidence to reject the null. Regarding the follow-up question (how to know which 2 of 4 are real and which fake), you'll never know if significant variables have a "real" relationship with the response, and you'll never know if non-significant variables don't. That's part of the nature of the game we're playing. If you were really concerned about the possibility that some of your results might be false discoveries, you could use false discovery procedures to explore that, but it isn't common to do so in the kind of situation you describe.
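The "two significant by chance" arithmetic is easy to verify by simulation: regress pure noise on 40 pure-noise predictors many times and count the individually significant coefficients. An illustrative Python sketch (sample sizes and seed are arbitrary):

```python
import numpy as np
from scipy import stats

# Global null: 40 noise regressors, independent noise response. On average
# about alpha * 40 = 2 coefficients per model come out "significant" by chance.
rng = np.random.default_rng(3)
n, p, n_sim, alpha = 100, 40, 200, 0.05

counts = []
for _ in range(n_sim):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)
    Xd = np.column_stack([np.ones(n), X])              # add intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    df = n - p - 1
    sigma2 = resid @ resid / df
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
    t_stats = beta / se
    p_vals = 2 * stats.t.sf(np.abs(t_stats[1:]), df)   # skip the intercept
    counts.append(int((p_vals < alpha).sum()))

mean_false_positives = np.mean(counts)   # hovers around 2
```

This is exactly why the individual t-tests should not be interpreted until the overall F-test has been assessed.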
Is it possible to simultaneously call multiple regression coefficients significant?
If I'm reading your question correctly, you are asking about multiple comparisons in statistics, which is a well-known phenomenon. To remedy this, you can correct your significance level using something like a Bonferroni correction, although many other correction methods exist. Note that you also need to consider the implications of your analysis. Is it something that will dictate how your company or medical department operates? Then you should probably take it more seriously than if your study is simply meant to pave the way for and direct future research. Multiple comparison, and how we address it, is in essence a trade-off between type-1 and type-2 errors. EDIT: After reading your edit, this may be my main point: does significance at the regression-coefficient level mean anything in this case? The coefficient tells you both the direction and the size of the impact your covariates have on the dependent variable, with respect to the model as a whole. If the p-value of a covariate is significant, but the model contains so many covariates with no pre-determined hypothesis that you suspect the unadjusted p-values may be erroneous, then the coefficient and its CI may be inflated, even if the effect is truly significant. Another important consideration is how well you have controlled for confounders, as these can dramatically change the coefficients in your model. Related literature on this topic.
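A Bonferroni correction is simple to apply by hand: test each p-value against alpha / m, where m is the number of tests. A minimal sketch (the p-values below are invented for illustration):

```python
import numpy as np

# Hypothetical unadjusted p-values from 5 coefficient t-tests
p_values = np.array([0.001, 0.012, 0.030, 0.200, 0.800])
m, alpha = len(p_values), 0.05

# Bonferroni: compare each p-value with alpha / m (equivalently,
# multiply each p-value by m and compare with alpha)
significant = p_values < alpha / m   # alpha / m = 0.01 here
```

With the nominal alpha of 0.05, three of these would be called significant; after the correction only the first survives, illustrating the type-1/type-2 trade-off mentioned above.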
Representing a GAM with truncated power basis as a mixed model
The model you discuss in your question can be written as $$ y = X \beta + F b+e $$ where $X$ is the matrix with columns equal to $1, x, x^2, x^3,...$ and $F$ is a matrix whose columns are obtained by computing the truncated polynomials. The (penalized) objective function is then: $$ Q_{p} = \|y - X \beta - F b\|^2 + k\|b\|^{2} $$ Only the $b$ coefficients are shrunk. To compute $\beta$ and $b$ we need to solve the following system of penalized normal equations: $$ \left[ \begin{array}{ll} X'X & X'F \\ F' X & F'F + kI \end{array} \right] \left[ \begin{array}{l} \beta\\ b \end{array} \right] = \left[ \begin{array}{l} X'y \\ F'y \end{array} \right] $$ You can compare the system of equations above with the one given, for example, here https://en.wikipedia.org/wiki/Mixed_model (estimation section). The variance components are $\sigma^2 = var(e)$ and $\tau^2 = var(b)$, and $k = \sigma^{2}/\tau^{2}$. Why you have to separate the fixed and random effects this way: you will notice that in Henderson's mixed model equations the random effects are also "penalized" (the $G^{-1}$ term). What the random effects distribution is in this case: we assume that $b \sim N(0, \tau^{2} I)$ and $e \sim N(0, \sigma^{2} I)$. I hope that my answer helps a bit and that I got the notation correct. Edit Comment: why does the TPF part need to be penalized? As usual, the penalization controls the trade-off between smoothness and data fitting (see the plot below, in which I smooth the same data with fifteen 2nd-degree TPF bases and different levels of the k-parameter). This is true for all penalized smoothing techniques. Why do we do all this? What makes the mixed-effect-model formulation convenient is that the model (including the optimal amount of smoothing) can be computed using standard LMM routines (below I use nlme; note that I assume you have a function tpf_bases to compute the truncated power function bases). 
library(nlme)  # for lme()

# Simulate some data
n = 30
x = seq(-0, 2*pi, len = n)
ys = 2 * sin(x)
y = rnorm(n, ys, 0.5)

# Create bases (tpf_bases is assumed to return the X and Z design matrices)
Bs = tpf_bases(x, ndx = 10, deg = 2)
X = Bs$X
Z = Bs$Z

# Organize for lme
dat = data.frame(X1 = X[, 2], X2 = X[, 3], y = y)
dat$Z = Z
dat$all = (1:n) * 0 + 1

# Fit lme
fit = lme(y ~ X1 + X2, random = list(all = pdIdent( ~ Z - 1)), data = dat)

# Extract coefficients & get fit
beta.hat = fit$coef$fixed
b.hat = unlist(fit$coef$random)
f.hat = X %*% beta.hat + Z %*% b.hat

# Plot results
plot(x, y, main = "LME-based optimal fit")
lines(x, f.hat, col = 'red')
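For readers without a tpf_bases helper, here is a hedged Python/NumPy sketch of the same penalized normal equations for a fixed penalty k (the LMM route above is what estimates k automatically; the knot placement and penalty value here are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
x = np.linspace(0, 2 * np.pi, n)
y = 2 * np.sin(x) + rng.normal(0, 0.5, n)

# Polynomial part X and truncated power basis F: (x - kappa)_+^2
knots = np.linspace(x.min(), x.max(), 12)[1:-1]   # 10 interior knots
X = np.column_stack([np.ones(n), x, x ** 2])
F = np.maximum(x[:, None] - knots[None, :], 0.0) ** 2

k = 1.0                                 # fixed penalty (illustrative)
C = np.column_stack([X, F])
# Penalty applies only to the b-block of coefficients
P = np.zeros((C.shape[1], C.shape[1]))
P[X.shape[1]:, X.shape[1]:] = k * np.eye(F.shape[1])

# Solve [X'X X'F; F'X F'F + kI][beta; b] = [X'y; F'y]
coef = np.linalg.solve(C.T @ C + P, C.T @ y)
f_hat = C @ coef
```

The unpenalized polynomial block plays the role of the fixed effects, the penalized truncated-power block the role of the random effects.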
$cor(B_1,Y) > cor(B_2,Y) > 0$ but $cor(A + B_1, A+Y) < cor(A + B_2, A+Y)$. Is this possible?
Because correlation tells you nothing about the magnitudes of variables, you can reverse their relative order by adjusting the magnitudes suitably. Here, for instance, is a scatterplot matrix of some $(Y, B_1, B_2)$ data: Clearly $Y$ is more highly correlated with $B_1$ than with $B_2.$ To help us appreciate the variation in magnitudes, here are the same data shown using common scales on all axes: The correlation coefficients between $Y$ and the $B_i$ are $0.88\gt 0.67.$ Choosing $A=Y,$ here is a scatterplot matrix of the new variables also on common scales: Here's some detail: The correlation coefficients between $A+Y$ and the $A+B_i$ are $0.944 \lt 0.996:$ now the latter is greater than the former, reversing the original relation. If you would like to experiment with similar datasets, here is the R code used to generate these, along with computations of the correlations. Know that runif generates a specified number of iid uniform variates within the range of values specified by its second and third arguments; all arithmetic operations are vector operations (vector addition and scalar multiplication).

n <- 1e2
Y <- runif(n, 1, 2)
B.1 <- 2 * Y + runif(n, -1/2, 1/2)
B.2 <- (Y + runif(n, -1/2, 1/2)) / 10
A <- Y
cor(cbind(Y, B.1, B.2))
cor(cbind(A+Y, A+B.1, A+B.2))
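The same construction can be replicated in Python (my own sketch with NumPy; the seed and resulting sample correlations are mine, not the original figures):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
Y = rng.uniform(1, 2, n)
B1 = 2 * Y + rng.uniform(-0.5, 0.5, n)      # large magnitude, high cor with Y
B2 = (Y + rng.uniform(-0.5, 0.5, n)) / 10   # small magnitude, lower cor
A = Y

def cor(u, v):
    return np.corrcoef(u, v)[0, 1]

before = (cor(B1, Y), cor(B2, Y))           # B1 wins
after = (cor(A + B1, A + Y), cor(A + B2, A + Y))  # B2 wins
```

Because $A+B_2$ is dominated by $A=Y$, it tracks $A+Y$ almost perfectly, flipping the ordering.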
Why is regularization used only in training but not in testing?
I think there is a slight confusion. What the author means is that during testing we focus on the MSE, as this is what we evaluate performance on. The MSE plus the regularisation penalty is what we use to fit the model. We are using the "same model"; it is just that what we measure during training and during testing is not the same. During training we evaluate the goodness-of-fit on known data while penalising model complexity; during testing we evaluate goodness-of-fit on unknown data and assume that this performance generalises further.
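A minimal sketch of the distinction (my own NumPy example, assuming a ridge penalty): the penalty term appears only in the training objective, while the held-out evaluation is plain MSE.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 5, 1.0
X = rng.normal(size=(n, p))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
y = X @ w_true + rng.normal(0, 0.1, n)

# Training: minimise MSE + lam * ||w||^2 (closed-form ridge solution)
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Testing: plain MSE on fresh data -- no penalty term appears here
X_test = rng.normal(size=(n, p))
y_test = X_test @ w_true + rng.normal(0, 0.1, n)
test_mse = np.mean((y_test - X_test @ w_hat) ** 2)
```

The fitted coefficients carry the effect of the penalty, so regularisation still influences test performance even though the test metric contains no penalty term.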
Linear Mixed Effects Model Variances
The terms within-individual variance and among-individual variance are not commonly found in the mixed effects model literature. They arise more commonly in the ANOVA literature, and rather than "among", the usual term is "between". Total variance is partitioned into that which is attributable to differences within individuals, for example the natural variation that occurs in the measurement of blood pressure of a person during a day, and that which is attributable to differences between (or among) individuals. Some people have generally different blood pressure than others, but each person's blood pressure also varies throughout the day. A repeated measures ANOVA can be formulated as a mixed effects model. In the case of repeated measures within individuals, a random intercepts model will estimate a variance at the individual level, which will be the variance of $b$ in your notation above, while the residual variance is at the measurement level, and is the variance of $\varepsilon$ in your model. The former is between (among) and the latter is within.
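The blood-pressure partition can be illustrated with a quick simulation (a Python sketch of my own, with invented variance components):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_b, sigma_e = 2.0, 1.0        # between- and within-person SDs
n_people, n_obs = 2000, 5          # 5 readings per person

b = rng.normal(0, sigma_b, size=(n_people, 1))          # person effects
y = 120 + b + rng.normal(0, sigma_e, size=(n_people, n_obs))

within = y.var(axis=1, ddof=1).mean()                   # ~ sigma_e^2 = 1
# Var of person means = sigma_b^2 + sigma_e^2 / n_obs, so:
between = y.mean(axis=1).var(ddof=1) - within / n_obs   # ~ sigma_b^2 = 4
```

The two recovered components correspond to $\operatorname{var}(\varepsilon)$ (within) and $\operatorname{var}(b)$ (between) in the notation of the question.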
Linear Mixed Effects Model Variances
The following pieces of your model are fixed and known: $X_i$ and $Z_i$. The vector $\beta$ is fixed but unknown. You have two random pieces in your model, $b_i$ (I would expect this to be $b$, i.e. shared for all $i$, and I'm going to use $b$ below), and $\epsilon_i$. I suspect the setup of your problem also specifies that $b$ is independent of the $\epsilon_i$. To see where the variance expression comes from, just plug your model formula in: \begin{align} \text{var}(Y_i|X_i) &= \text{var}(X_i\beta + Z_ib + \epsilon_i)\\ \text{(a)}&=\text{var}(Z_ib + \epsilon_i)\\ \text{(b)}&=\text{var}(Z_ib) + \text{var}(\epsilon_i)\\ \text{(c)}&=Z_iDZ_i^T + R_i(\gamma), \end{align} where (a) is because $X_i\beta$ is constant, (b) is because $b$ and $\epsilon_i$ are independent, and (c) comes from looking at the distributions of $b$ and $\epsilon_i$. Putting the "among-individual" and "within-individual" language into context is problem-dependent. For example, mixed-effect models can be used to represent penalized splines for scatterplot smoothing, in which case there are no individuals at all: the random effects are coefficients of basis functions. For a (totally fictional) example where the language makes sense, suppose an experimenter is modeling sodium concentration $Y_i$ in pond water, and they are using potassium concentration as a covariate $z_i$. They sample 6 casks (say, 1 gallon each) of water from different locations in the pond, and from each cask, test 5 small 1 milliliter subsamples. We have $i=1, \dots, 30$ samples, and each sample belongs to some cask $k(i) \in \{1, \dots, 6\}$. We might model this as a mean, a linear part depending on potassium, a random adjustment to the intercept depending on the cask, and a random measurement error. We could write $$Y_i = [1\,\,\,z_i]^T\beta + Z_i^Tb + \epsilon_i,$$ where $Z_i$ is a vector of zeros except for a one at entry $k(i)$: it picks out the $k(i)$ entry of $b$. 
We assume $b \sim N(0,\sigma_b^2 I)$ and $\epsilon_i \sim N(0,\sigma_\epsilon^2)$. Our random vector $b$ has six entries: one for each cask, and they represent the variability between casks. You'll find that $\text{var}(Y_i) = \sigma_b^2 + \sigma_\epsilon^2$, and that $\text{cov}(Y_i,Y_j) = \sigma_b^2$ if observations $i$ and $j$ came from the same cask, but $\text{cov}(Y_i,Y_j) = 0$ if they came from different casks.
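The implied covariance structure of the cask example can be checked directly (a NumPy sketch; the variance component values are illustrative numbers of my own):

```python
import numpy as np

sigma_b2, sigma_e2 = 2.0, 0.5        # illustrative variance components
n_casks, n_sub = 6, 5
n = n_casks * n_sub

# Z[i, k] = 1 iff sample i came from cask k
cask = np.repeat(np.arange(n_casks), n_sub)
Z = np.zeros((n, n_casks))
Z[np.arange(n), cask] = 1.0

# Marginal covariance implied by the model: var(Y) = sigma_b^2 Z Z' + sigma_e^2 I
V = sigma_b2 * Z @ Z.T + sigma_e2 * np.eye(n)
```

The diagonal entries are $\sigma_b^2+\sigma_\epsilon^2$, entries for two subsamples from the same cask are $\sigma_b^2$, and entries for different casks are zero, exactly as stated above.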
The ergodicity problem in economics
If your friend is talking about the expected wealth he's wrong. The point of the figure is that the expected value of gambling is very much not indicative of what happens typically for longer trajectories, so your simulation just backs that up. We can already see this pattern if we consider gambling twice. With probability one half, we gain 50% in value, the other outcome is losing 40%. Starting with wealth $1$ if we win twice, we have $2.25$. But the other possible outcomes are $0.9$ if we win once and $0.36$ if we lose twice. The expected value is $1.1025$, larger than $1$. We lose money 3 times out of 4, so typically we lose, but because we win big enough if we do win, the expected change in wealth is positive. In general we have $$E_{\text{Gamble}}[W_{t+1} | W_{t} = w]= w + 0.5 \cdot 0.5 \cdot w - 0.5 \cdot 0.4 \cdot w = 1.05w$$ Iterating that forward gives $$E_{\text{Gamble}}[W_{h} | W_{0} = 1] = 1.05^h$$ or $$\log_{10} E[.] = h \log_{10}1.05$$ That's the blue line in the plot.
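The blue line can also be checked exactly: with $k$ wins in $h$ gambles the wealth is $1.5^k\,0.6^{h-k}$ and $k\sim\text{Binomial}(h,1/2)$, so by the binomial theorem the expectation collapses to $(0.5\cdot 1.5 + 0.5\cdot 0.6)^h = 1.05^h$, while the typical trajectory with $k=h/2$ wins shrinks like $(1.5\cdot 0.6)^{h/2}=0.9^{h/2}\to 0$. A small Python check (my own sketch):

```python
from math import comb

h = 100
# Exact expectation over the Binomial(h, 1/2) number of wins
ev = sum(comb(h, k) * 0.5 ** h * 1.5 ** k * 0.6 ** (h - k)
         for k in range(h + 1))          # equals 1.05 ** h

# Typical trajectory: h/2 wins, h/2 losses
typical = (1.5 * 0.6) ** (h / 2)         # = 0.9 ** (h / 2), near zero
```

This is the ergodicity point in miniature: the ensemble average grows while almost every individual trajectory decays.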
Why can't standard conditional language models be trained left-to-right *and* right-to-left?
In a standard language model (LM), you're trying to predict the probability of the next word given the past. The past could be a fixed window size of $n$ words, as in your example, or an indefinitely long window size, as in the case of RNNs (and their variants). Without loss of generality, let's stick with the RNN as the LM since that is a relatively common choice. Mathematically, each output $\hat{y}^{(t+1)} \in \Delta^{|V|}$ of an RNN specifies a conditional distribution $p(w^{(t+1)}| x^{(t)}, \ldots, x^{(1)})$ over $|V|$ words in the vocabulary $V$ for the next word $w^{(t+1)}$ given embeddings $x^{(s)}$ for $s = 1, \ldots, t$. This can be expressed as: $$ h^{(t)} = f(x^{(t)}, h^{(t-1)})\\ \hat{y}^{(t+1)} = p(w^{(t+1)} | x^{(t)}, \ldots, x^{(1)}) = g(h^{(t)}) = \mbox{softmax}(W_oh^{(t)} + b_o) $$ where $f$ and $g$ are recurrent and softmax neural network layers, respectively. This is a unidirectional LM since the probability of the next word is only dependent on the past. Suppose we could instead create a "bidirectional LM" using a bidirectional RNN. We would then have: $$ \overrightarrow{h^{(t)}} = \overrightarrow{f}(x^{(t)}, \overrightarrow{h^{(t-1)}})\\ \overleftarrow{h^{(t)}} = \overleftarrow{f}(x^{(t)}, \overleftarrow{h^{(t+1)}})\\ h^{(t)} = [\overrightarrow{h^{(t)}}; \overleftarrow{h^{(t)}}]\\ \hat{y}^{(t+1)} = p(w^{(t+1)} | x^{(T)}, \ldots, x^{(1)}) = g(h^{(t)}) = \mbox{softmax}(W_oh^{(t)} + b_o) $$ where $T$ is the length of the entire sequence. The problem here is that you're training the second RNN from back to front. At time $t$, the backward RNN already knows what word should come at time $t+1$ because $$ \overleftarrow{h^{(t)}} = \overleftarrow{f}(x^{(t)}, \overleftarrow{h^{(t+1)}}) = \overleftarrow{f}(x^{(t)}, \overleftarrow{f}(x^{(t+1)}, h^{(t+2)})) $$ and the embedding of $w^{(t+1)} = x^{(t+1)}$. Thus, the word is able to "see itself." 
This is illustrated in the example below, in which the word "runs" is already seen by the backward RNN (dashed green arrow) before the LM tries to predict it (black arrow). To remedy this, BERT includes MASK tokens, which mask certain words in the input sequence from being seen by the LM. If the word "runs" were masked, then there would be no way for the LM to know what it is, even if words that came after it (e.g., "quickly", not shown in this example) are present. This is because the only thing that would be revealed is the presence of a MASK token, not the actual word. In addition, a major difference between BERT and the LM shown here is that BERT is a Transformer-based architecture, but conceptually, the concept of a word "seeing itself" is the same. Note that these MASK tokens could be applied in this LM as well, but the Transformer model just has so much more going on, which is why it performs so well compared to RNNs.
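The leakage can be demonstrated with a toy backward RNN in NumPy (my own sketch with random weights, not BERT's actual architecture): perturbing only the word at position $t+1$ changes the backward state at position $t$, which is exactly the state a naive bidirectional LM would use to predict that word.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                  # embedding / hidden size
Wx = rng.normal(size=(d, d))
Wh = rng.normal(size=(d, d))

def backward_states(x):
    """Toy backward RNN: h_t = tanh(Wx x_t + Wh h_{t+1})."""
    h = np.zeros(d)
    states = [None] * len(x)
    for t in range(len(x) - 1, -1, -1):
        h = np.tanh(Wx @ x[t] + Wh @ h)
        states[t] = h
    return states

x = rng.normal(size=(5, d))            # embeddings of a 5-word sentence
x_perturbed = x.copy()
x_perturbed[3] += 1.0                  # change only the word at t+1 = 3

# Backward state at t = 2 already depends on the word at t+1 = 3
leak = np.abs(backward_states(x)[2] - backward_states(x_perturbed)[2]).max()
```

A nonzero `leak` confirms the word "sees itself"; masking the token (as BERT does) is what breaks this dependence.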
Prove that $\frac{(n-2)s^2}{\sigma^2}\sim \chi^{2}_{n-2}$
Without using the orthogonal change of variables in the linked answer, you can work under the general matrix setup of multiple linear regression. Here we are concerned with ordinary least squares. The key result to be used here is the Fisher-Cochran theorem on distribution of quadratic forms (e.g. see page 185-186, 2nd edition of Linear Statistical Inference and Its Applications by Rao). Suppose the model is $$y_i=\beta_0+\beta_1x_i+\varepsilon_i$$ Define the vectors $y=(y_1,\ldots,y_n)$, $\beta=(\beta_0,\beta_1)$ and $\varepsilon=(\varepsilon_1,\ldots,\varepsilon_n)$. So in matrix form you have $$y=X\beta+\varepsilon\,,$$ where $X_{n\times k}$ with rank $k$ (here $k=2$) is the matrix of covariates (fixed) and $\varepsilon\sim N(0,\sigma^2 I_n)$. By standard algebra of least squares, $$\hat y=X\hat\beta=X(X'X)^{-1}X'y=Hy\,,$$ where $H=X(X'X)^{-1}X'$ is the hat matrix. Note that $H$ is symmetric and idempotent. The residual vector is \begin{align} e=y-\hat y&=(I_n-H)y \\&=(I_n-H)(X\beta+\varepsilon) \\&=(I_n-H)\varepsilon \end{align} Now $I_n-H$ is symmetric and idempotent because $H$ is so. So the residual sum of squares is \begin{align} e'e&=y'(I_n-H)'(I_n-H)y \\&=y'(I_n-H)y \\&=\varepsilon'(I_n-H)\varepsilon \end{align} Therefore $$\frac{(n-2)s^2}{\sigma^2}=\frac{e'e}{\sigma^2}=\left(\frac{\varepsilon}{\sigma}\right)'(I_n-H)\frac{\varepsilon}{\sigma}$$ By a corollary of the Fisher-Cochran theorem, the above has a chi-square distribution because $(I_n-H)$ is idempotent, the degrees of freedom being $\operatorname{rank}(I_n-H)$. As rank of an idempotent matrix equals its trace, $\operatorname{rank}(I_n-H)=\operatorname{tr}(I_n-H)=n-\operatorname{tr}(H)$. And by properties of trace, $\operatorname{tr}(H)=\operatorname{tr}(X(X'X)^{-1}X')=\operatorname{tr}(X'X(X'X)^{-1})=\operatorname{tr}(I_k)=k$. Hence the degrees of freedom are $\operatorname{rank}(I_n-H)=n-k=n-2$, so that $\frac{(n-2)s^2}{\sigma^2}\sim\chi^2_{n-2}$ as claimed.
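The trace and idempotency facts are easy to verify numerically, and the conclusion $E[e'e/\sigma^2]=n-2$ can be spot-checked by Monte Carlo (a NumPy sketch of my own for a simple regression with $k=2$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + slope

H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix X (X'X)^{-1} X'
M = np.eye(n) - H                       # symmetric, idempotent, rank n - k

# Monte Carlo: eps' M eps for standard normal eps has mean n - k
eps = rng.normal(size=(2000, n))
ss = np.einsum('ij,jk,ik->i', eps, M, eps)
mean_ss = ss.mean()                     # close to n - 2 = 48
```

The quadratic forms `ss` are draws from a $\chi^2_{n-2}$ distribution, matching the theorem.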
48,340
Understanding glm and link functions: how to generate data?
Here's how to generate from a GLM (the order of some items can be moved):

1. Choose your family and link function.
2. Choose your predictors (IVs) for each observation you want to simulate.
3. Choose your coefficients.
4. Evaluate the linear predictor for each observation.
5. Transform by the inverse of the link function to get the conditional mean for each observation.
6. Choose any other parameters.
7. Sample the distribution at each observation, for which you now have all the parameters.

Let's see how to simulate a simple Gamma GLM with inverse link, following those steps:

1. Choose your family and link function. (Gamma, inverse)
2. Choose your predictors (IVs) for each observation you want to simulate. ($x$)
3. Choose your coefficients. (choose a specific $\beta_0$ and $\beta_1$ in this case)
4. Evaluate the linear predictor for each observation. ($\eta_i=\beta_0+\beta_1 x_i,\: i=1,...,n$)
5. Transform by the inverse of the link function to get the conditional mean for each observation. (the inverse of $\eta_i=1/\mu_i$ is $\mu_i=1/\eta_i$)
6. Choose any other parameters. (choose the shape parameter)
7. Sample the distribution at each observation, for which you now have all the parameters. (e.g. y = rgamma(length(x), shape, scale=mu/shape), noting that scale is a vector of values here)
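The same steps can be sketched in Python using only the standard library; `random.gammavariate(shape, scale)` has mean shape*scale, so scale = mu/shape gives conditional mean mu, exactly as with rgamma above. The coefficients, shape, and predictor values below are arbitrary illustrative choices:

```python
import random

random.seed(42)

# Step 3: arbitrary illustrative coefficients for an inverse-link Gamma GLM
b0, b1, shape = 1.0, 0.5, 3.0

# Step 2: predictor values for each observation (kept positive so eta > 0)
x = [0.5 + 0.1 * i for i in range(100)]

# Steps 4-5: linear predictor, then invert the link (inverse link: mu = 1/eta)
eta = [b0 + b1 * xi for xi in x]
mu = [1.0 / e for e in eta]

# Step 7: sample each observation; Gamma(shape, scale) has mean shape*scale,
# so scale = mu/shape makes the conditional mean equal to mu
y = [random.gammavariate(shape, m / shape) for m in mu]
```

The sample mean of y should then sit close to the mean of the conditional means mu.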
48,341
Understanding glm and link functions: how to generate data?
It matters whether the error term is included inside the exp() call or not. That's the big issue in this code. Consider this:

# first, set the random seed, so that everything is reproducible
set.seed(5671)
N = 10000
e = rnorm(N, 0, 1)
x1 = runif(N, 10, 30)
y1 = exp(5*x1 + 10 + e)    # multiplicative error on the original scale
y2 = exp(5*x1 + 10) + e    # additive error around a log-linear mean
mod1.1 = glm(y1 ~ x1, family = gaussian(link = "log"))
mod1.2 = lm(log(y1) ~ x1)
mod2.1 = glm(y2 ~ x1, family = gaussian(link = "log"))
mod2.2 = lm(log(y2) ~ x1)

Log-transforming the response (mod1.2) matches how y1 was generated, since its error is multiplicative on the original scale; the log-link GLM (mod2.1) matches y2, whose error is additive around a log-linear conditional mean.
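The distinction can be seen without fitting anything: with the error inside exp(), log(y) is exactly linear in x plus the error, while with the error outside, only the conditional mean of y is log-linear. A small Python analogue of the same data-generating choice (hypothetical small coefficients so exp() stays in a modest range):

```python
import math
import random

random.seed(1)

# hypothetical small coefficients, unlike the R example's 5*x1 + 10
b0, b1 = 0.5, 0.3
x = [random.uniform(0, 2) for _ in range(5)]
e = [random.gauss(0, 0.1) for _ in x]

# multiplicative error: log(y1) = b0 + b1*x + e exactly
y1 = [math.exp(b0 + b1 * xi + ei) for xi, ei in zip(x, e)]
# additive error: E[y2 | x] = exp(b0 + b1*x), but log(y2) is NOT linear in x
y2 = [math.exp(b0 + b1 * xi) + ei for xi, ei in zip(x, e)]

# the error is recovered exactly from y1 on the log scale
recovered = [math.log(v) - (b0 + b1 * xi) for v, xi in zip(y1, x)]
```

That exact recovery is why lm(log(y1) ~ x1) is the right fit for y1, and why the log-link GLM, which models the mean rather than the transformed response, is the right fit for y2.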
48,342
Modelling longitudinal data with crossed random effects
First, note that the simulated data above results in a singular model fit because there is no variation in the response among any of the random factors. This can be overcome with a simple modification:

library(lme4)
set.seed(15)
participant <- rep(1:40, each = 30)
session <- rep(rep(1:3, each = 10), times = 40)
item <- rep(1:10, times = 120)
type <- rep(1:2, times = 600)
# score <- rnorm(1200)                       ##### This line removed
score <- participant + item + rnorm(1200)    ##### This line added
data <- cbind(participant, session, item, type, score)
data <- as.data.frame(data)
data$participant <- factor(data$participant)
data$session <- factor(data$session)
data$item <- factor(data$item)
data$type <- factor(data$type)

m <- lmer(score ~ type * session + (1 + type | participant / session) + (1 | item / session), data = data)

Second, note that the model m above does not converge. This is because the fixed effect session is also included as a random grouping factor, which does not make sense. A more sensible alternative is:

m0 <- lmer(score ~ type * session + (1 | participant) + (1 | item), data = data)

where I have also removed the random slope/coefficient for type by participant, which also does not make sense given that the data are simple random draws; that is, there is no systematic association of score and type for each participant.

> summary(m0)
Linear mixed model fit by REML ['lmerMod']
Formula: score ~ type * session + (1 | participant) + (1 | item)
   Data: data

REML criterion at convergence: 3828.3

Scaled residuals:
    Min      1Q  Median      3Q     Max
-4.2095 -0.6430  0.0437  0.6908  3.1569

Random effects:
 Groups      Name        Variance Std.Dev.
 participant (Intercept) 136.623  11.689
 item        (Intercept)   9.856   3.139
 Residual                  1.024   1.012
Number of obs: 1200, groups:  participant, 40; item, 10

Fixed effects:
                Estimate Std. Error t value
(Intercept)    25.557744   2.322048  11.007
type2           0.988240   1.988126   0.497
session2       -0.025759   0.101185  -0.255
session3       -0.042207   0.101185  -0.417
type2:session2 -0.028411   0.143098  -0.199
type2:session3 -0.002558   0.143098  -0.018

Correlation of Fixed Effects:
            (Intr) type2  sessn2 sessn3 typ2:2
type2       -0.428
session2    -0.022  0.025
session3    -0.022  0.025  0.500
type2:sssn2  0.015 -0.036 -0.707 -0.354
type2:sssn3  0.015 -0.036 -0.354 -0.707  0.500

This model estimates the fixed effect of stimulus type, session number, and their interaction, while controlling for crossed random effects for participant and item, as requested. Random slopes for type, session and their interaction could potentially be included, provided that the data support such a structure (the simulated data do not) and provided these are indicated by the underlying theory of the data-generating process. It is not generally a good idea to begin with a maximal random effects structure.
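The reason the original rnorm-only simulation is singular can be seen directly: with pure noise, participant means differ only by sampling error, whereas adding the participant id to the mean creates real between-participant variation. A small Python sketch of that contrast (mirroring the structure of the R simulation, not an lme4 fit):

```python
import random
from statistics import mean, pvariance

random.seed(15)
n_participants, per = 40, 30

# original simulation: pure noise, no true participant effect
noise_scores = [[random.gauss(0, 1) for _ in range(per)]
                for _ in range(n_participants)]
# modified simulation: participant id enters the mean, as in the edited R code
effect_scores = [[p + random.gauss(0, 1) for _ in range(per)]
                 for p in range(1, n_participants + 1)]

# between-participant variance of the group means:
# tiny (about 1/30, sampling noise only) in the first case, large in the second
var_noise = pvariance([mean(s) for s in noise_scores])
var_effect = pvariance([mean(s) for s in effect_scores])
```

A random-intercept variance estimated from the first dataset is driven to zero, which is exactly the singular fit lmer reports.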
48,343
how to scale the density plot for my histogram
The area under a true density function is 1. So unless the total area of the bars in the histogram is also 1, you cannot make a useful match between a true density function and the histogram.

Using actual density functions. A correct (and perhaps the easiest) course of action is to do what you explicitly say (without giving a reason) that you do not want to do: put the histogram on a density scale and then superimpose either a density estimator based on the data or the density function of the hypothetical distribution from which the data in the histogram were sampled. If you do this, the vertical scale of the histogram is automatically the correct scale for the densities. Below is a histogram of data from a mixture of normal distributions, simulated in R, along with a kernel density estimator (KDE) of the data (red), and the distribution used to simulate the data (dotted). [With sample size as large as $n=6000$ you can expect a good match between the histogram and the KDE, even if not always as good as shown here.] The relevant R code is shown below.

set.seed(710)
mix = sample(c(-.6, 0, .6), 6000, rep=T, p=c(.1,.8,.1))
x = rnorm(6000, mix, .15)
lbl = "Histogram of Data with KDE (red) and Population Density"
hist(x, prob=T, br=50, col="skyblue2", main=lbl)
lines(density(x), col="red")
curve(.1*dnorm(x,-.6,.15)+.8*dnorm(x,0,.15)+.1*dnorm(x,.6,.15),
      add=T, lty="dotted", lwd=3)

"Scaled density." If you insist on using a non-density function that imitates the shape of the density function, you can make a frequency histogram with the same bins as the plot above, then use the vertical scale to decide what constant multiple of the KDE or the population density gives the effect you want. [In that case you need to explain that the curve is not the density, but only suggests its shape.] For the figure below I multiplied the proper density function by a guess of 300, which seems to work OK. [The term "scaled density" is not widely used, as far as I know, and may make the procedure seem more legitimate than it is.]

hist(x, br=50, main="Frequency Histogram with Scaled Density Function")
curve(30*dnorm(x,-.6,.15)+240*dnorm(x,0,.15)+30*dnorm(x,.6,.15),
      add=T, lty="dotted", lwd=3)
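Rather than guessing the multiplier, note that the expected count in a histogram bin is approximately n * binwidth * f(x), so n * binwidth is the factor that turns a density curve into a frequency-scale curve. A sketch of that computation for this example, where the data span is an assumed rough value (the true span depends on the simulated sample):

```python
# expected count in a bin centered where the density is f(x):
#   count ~= n * binwidth * f(x)
# so the curve n * binwidth * f overlays a frequency histogram the way
# f itself overlays a density histogram
n = 6000
data_range = 2.0            # assumed approximate span of the simulated data
bins = 50
binwidth = data_range / bins
scale = n * binwidth        # about 240 under these assumed values
```

That lands near the eyeballed guess of 300 used above; the gap comes from the assumed span and from hist() choosing "pretty" bin edges rather than exactly 50 equal bins.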
48,344
What's the difference between "Artificial neuron" and "Perceptron"?
The perceptron is an early type of neural network for binary classification, with no hidden layers. It is a model of the form $$ y=\sigma(\mathbf w^T \mathbf x) $$ where $\sigma$ is the Heaviside step function. It can be trained using the perceptron algorithm. You could say that the perceptron is a neural network with a single neuron. Image from https://towardsdatascience.com/what-the-hell-is-perceptron-626217814f53
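The model and its training rule fit in a few lines of Python. This minimal sketch learns the AND function, a linearly separable toy problem on which the perceptron algorithm is guaranteed to converge (the learning rate and epoch count are arbitrary choices):

```python
def heaviside(z):
    # the step activation sigma from the formula above
    return 1 if z >= 0 else 0

def predict(w, x):
    # w[0] is the bias weight; x is augmented with a constant 1
    return heaviside(sum(wi * xi for wi, xi in zip(w, [1] + list(x))))

def train_perceptron(samples, lr=0.1, epochs=50):
    # perceptron learning rule: w <- w + lr * (y - yhat) * x
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in samples:
            err = y - predict(w, x)
            for i, xi in enumerate([1] + list(x)):
                w[i] += lr * err * xi
    return w

# AND gate: linearly separable, so the algorithm converges
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(data)
```

A multilayer network of such units with differentiable activations in place of the step function is what gradient-based training (backpropagation) requires; the hard threshold is why the perceptron needs its own learning rule.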
48,345
What is the meaning of generating data from a probabilistic model such as a naive bayes classifier?
Having the ability to generate data from the model may be useful for many reasons, e.g.:

- Simulate data from the model to judge whether its representation of reality is reasonable, and to conduct posterior predictive checks (compare the distribution of the simulated data with the empirical data).
- If you can generate data from the model, you can learn about the distribution of outcomes that are possible under the model; this is much richer information than a point estimate.
- If you want your model to make suggestions for users, sometimes producing a set of most likely guesses is better than returning the single "best" prediction (think of machine translation, or text autocomplete).
- You can simulate results from the model to check whether it is biased. For example, suppose you have a model that helps an HR department with recruitment; when you generate simulated results from the model, you may learn that some minorities are underrepresented in the simulated results, which tells you that there could be some kind of bias in the model against those minorities.

See also the closely related thread When to use simulations?
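The first point, a posterior predictive check, can be sketched in a few lines: fit a model, simulate replicated datasets from it, and compare a test statistic between simulations and data. This toy version plugs sample estimates into a normal model (the "observed" data are themselves simulated here, standing in for real measurements):

```python
import random
from statistics import mean, stdev

random.seed(7)

# "observed" data, standing in for real measurements
observed = [random.gauss(10, 2) for _ in range(200)]

# fit a trivial normal model by plugging in the sample mean and sd
mu_hat, sd_hat = mean(observed), stdev(observed)

# simulate replicated datasets from the fitted model and collect a test
# statistic (here the maximum) to compare against the observed one
sim_max = [max(random.gauss(mu_hat, sd_hat) for _ in range(len(observed)))
           for _ in range(500)]
obs_max = max(observed)

# crude predictive p-value: how often a simulated maximum exceeds the observed
p_pred = sum(m >= obs_max for m in sim_max) / len(sim_max)
```

A predictive p-value near 0 or 1 would suggest the model fails to reproduce that feature of the data; values in between are unremarkable.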
48,346
What is the meaning of generating data from a probabilistic model such as a naive bayes classifier?
[Naive] Bayes is a generative model, which means we can generate data from it if we want. In NB, we estimate the class prior $p(y)$ and the conditional $p(\mathbf{x}|y)$, where $\mathbf{x}$ is our feature vector and $y$ is the class variable. For example, we first pick a $y$, indicating the class, and then pick word(s) according to the probability distribution $p(\mathbf{x}|y)$.
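That two-step sampling can be made concrete with toy, made-up probabilities (the class names, vocabulary, and numbers below are all hypothetical):

```python
import random

random.seed(0)

# toy class prior and per-class word distributions (made-up numbers)
prior = {"sports": 0.6, "politics": 0.4}
word_probs = {
    "sports":   {"ball": 0.5, "team": 0.3, "vote": 0.2},
    "politics": {"ball": 0.1, "team": 0.2, "vote": 0.7},
}

def generate_document(n_words=5):
    # first pick a class y from the prior p(y) ...
    y = random.choices(list(prior), weights=list(prior.values()))[0]
    # ... then pick words independently from p(word | y); drawing them
    # independently is exactly the "naive" conditional-independence assumption
    vocab = list(word_probs[y])
    weights = [word_probs[y][w] for w in vocab]
    words = random.choices(vocab, weights=weights, k=n_words)
    return y, words

y, words = generate_document()
```

A discriminative classifier such as logistic regression models only $p(y|\mathbf{x})$ and offers no such recipe for producing $\mathbf{x}$.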
48,347
Statistical power of t-test in mildly skewed dataset
I will address the computation of the power of a one-sample t test. Suppose we wish to use $n = 20$ observations from a normal distribution to test $H_0: \mu = 110$ against $H_a: \mu < 110$ at the 5% level. Then we will reject $H_0$ when the t statistic $T = \frac{\bar X - 110}{S/\sqrt{20}} < -1.729,$ where $S$ is the sample standard deviation and $-1.729$ cuts probability $0.05$ from the lower tail of the distribution $\mathsf{T}(\nu = n-1 = 19).$ [Computation in R.]

c = qt(.05, 19);  c
[1] -1.729133

In order to do a power computation, we need to make a guess at the unknown population standard deviation $\sigma$ and to choose a particular value $\mu_a < \mu_0 = 110$ for the computation. If we use $\sigma = 15$ and $\mu_a = 100,$ then we can run a simulation based on many samples of size $n = 20$ from $\mathsf{Norm}(\mu_a = 100, \sigma = 15)$ and see in what proportion of the samples we reject with $T < c = -1.729.$

set.seed(1776);  m = 10^6;  n = 20;  mu.0 = 110;  mu.a = 100;  sg = 15
t = replicate(m, t.test(rnorm(n, mu.a, sg), mu = mu.0, alt="less")$stat)
mean(t <= c)
[1] 0.890277

So the power is about 89%. Obviously, in this simulation the t statistics computed by t.test and captured using $-notation do not have Student's t distribution with 19 degrees of freedom.

lbl = "Simulated Alternative Dist'n of T"
hist(t, prob=T, br=30, col="skyblue2", main=lbl)
abline(v = -1.729, col="red", lwd=2, lty="dotted")
curve(dt(x, 19, -2.9814), add=T, lwd=2)

The actual distribution of the t statistic $T$ is a noncentral t distribution with 19 degrees of freedom and noncentrality parameter $\lambda = \sqrt{n}(\mu_a - \mu_0)/\sigma = -2.9814,$ the density function of which is plotted above.

lam = sqrt(20)*(-10)/15;  lam
[1] -2.981424

This means that we can find the exact power $0.8902$ in R, without simulation, using the code below:

pt(c, 19, lam)
[1] 0.8902459

Thus by using the noncentral t distribution, you can make a power curve, showing the power against a sequence of alternative values $\mu_a.$ Also, by trying various sample sizes, you can find the $n$ required to achieve the desired power against a particular alternative.

However, if your data are not normal, then neither the regular nor the noncentral t distribution is applicable. It may be difficult to find a formula for the exact power. Nevertheless, you can use the simulation method with appropriate distributions to find approximate power. Similar simulation methods can be used to investigate the power of a nonparametric test.

Addendum. (1) Using a Wilcoxon signed rank test instead of a t test with 20 observations from a normal population. Suppose you worry that your data may not be normal and use a one-sample Wilcoxon signed rank test instead of a t test. What happens to the power? We use the same null and alternative hypotheses as above, and seek power against the alternative that the distribution is centered at 100. We don't seek a formula in terms of the distribution of the Wilcoxon test statistic, so we use simulation. Specifically, we use the implementation of the Wilcoxon test in R, capture its P-value at each iteration, and express rejection in terms of P-values below $0.05.$ The power is 87.4%, compared with power 89.0% for the t test.

set.seed(2019)
pv = replicate(10^6, wilcox.test(rnorm(20,100,15), mu=110, alt="l")$p.val)
mean(pv < .05)
[1] 0.874229

(2) Using a Wilcoxon signed rank test instead of a t test when 20 observations are from a moderately right-skewed distribution. A slightly, but noticeably, skewed distribution results from taking the third power of a normal sample with positive elements. Very roughly speaking, the cube of observations from $\mathsf{Norm}(\mu=4.63, \sigma=0.232)$ has $\mu \approx 100, \sigma \approx 15.$ Also the cube of observations from $\mathsf{Norm}(\mu=4.783, \sigma=0.217)$ has $\mu \approx 110, \sigma \approx 15.$ The following simulation illustrates that with $H_0: \mu = 110$ and $H_a: \mu < 110,$ a t test at the 5% level, using such slightly skewed data, has power about 88% against the alternative $\mu_a = 100.$ Similarly, a Wilcoxon signed rank test also has power about 88%. So for 20 observations from the moderately skewed population mentioned above, the t test loses its slight power advantage over the Wilcoxon test.

set.seed(705)
pv = replicate(10^6, t.test(rnorm(20,4.63,.232)^3, mu=110, alt="l")$p.val)
mean(pv < .05)
[1] 0.877868

set.seed(705)
pv = replicate(10^6, wilcox.test(rnorm(20,4.63,.232)^3, mu=110, alt="l")$p.val)
mean(pv < .05)
[1] 0.877091

The first iteration in each simulation is shown below:

set.seed(705)
x = rnorm(20, 4.63, .232)^3;  mean(x);  sd(x)
[1] 104.0221
[1] 15.34042

t.test(x, mu=110, alt="less")

        One Sample t-test

data:  x
t = -1.7427, df = 19, p-value = 0.04877
alternative hypothesis: true mean is less than 110
95 percent confidence interval:
     -Inf 109.9534
sample estimates:
mean of x
 104.0221

wilcox.test(x, mu=110, alt="less")

        Wilcoxon signed rank test

data:  x
V = 61, p-value = 0.0527
alternative hypothesis: true location is less than 110
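The first power simulation can also be run in plain Python, computing the t statistic by hand and comparing it with the critical value $-1.729$ from the text. This sketch uses far fewer replications than the R run, so it only approximates the exact power 0.890:

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(1776)

n, mu0, mu_a, sigma = 20, 110, 100, 15
crit = -1.729     # lower 5% point of t with 19 df, as computed above
m = 4000          # fewer replications than the 10^6 used in R, for speed

def t_stat(sample, mu):
    # one-sample t statistic against hypothesized mean mu
    return (mean(sample) - mu) / (stdev(sample) / sqrt(len(sample)))

rejections = sum(
    t_stat([random.gauss(mu_a, sigma) for _ in range(n)], mu0) < crit
    for _ in range(m)
)
power = rejections / m   # should land near the exact value 0.890
```

With only 4000 replications the Monte Carlo standard error is about 0.005, so estimates a percentage point or so away from 0.890 are expected.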
Statistical power of t-test in mildly skewed dataset
I will address the computation of the power of a one-sample t test. Suppose we wish to use $n = 20$ observations from a normal distribution to test $H_0: \mu = 110$ against $H_a: \mu < 110$ at the 5%
Statistical power of t-test in mildly skewed dataset I will address the computation of the power of a one-sample t test. Suppose we wish to use $n = 20$ observations from a normal distribution to test $H_0: \mu = 110$ against $H_a: \mu < 110$ at the 5% level. Then we will reject $H_0$ when the t statistic $T = \frac{\bar X - 110}{S/\sqrt{20}} < -1.729,$ where $S$ is the sample standard deviation and $-1.729$ cuts probability $0.05$ from the lower tail of the distribution $\mathsf{T}(\nu = n-1 = 19).$ [Computation in R.] c = qt(.05, 19); c [1] -1.729133 In order to do a power computation, we need to make a guess at the unknown sample standard deviation $\sigma$ and to choose a particular value $\mu_a < \mu_0 = 100$ for the computation. If we use $\sigma = 15$ and $\mu_a = 100,$ then we can run a simulation based on many samples of size $n = 10$ from $\mathsf{Norm}(\mu_a = 100, \sigma = 15)$ and see in what proportion of the samples we reject with $T < c = -1.729.$ set.seed(1776); m=10^6; n=20; mu.0=110; mu.a=100; sg=15 t = replicate(m, t.test(rnorm(n, mu.a, sg), mu = mu.0, alt="less")$stat) mean(t <= c) [1] 0.890277 So the power is about 89%. Obviously, in this simulation the t statistics computed by t.test and captured using $-notation do not have Student's t distribution with 19 degrees of freedom. lbl = "Simulated Alternative Dist'n of T" hist(t, prob=T, br=30, col="skyblue2", main=lbl) abline(v = -1.728, col="red", lwd=2, lty="dotted") curve(dt(x, 19, -2.9814), add=T, lwd=2) The actual distribution of the t statistic $T$ is a noncentral t distribution with 19 degrees of freedom and noncentrality parameter $\lambda = \sqrt{n}(\mu_0 - \mu_a)/\sigma = -2.9814,$ the density function of which is plotted above. 
lam = sqrt(20)*(-10)/15 = lam [1] -2.981424 This means that we can find the exact power $0.8902$ in R, without simulation, using the code below: pt(c, 19, lam) [1] 0.8902459 Thus by using the noncentral t distribution, you can make a power curve, showing the power against a sequence of alternative values $\mu_a.$ Also, by trying various sample sizes, you can find the $n$ required to achieve the desired power against a particular alternative. However, if your data are not normal, then neither the regular nor the noncentral t distribution is applicable. It may be difficult to find a formula for the exact power. Nevertheless, you can use the simulation method with appropriate distributions to find approximate power. Similar simulation methods could be used to investigate the power of a nonparametric test. Addendum. (1) Using a Wilcoxon signed rank test instead of a t test with 20 observations from a normal population. Suppose you worry that normal data are not normal and use a one-sample test instead of a t test. What happens to the power? We use the same null and alternative hypotheses above, and seek power against the alternative that the distribution is centered at 100. We don't seek a formula in terms of the distribution of the Wilcoxon test statistic, so we use simulation. Specifically, we use the implementation of the Wilcoxon test in R, capture its P-value at each iteration, and express rejection in terms of P-values below $0.05.$ The power is 87.4%, compared with power 89.0% for the t test. set.seed(2019) pv = replicate(10^6, wilcox.test(rnorm(20,100,15), mu=110, alt="l")$p.val) mean(pv < .05) [1] 0.874229 (2) Using a Wilcoxon signed rank test instead of a t test when 20 observations are from a moderately right-skewed distribution. A slightly, but noticeably, skewed distribution results from taking the third power of a normal sample with positive elements. 
Very roughly speaking, the cube of observations from $\mathsf{Norm}(\mu=4.63, \sigma=0.232)$ has $\mu \approx 100, \sigma \approx 15.$ Also the cube of observations from $\mathsf{Norm}(4.783, 0.217)$ has $\mu \approx 110, \sigma \approx 15.$ The following simulation illustrates that with $H_0: \mu = 110$ and $H_a: \mu < 110,$ a t test at the 5% level, using such slightly skewed data, has power about 88% against the alternative $\mu_a = 100.$ Similarly, a Wilcoxon signed rank test also has power about 88%. So for 20 observations from the moderately skewed population mentioned above, the t test loses its slight power advantage over the Wilcoxon test. set.seed(705) pv = replicate(10^6, t.test(rnorm(20,4.63,.232)^3, mu=110, alt="l")$p.val) mean(pv < .05) [1] 0.877868 set.seed(705) pv = replicate(10^6, wilcox.test(rnorm(20,4.63,.232)^3, mu=110, alt="l")$p.val) mean(pv < .05) [1] 0.877091 The first iteration in each simulation is shown below: set.seed(705) x = rnorm(20, 4.63, .232)^3 ; mean(x); sd(x) [1] 104.0221 [1] 15.34042 t.test(x, mu=110, alt="less") One Sample t-test data: x t = -1.7427, df = 19, p-value = 0.04877 alternative hypothesis: true mean is less than 110 95 percent confidence interval: -Inf 109.9534 sample estimates: mean of x 104.0221 wilcox.test(x, mu=110, alt="less") Wilcoxon signed rank test data: x V = 61, p-value = 0.0527 alternative hypothesis: true location is less than 110
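The same comparison can be run on a smaller scale in Python (a sketch, assuming scipy >= 1.6 for the `alternative=` keyword; 2,000 replications instead of 10^6, so the estimates are rougher):

```python
import numpy as np
from scipy import stats

# Power of t test vs Wilcoxon signed rank test on mildly right-skewed data:
# cubes of Norm(4.63, 0.232) draws have mean near 100, sd near 15.
rng = np.random.default_rng(705)
reps, n = 2000, 20
rej_t = rej_w = 0
for _ in range(reps):
    x = rng.normal(4.63, 0.232, n) ** 3
    rej_t += stats.ttest_1samp(x, 110, alternative='less').pvalue < 0.05
    rej_w += stats.wilcoxon(x - 110, alternative='less').pvalue < 0.05
power_t, power_w = rej_t / reps, rej_w / reps   # both come out near 0.88
```

As in the R runs, the two rejection proportions are essentially indistinguishable for this skewed population.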
Statistical power of t-test in mildly skewed dataset
This has been discussed at length on this site. The t-test is not very robust to skewness. For example, with the log-normal distribution a sample size of 50,000 is not large enough for the t-based method to be sufficiently accurate. The Wilcoxon signed-rank one-sample test does not test a median. The Wilcoxon-Mann-Whitney test is a two-sample test. The Wilcoxon tests are 0.95 as efficient as the t-based methods if normality holds, and can be arbitrarily more powerful than this parametric counterpart when normality does not hold. I suggest reading an intro nonparametric statistics book.
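The log-normal claim can be illustrated on a small scale (a sketch assuming numpy/scipy, with n = 50 rather than 50,000): even when $H_0$ states the true mean, the one-sided t test rejects far more often than its nominal 5% level, because the t statistic's distribution is strongly skewed.

```python
import numpy as np
from scipy import stats

# Type I error of a one-sided (lower-tail) one-sample t test when the data
# are LogNormal(0, 1) and H0 states the TRUE mean, exp(1/2).
rng = np.random.default_rng(0)
reps, n = 20000, 50
x = rng.lognormal(0.0, 1.0, size=(reps, n))
t = (x.mean(axis=1) - np.exp(0.5)) * np.sqrt(n) / x.std(axis=1, ddof=1)
crit = stats.t.ppf(0.05, n - 1)
reject = np.mean(t < crit)     # well above the nominal 0.05
```

The lower-tail rejection rate comes out roughly two to three times the nominal level, which is the "not sufficiently accurate" behavior described above.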
Statistical power of t-test in mildly skewed dataset
How does the equation for 𝑇 approach normal distribution? Well, that isn't quite true. The sampling distribution for that statistic approaches a standard normal in the limit as n grows large. That is very different. What is the "statistical power" that people talk about? Does it give information about how many sample sizes that I need to have a meaningful result from t-test? Power is the probability of correctly rejecting the null hypothesis when it is false. If there really is a difference between groups, then a high powered test will correctly identify that there is a difference upon repetition of the experiment. Power is an essential part of a sample size calculation. The sample size depends on a few other things (namely the effect size and the noise in the data), but generally speaking more power requires more samples (all else constant). Is there any way to determine sample size needed for t-test? Like, if you have skewness of 𝑥 and kurtosis of 𝑦, you need 𝑛 sample size for the result from t-test to be valid, even when your data is non-normal Not to my knowledge. The equations for sample sizes lean heavily on the assumptions of the t-test, and so I don't think there are formulae which make use of higher moments.
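The "sample size calculation" mentioned above can be sketched with the standard normal-approximation formula (an illustration, not part of the original answer; the exact t-based version iterates on the noncentral t, and this sketch assumes a two-sided test):

```python
import math
from statistics import NormalDist

def n_for_power(delta, sigma, alpha=0.05, power=0.80):
    """Approximate n for a two-sided one-sample test to detect a mean
    shift of `delta` against noise `sigma` (normal approximation):
    n = ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2, rounded up."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

# e.g. a 10-unit shift with sigma = 15 needs about 18 subjects at 80% power,
# and about 24 at 90% power -- more power, more samples, all else constant.
```

This makes the "more power requires more samples" statement quantitative: the required n grows with the square of the noise-to-effect ratio and with the power target.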
How could I find the prediction interval of a future observation given the present dataset?
There is a conventional concept that is a close match to your question: a nonparametric prediction interval. These are amazingly easy to compute and can work well with sufficiently large datasets. A "prediction interval" is a statistical problem where you intend to use an initial set of data to establish limits between which additional data will lie. We say these limits "cover" the additional data when all the additional values are included within the limits. The unconditional probability of coverage--that is, the one you would compute before seeing any of the data--is intended to be at least a given percentage, such as 95% or 99% (as in this question). (I will use 99% throughout, understanding it can be replaced by any desired percentage less than 100% in obvious ways.) A prediction interval is "nonparametric" when you make no (or very limited) assumptions about the underlying data distribution. The standard application is where that distribution could be literally anything and the data are independent. The simplest case is the one described in the question: using independent and identically distributed random values $X_1, X_2, \ldots, X_n,$ to erect a 99% prediction interval for a single independent future value $X_0$ drawn from the same unknown distribution. Provided $n$ is large enough, there are many solutions that are easy to obtain: they all rely on the fact that the $X_i$ are exchangeable: that is, any one of them could play the role of $X_0.$ The interval in these solutions is given by a pair of order statistics $(X_{(l)}, X_{(u)}]$ for the original data. This notation, using parenthesized subscripts, is conventional: when we sort the $X_i$, $X_{(1)}$ is the smallest, $X_{(2)}$ the next smallest, and so on. 
Including the universal endpoints $\pm\infty$ (for notational convenience), $$-\infty = X_{(0)} \lt X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)} \lt X_{(n+1)}=\infty.$$ Thus, there are at most $l-1$ original data values less than $X_{(l)}$ (whatever $l \in \{0,1,2,\ldots, n,n+1\}$ may be) and there are at least $n-u+1$ original data values greater than or equal to $X_{(u)}.$ The exchangeability of all the data, including $X_0,$ implies $X_0$ is equally likely to be the smallest of all $n+1$ values, the second smallest, ..., or the very largest. That covers $n+1$ equally likely possibilities, of which at least $u-l$ are in the interval $(X_{(l)},X_{(u)}].$ Thus, the chance that $X_0$ is covered by the interval $(X_{(l)}, X_{(u)}]$ is at least $(u-l)/(n+1).$ (The coverage value is exact for continuous distributions and might be higher for non-continuous distributions where ties are possible.) This shows that $$\Pr(X_0 \in (X_{(l)}, X_{(u)}]) \ge \frac{u-l}{n+1}.\tag{1}$$ To find a 99% prediction interval for $X_0,$ then, all we need do is choose $l$ and $u$ to make the right hand side at least 99%. Usually we want to make the prediction interval as precise as possible--that is, as narrow as possible--so we take $l$ large and $u$ small, within these constraints. Usually, these order indexes are chosen symmetrically in the sense that $l-1$ and $n-u$ are approximately equal. The choice ought to be made before examining the data. Examples Suppose you have $n=299$ data values. Equation $(1)$ states $$ \frac{u-l}{300} \ge 0.99,$$ giving the solutions $$(l,u) \in \{(0, 297), (1, 298), (2, 299), (3, 300)\}.$$ I use the notation "$(0,297)$" to indicate an interval with no lower limit (that is, $X_{(0)} = -\infty$) and an upper limit of $X_{(297)}$: it is a nonparametric 99% upper prediction limit. 
Similarly, "$(3,300)$" represents the nonparametric 99% lower prediction limit given by $X_{(3)}.$ It is equivalent to pretending $X_{(300)} = X_{(n+1)}=+\infty.$ The other two solutions are genuine (finite) intervals. You may choose either one (in advance). Perhaps you would like to keep the upper limit as low as possible, subject to all the preceding requirements: you would accordingly use $(1,298)$ as your procedure. In this procedure, the prediction interval goes from the smallest data value $X_{(1)}$ up to and including the second highest data value $X_{(298)}.$ Otherwise you might use $(2,299).$ (You could also flip a coin to make the choice: this is called a randomized procedure.) Suppose you have $n=90$ values. Now there are no solutions to $(1)$ (or, continuing the conventions of the previous example, the only solution is $l=0$ and $u=91$ corresponding to the interval $(-\infty, \infty]$ of all real numbers): it is not possible to construct a meaningful nonparametric 99% prediction interval (for a single additional observation) with fewer than $99$ data points. References Hahn and Meeker, Statistical Intervals (1991).
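The bookkeeping in the examples is easy to automate (a sketch; as in the text, $l = 0$ and $u = n+1$ encode the infinite endpoints $X_{(0)} = -\infty$ and $X_{(n+1)} = +\infty$):

```python
import math

def tightest_pairs(n, level=0.99):
    """All order-statistic pairs (l, u) of minimal width u - l such that
    (u - l) / (n + 1) >= level; the interval is (X_(l), X_(u)]."""
    k = math.ceil(level * (n + 1))                 # smallest admissible u - l
    return [(l, l + k) for l in range(0, (n + 1) - k + 1)]

# n = 299 recovers the four solutions in the example above;
# n = 90 yields only (0, 91), i.e. the useless interval (-inf, +inf].
```

Running it for $n=299$ reproduces $(0,297), (1,298), (2,299), (3,300)$, and for $n=90$ it confirms that no meaningful 99% interval exists.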
Top principal components versus most significant random forest variables
PCA maximizes the variance captured by linear combinations of your input variables. There are several reasons why this might not extract useful information about your outcome variable: Maximizing variance does not mean maximizing dispersion if your variables are not approximately normally distributed; $>90\%$ of the variance in the input might be captured by your approach, but since this is an unsupervised technique, it may well be that the remaining 10% correlates most strongly with the output; The other techniques you use return significance/variable importance based on the output: you are comparing an unsupervised approach to supervised approaches. On a different note, why bother with dimension reduction at all? It seems you have plenty of observations to estimate 100 features. If you suspect some to be less informative than others, why not go with a regularized approach (e.g. ridge regression), or a method that inherently does variable selection... say... a random forest?
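The unsupervised-vs-supervised point can be made concrete with a two-variable toy example (a sketch using only numpy): the high-variance direction carries no signal, while the low-variance direction is the outcome.

```python
import numpy as np

# Toy data: x1 dominates the variance but is pure noise; x2 drives y.
rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(scale=10.0, size=n)   # high variance, irrelevant to y
x2 = rng.normal(scale=1.0, size=n)    # low variance, IS the outcome
y = x2
X = np.column_stack([x1, x2])
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                      # first principal component scores
r = abs(np.corrcoef(pc1, y)[0, 1])    # near 0: PC1 ignores the outcome
```

The first principal component loads almost entirely on `x1`, so its correlation with `y` is negligible even though a supervised importance measure would rank `x2` first.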
Standard error of sample variance
Looking at the variance of $\hat{\sigma}_{biased}^2$ we have \begin{eqnarray*} \mathrm{Var}(\hat{\sigma}_{biased}^2) &=& \mathrm{Var} \left( \dfrac{n-1}{n} \hat{\sigma}_{unbiased}^2 \right) \\ &=& \left( \dfrac{n-1}{n} \right)^2 \mathrm{Var}(\hat{\sigma}_{unbiased}^2). \end{eqnarray*} Since $(n-1)/n < 1$ it follows that $$ \mathrm{Var}(\hat{\sigma}_{biased}^2) < \mathrm{Var}(\hat{\sigma}_{unbiased}^2) $$ so $$ \mathrm{sd}(\hat{\sigma}_{biased}^2) < \mathrm{sd}(\hat{\sigma}_{unbiased}^2). $$ The maximum likelihood estimator (MLE) does indeed have a smaller standard error.
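A quick numerical confirmation (a sketch assuming numpy): since the MLE equals $(n-1)/n$ times the unbiased estimator sample by sample, their standard deviations are in exactly that ratio.

```python
import numpy as np

# ddof=0 gives the MLE, ddof=1 the unbiased estimator; the two differ by
# the exact factor (n-1)/n on every sample, so sd(MLE)/sd(unbiased) = (n-1)/n.
rng = np.random.default_rng(1)
n, m = 10, 100_000
x = rng.normal(0.0, 2.0, size=(m, n))
s2_unb = x.var(axis=1, ddof=1)
s2_mle = x.var(axis=1, ddof=0)
ratio = s2_mle.std() / s2_unb.std()   # equals (n - 1) / n = 0.9 here
```

With $n = 10$ the ratio is exactly $9/10$, matching the algebra above.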
Is background subtraction common practice for image classification?
I tried this several times in several projects in the past, and yes, it may help if done properly. However, reliable background removal is not trivial, and it has to be carefully, manually checked for each image. And as it's not a very easy thing to do, it's not commonly done when working with neural networks and large datasets. Usually, a better approach is to ensure that you have a large number of images with diverse backgrounds, so that the network cannot just rely on the background to drive the prediction. As a fun fact, from a Marvin Minsky Interview: [...] Where a perceptron had been trained to distinguish between - this was for military purposes - it was looking at a scene of a forest in which there were camouflaged tanks in one picture and no camouflaged tanks in the other. And the perceptron - after a little training - made a 100% correct distinction between these two different sets of photographs. Then they were embarrassed a few hours later to discover that the two rolls of film had been developed differently. And so these pictures were just a little darker than all of these pictures and the perceptron was just measuring the total amount of light in the scene. But it was very clever of the perceptron to find some way of making the distinction.
Is background subtraction common practice for image classification?
I cannot comment because of my low reputation. Anyway, from what I have read about image classification, it is not a common practice, especially for large databases. The most common preprocessing operations are scaling the pixel range to [0, 1] or [-1, 1] (as in ResNet50V2) and resizing the images so they all have the same width and height, which allows a batch size larger than 1.
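The scaling step is a one-liner (a sketch assuming numpy; resizing itself would need an image library such as PIL):

```python
import numpy as np

def scale_to_pm1(img_uint8):
    """Map uint8 pixel values in [0, 255] to floats in [-1, 1]
    (the ResNet50V2-style input range mentioned above)."""
    return img_uint8.astype(np.float32) / 127.5 - 1.0
```

For the [0, 1] convention you would divide by 255.0 instead.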
Does retraining a model on all available data necessarily yield a better model?
This practice derives from an understanding of the bias-variance tradeoff. Recall that the expected test error can be broken down into three components. $$ E \left[ \text{Test Error} \right] = \text{Bias}^2 + \text{Variance} + \text{Irreducible Error}$$ Assuming your data sets are independent random samples from a population, retraining on more data has the following effects: The bias, being a function of only the structure of your model, stays the same. The variance decreases (or at worst, stays the same). The irreducible error stays the same. All together, the expected test error is decreased with this procedure. It's possible that the error on any given test data set increases, since it's only the expectation that is guaranteed to decrease, but since we don't know anything about the particular population samples we are going to expose to the model in production, lowering the expected test error is a good strategy.
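The variance claim can be sketched numerically (assuming numpy; names and constants are illustrative): the sampling variance of a fitted coefficient shrinks as the training sample grows, while the model structure, and hence the bias, is unchanged.

```python
import numpy as np

rng = np.random.default_rng(7)

def slope_variance(n, reps=2000):
    """Empirical variance of an OLS slope fit on samples of size n."""
    slopes = np.empty(reps)
    for i in range(reps):
        x = rng.normal(size=n)
        y = 2.0 * x + rng.normal(size=n)   # true slope 2, unit noise
        slopes[i] = np.polyfit(x, y, 1)[0]
    return slopes.var()

# Quadrupling the training size cuts the slope's variance roughly fourfold.
```

Comparing `slope_variance(50)` with `slope_variance(200)` shows the roughly $1/n$ decay; this is the "variance decreases" term in the decomposition above.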
Estimate for the error of an error?
You did nothing wrong: the first value of $\delta s,$ which was obtained by an approximate method, is a close approximation to the second. Let's compare the two expressions by using Stirling's approximation $$\log\left(\Gamma(z)\right) \approx z \log(z) - z + \log(2\pi)/2 - \log(z)/2 + \frac{1}{12z} + O(z^{-2})$$ and the Taylor series approximation $$\log\left(z+\frac{1}{2}\right) =\log(z) + \log\left(1 + \frac{1}{2z}\right)\approx \log(z) + \frac{1}{2z} - \frac{1}{8z^2} + O(z^{-3}).$$ (Eventually we will set $z=(n-1)/2,$ but in the meantime this notation is more convenient.) Carrying out all calculations modulo $O(z^{-2}),$ use these approximations to estimate $$\log\left(\Gamma\left(z+\frac{1}{2}\right)\right) - \log\left(\Gamma\left(z\right)\right) \approx \frac{1}{2}\left(\log(z) - \frac{1}{4z}\right) + O(z^{-2}).$$ Consequently $$f(z) = 1 - \frac{1}{z}\left(\frac{\Gamma\left(z+1/2\right)}{\Gamma(z)}\right)^2 \approx 1 - \exp\left(-\frac{1}{4z}\right) = \frac{1}{4z} + O(z^{-2}).$$ The exact expression for $\delta s$ in the question is $\sigma\sqrt{f((n-1)/2)}$ whereas the approximation produced by first-order error propagation is $\sigma\sqrt{g((n-1)/2)}$ with $$g(z) = \frac{1}{4z}.$$ Consequently their ratio is $$\frac{\sigma\sqrt{g((n-1)/2)}}{\sigma\sqrt{f((n-1)/2)}} = \sqrt{\frac{g((n-1)/2)}{f((n-1)/2)}} = \sqrt{\frac{1/(2n-2) + O(n^{-2})}{1/(2n-2)}}=1+O(n^{-1}).$$ Thus, the two expressions closely agree for large $n.$ But just how large must $n$ be? Not very, as this plot of the ratio shows: The first (approximate) formula is always too large, but the ratio rapidly drops toward $1:1$ as $n$ increases. Indeed, further analysis indicates the relative error (the difference between the ratio and $1$) is $1/(8n)+O(n^{-2}).$ Even for $n=2$ this isn't bad.
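The claimed $1/(8n)$ relative error is easy to verify numerically (a sketch using the standard library's `math.lgamma`):

```python
import math

def ratio_minus_one(n):
    """(approximate sd of s) / (exact sd of s) - 1, taking sigma = 1,
    with f(z) = 1 - (1/z) * (Gamma(z + 1/2) / Gamma(z))**2 and z = (n-1)/2."""
    z = (n - 1) / 2
    g2 = math.exp(2 * (math.lgamma(z + 0.5) - math.lgamma(z)))
    exact = math.sqrt(1 - g2 / z)           # sqrt(f(z))
    approx = math.sqrt(1 / (2 * n - 2))     # sqrt(g(z)), g(z) = 1/(4z)
    return approx / exact - 1

# Positive for every n (the approximation is always too large), and the
# excess shrinks like 1/(8n) as n grows.
```

For moderate and large $n$ the value tracks $1/(8n)$ closely, confirming both the sign and the rate of the relative error.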
Estimate for the error of an error?
You did nothing wrong: the first value of $\delta s,$ which was obtained by an approximate method, is a close approximation to the second. Let's compare the two expressions by using Stirling's approxi
Estimate for the error of an error? You did nothing wrong: the first value of $\delta s,$ which was obtained by an approximate method, is a close approximation to the second. Let's compare the two expressions by using Stirling's approximation $$\log\left(\Gamma(z)\right) \approx z \log(z) - z + \log(2\pi)/2 - \log(z)/2 + \frac{1}{12z} + O(z^{-2}).$$ and the Taylor series approximation $$\log\left(z+\frac{1}{2}\right) =\log(z) + \log\left(1 + \frac{1}{2z}\right)\approx \log(z) + \frac{1}{2z} - \frac{1}{8z^2} + O(z^{-3}).$$ (Eventually we will set $z=(n-1)/2,$ but in the meantime this notation is more convenient.) Carrying out all calculations modulo $O(z^{-2}),$ use these approximations to estimate $$\log\left(\Gamma\left(z+\frac{1}{2}\right)\right) - \log\left(\Gamma\left(z\right)\right) \approx \frac{1}{2}\left(\log(z) - \frac{1}{4z}\right) + O(z^{-2}).$$ Consequently $$f(z) = 1 - \frac{1}{z}\left(\frac{\Gamma\left(z+1/2)\right)}{\Gamma(z)}\right)^2 \approx 1 - \exp\left(-\frac{1}{4z}\right) = \frac{1}{4z} + O(z^{-2}).$$ The exact expression for $\delta s$ in the question is $\sigma\sqrt{f((n-1)/2)}$ whereas the approximation produced by first-order error propagation is $\sigma\sqrt{g((n-1)/2)}$ with $$g(z) = \frac{1}{4z}.$$ Consequently their ratio is $$\frac{\sigma\sqrt{g((n-1)/2)}}{\sigma\sqrt{f((n-1)/2)}} = \sqrt{\frac{g((n-1)/2)}{f((n-1)/2)}} = \sqrt{\frac{1/(2n-2) + O(n^{-2})}{1/(2n-2)}}=1+O(n^{-1}).$$ Thus, the two expressions closely agree for large $n.$ But just how large must $n$ be? Not very, as this plot of the ratio shows: The first (approximate) formula is always too large, but the ratio rapidly drops toward $1:1$ as $n$ increases. Indeed, further analysis indicates the relative error (the difference between the ratio and $1$) is $1/(8n)+O(n^{-2}).$ Even for $n=2$ this isn't bad.
48,357
Clustered standard errors - Why are SE smaller or bigger than OLS depending on cluster level?
What you observe can be explained by the correlations in the measurements within the clusters. Namely, when you select an analysis, such as OLS, that does not account for these correlations, you expect the standard errors of within-cluster effects to be overestimated and the standard errors of between-cluster effects to be underestimated.
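A quick simulation (an illustrative sketch, not from the answer; all parameter values are made up) shows the underestimation for a between-cluster effect: with a cluster-constant regressor and cluster-correlated errors, the average OLS standard error falls well below the true sampling variability of the coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
G, m, reps = 50, 20, 2000  # clusters, observations per cluster, simulations

betas, ols_ses = [], []
for _ in range(reps):
    x = np.repeat(rng.normal(size=G), m)  # regressor constant within clusters
    # errors = cluster effect + idiosyncratic noise (within-cluster correlated)
    y = np.repeat(rng.normal(size=G), m) + rng.normal(size=G * m)
    X = np.column_stack([np.ones(G * m), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    s2 = e @ e / (G * m - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    betas.append(b[1])
    ols_ses.append(np.sqrt(cov[1, 1]))

print(np.std(betas))     # true sampling sd of the slope across simulations
print(np.mean(ols_ses))  # average OLS SE: several times too small here
```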
48,358
Clustered standard errors - Why are SE smaller or bigger than OLS depending on cluster level?
The variance inflation equation (6) on page six (adjusted for unequal cluster size below) in the Cameron and Miller paper you linked contains the intuition. If you have positive correlation in either the regressor of interest or the errors within cities (the two $\rho$s), but a negative correlation within states, that could explain the pattern of what you are seeing. This could be amplified by the unequal cluster size multiplying the $\rho$s at the two levels of clustering. You can estimate these to confirm this. You don't provide any details of your setting, so it is hard to give an example of how this could happen in your case. One example is if you have a pattern of migration from rural to urban areas in your data driven by local booms. Then all the observations from cities could have positively correlated positive residuals capturing the booms there and the rural areas will have positively correlated negative residuals because of the busts, but within the states, the rural observations' residuals would be negatively correlated with the urban ones if the migrants move in-state. There is another example here with more explanation. Also, you should use bigger and more aggregate clusters when possible, up to and including the point at which there is concern about having too few clusters. In other words, you definitely don't want to always cluster at the highest level (say the four census regions in the US). Unfortunately, there's no clear definition of "too few", but fewer than 50 is when people start getting worried.
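To make the intuition concrete, here is a sketch of that variance-inflation formula, $\tau \approx 1 + \rho_x\rho_u(\bar N_g - 1)$, with the unequal-cluster-size adjustment replacing $\bar N_g - 1$ by $\mathrm{V}(N_g)/\bar N_g + \bar N_g - 1$. The $\rho$ values and cluster sizes below are purely illustrative, not taken from the question:

```python
def inflation(rho_x, rho_u, sizes):
    """Approximate ratio of clustered to default OLS variance (Moulton-style)."""
    n = len(sizes)
    nbar = sum(sizes) / n
    var_n = sum((s - nbar) ** 2 for s in sizes) / n
    return 1 + rho_x * rho_u * (var_n / nbar + nbar - 1)

# positive within-cluster correlation in both x and the errors inflates variance,
print(inflation(0.3, 0.4, [20] * 10))     # ~3.28: clustered SEs bigger than OLS
# while a negative error correlation within clusters deflates it
print(inflation(0.3, -0.2, [5, 10, 15]))  # ~0.36: clustered SEs smaller than OLS
```

Unequal cluster sizes enter through the $\mathrm{V}(N_g)/\bar N_g$ term, so the same $\rho$s at two levels of clustering can get amplified differently, as described above.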
48,359
What are the theoretical/practical reasons to use normal distribution to initialize the weights in Neural Networks?
(1) I think restricting the weights to have mean at 0 and std at 1 can make the weights as small as possible, which makes it convenient for regularization. Am I understanding it correctly? No, setting them all to 0 would make them as small as possible. (2) On the other hand, what are the theoretical/practical reasons to use the normal distribution? Why not sample random weights from any other arbitrary distribution? Is it because the normal distribution has the maximum entropy given the mean and variance? Having the maximum entropy means it's as chaotic as possible and thus makes the fewest assumptions about the weights. Am I understanding it correctly? I don't think there's too much logic in that decision, perhaps besides the fact that the Gaussian distribution is a good "default prior", as many things follow a Gaussian distribution. In fact, one popular default initialization scheme, by Glorot et al., prescribes a uniform distribution, not a normal distribution. What probably happens is: 1. authors provide a theoretical justification for what the variance of the distribution of the initial weights should be; 2. they choose an arbitrary distribution with that variance. Of course, the normal distribution is then a very natural and easy choice!
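A sketch of that last point (the variance is the theoretically motivated part; the distribution realizing it is a free choice). Here the Glorot-style target variance $2/(\text{fan\_in}+\text{fan\_out})$ is achieved with either a uniform or a normal distribution; the layer sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 256, 128

# Glorot/Xavier targets variance 2 / (fan_in + fan_out); which distribution
# achieves that variance is a separate, largely arbitrary decision.
target_var = 2.0 / (fan_in + fan_out)

# uniform on [-a, a] has variance a^2 / 3, so a = sqrt(3 * target_var)
a = np.sqrt(3.0 * target_var)
w_uniform = rng.uniform(-a, a, size=(fan_in, fan_out))

# a normal with std sqrt(target_var) has the same variance
w_normal = rng.normal(0.0, np.sqrt(target_var), size=(fan_in, fan_out))

print(w_uniform.var(), w_normal.var(), target_var)  # all approximately equal
```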
48,360
What are the theoretical/practical reasons to use normal distribution to initialize the weights in Neural Networks?
The other answer is good (+1), but just to add to it: (1) No, one can easily make the weights even smaller by choosing a smaller $\sigma$ than 1. I do think the fact that $L_1$ and $L_2$ weight decay are very common is related to this, in the sense that initializing with $\mu\ne 0$ would be wasteful, as the weight decay penalty would start off higher than necessary (and the weights would then have to slowly "migrate" towards zero mean anyway!). Of course, you could center the weight decay at another value (e.g. $L(w) = ||W - \mu_M||$ for some non-zero $\mu_M$), but this is unnecessarily complicated and aesthetically unpleasant. Nevertheless, I don't think it is the cause of it. (2) Firstly, not all initializations use the normal distribution. Sometimes they use uniform, or in some cases (resnets, some normalizations, etc...) they use some fixed specialized value. As for the maximum entropy (ME) assumption, I am not sure if this is related (may well be though). ME is true only for that fixed variance. So the question is still why you would want $\sigma=1$ (and also why ME would be preferred at all). Two things come to mind (mostly just gut feelings/things to look into): (a) There is plenty of theoretical work linking neural nets to Bayesian neural networks (BNNs), and then those to Gaussian processes. For example, see here. A major application is calibrated uncertainty estimation (e.g. here). The idea is that instead of having a single weight value, you have a distribution over weight values. It is common to use Gaussian distributions to (variationally) approximate these distributions, or (more efficiently) use a combination of regularization and noise to approximate it (e.g., here). Usually these methods approximate a (variational, factored) Gaussian distribution over the weights. 
I suspect that initializing with $\mathcal{N}(0,1)$ is essentially part of the Bayesian prior in those cases (as you probably know, $L_2$ weight decay regularization is equivalent to a Bayesian Gaussian weight prior). So perhaps Gaussian initialization can be viewed as related to viewing a standard NN as some kind of approximate BNN. It might be worth looking into. (b) Normalization in NNs is now standard, in order to prevent problems with gradients in backpropagation. Such layers (e.g., instance, batch, group, layer, etc... norms) usually operate on activations. But there is also the famous weight normalization method, of course. The idea is that you want a "nice" distribution for backprop: too much variance could mean very uneven gradient updates (instability), while too little is also an issue (could be vanishing gradients, or that the weights are not learning different things). You can get this by controlling the weight values or the activation values, or even using methods like SELU. I suspect $\sigma=1$ just happens to be a nice value (theoretically and practically). Of course, for initialization, you want to start out close to this distribution. Hence why we use it (well, sometimes anyway). Of course, choosing $\sigma=0.9$ or $2.1$ would probably be fine (up to a point). Ultimately, it's probably aesthetics (would love to be proved wrong, though).
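A small numerical sketch of point (b), with illustrative widths, depths, and tanh layers (none of this is from the answer): propagating a signal through many layers shows activations vanishing when the per-layer weight scale $\sigma$ is too small, while $\sigma$ near 1 keeps them in a usable range:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_std(sigma, width=512, depth=20):
    # push one random input through `depth` tanh layers whose weights are
    # N(0, sigma^2 / width), and report the std of the final activations
    x = rng.normal(size=width)
    for _ in range(depth):
        W = rng.normal(0.0, sigma / np.sqrt(width), size=(width, width))
        x = np.tanh(W @ x)
    return x.std()

for sigma in (0.5, 1.0, 3.0):
    # near zero for sigma=0.5 (vanishing); order 1 for sigma=3 (saturated)
    print(sigma, forward_std(sigma))
```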
48,361
Definition of independence of two random vectors and how to show it in the jointly normal case
(1) What is the definition of independence between two random vectors $\mathbf X$ and $\mathbf Y$? The definition of independence between two random vectors is the same as that between two ordinary random variables: Random vectors $\mathbf{x}$ and $\mathbf{y}$ are independent if and only if their joint distribution is equal to the product of their marginal distributions. That is: $$p(x, y) = p(x) p(y)$$ Or, to write things explicitly in terms of the individual elements of each vector, let $\mathbf{x} = [\mathbf{x}_1, \dots, \mathbf{x}_n]^T$ and $\mathbf{y} = [\mathbf{y}_1, \dots, \mathbf{y}_m]^T$. Then $\mathbf{x}$ and $\mathbf{y}$ are independent if and only if: $$p(x_1, \dots, x_n, y_1, \dots, y_m) = p(x_1, \dots, x_n) p(y_1, \dots, y_m)$$ Note that the elements of $\mathbf{x}$ may depend on each other, and likewise for $\mathbf{y}$. But there's no dependence between the elements of $x$ and $y$. (2) The interpretation of having the covariance matrix between two random vectors equal to zero is that the elements of the vectors have pairwise covariance zero, because the $ij^{th}$ element of the covariance matrix is the covariance between $X_i$ and $Y_j$. In the joint normal setting, this is the same as saying the elements of the vectors are pairwise independent. How come this is enough to conclude that the two vectors are completely independent, i.e. not just pairwise independent? This follows from the particular form of the Gaussian distribution. Suppose random vectors $\mathbf{x} \sim \mathcal{N}(\mu_x, C_x)$ and $\mathbf{y} \sim \mathcal{N}(\mu_y, C_y)$ are jointly Gaussian, and $\text{cov}(\mathbf{x_i}, \mathbf{y_j}) = 0$ for all $i,j$. Then $\mathbf{x}$ and $\mathbf{y}$ are independent because their joint distribution factors into the product of their marginal distributions. 
Proof We can write the joint distribution of $\mathbf{x}$ and $\mathbf{y}$ by concatenating them to form random vector $\mathbf{z} = \left[ \begin{matrix} \mathbf{x} \\ \mathbf{y} \end{matrix} \right]$. $\mathbf{z}$ has a Gaussian distribution with mean $\mu = \left[ \begin{matrix} \mu_x \\ \mu_y \end{matrix} \right]$. Because the covariance between all entries of $\mathbf{x}$ and $\mathbf{y}$ is zero, $\mathbf{z}$ has a block diagonal covariance matrix $C = \left[ \begin{matrix} C_x & \mathbf{0} \\ \mathbf{0} & C_y \\ \end{matrix} \right ]$. The joint density of $\mathbf{x}$ and $\mathbf{y}$ is equal to the density of $\mathbf{z}$: $$p(x, y \mid \mu_x, \mu_y, C_x, C_y) = p(z \mid \mu, C) = \text{det}(2 \pi C)^{-\frac{1}{2}} \exp \left[ -\frac{1}{2} (z-\mu)^T C^{-1} (z-\mu) \right]$$ Because $C$ is block diagonal, its inverse is $C^{-1} = \left[ \begin{matrix} C_x^{-1} & \mathbf{0} \\ \mathbf{0} & C_y^{-1} \\ \end{matrix} \right ]$ and we can write: $$(z-\mu)^T C^{-1} (z-\mu) = (x-\mu_x)^T C_x^{-1} (x-\mu_x) + (y-\mu_y)^T C_y^{-1} (y-\mu_y)$$ As a further consequence of the block diagonal form of $C$, the determinant is: $$\text{det}(C) = \text{det}(C_x) \ \text{det}(C_y)$$ Using these facts and a little algebra, we can re-write the joint distribution as: $$p(x, y \mid \mu_x, \mu_y, C_x, C_y) =$$ $$\text{det}(2 \pi C_x)^{-\frac{1}{2}} \exp \left[ -\frac{1}{2} (x-\mu_x)^T C_x^{-1} (x-\mu_x) \right]$$ $$\text{det}(2 \pi C_y)^{-\frac{1}{2}} \exp \left[ -\frac{1}{2} (y-\mu_y)^T C_y^{-1} (y-\mu_y) \right]$$ Clearly, this is just the product of the Gaussian marginal distributions of $\mathbf{x}$ and $\mathbf{y}$.
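One can also verify the factorization numerically (a sketch; the particular means and covariances below are arbitrary), using `scipy.stats.multivariate_normal`:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

mu_x, mu_y = np.array([1.0, -1.0]), np.array([0.5])
Cx = np.array([[2.0, 0.3], [0.3, 1.0]])  # x's own components may be correlated
Cy = np.array([[1.5]])

# joint parameters with a block-diagonal covariance (zero cross-covariance)
mu = np.concatenate([mu_x, mu_y])
C = np.block([[Cx, np.zeros((2, 1))], [np.zeros((1, 2)), Cy]])

z = rng.normal(size=3)  # an arbitrary evaluation point
joint = multivariate_normal(mu, C).logpdf(z)
factored = (multivariate_normal(mu_x, Cx).logpdf(z[:2])
            + multivariate_normal(mu_y, Cy).logpdf(z[2:]))
print(joint, factored)  # equal: the joint density is the product of marginals
```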
48,362
Definition of independence of two random vectors and how to show it in the jointly normal case
(1) Regarding your first question, $X$ and $Y$ are independent if we can simply factorize the joint PDF as $p_{X,Y}(x,y)=p_X(x)p_Y(y)$, irrespective of $X$ and $Y$ being vectors or not. (2) I'm wondering where you saw that pairwise independence of $X_i,Y_j$ for all $i,j$ pairs results in complete independence. In general, it is not true. In specific cases of RVs, I don't know if there exists such a situation, but in the joint normal setting I can tweak this post of @Sarwate to conform to the situation here. Let's have two vectors $X=[X_1]$ and $Y=[Y_1,Y_2]$. Assume these are pairwise independent normal RVs, and $Y_1$ and $Y_2$ are therefore jointly normal. But, as the post shows, their joint PDF is not the product of their individual PDFs, so we can conclude that they're not jointly independent.
48,363
Can we apply KL divergence to the probability distributions on different domains?
KL divergence is only defined for distributions that are defined on the same domain. In t-SNE, KL divergence is not computed between data distributions in the high- and low-dimensional spaces (this would be undefined, as above). Rather, the distributions of interest are based on neighbor probabilities. The probability that two data points are neighbors is a function of their proximity, which is measured in either the high- or low-dimensional space. This yields two neighbor distributions (one for each space). The neighbor distributions are not defined on the high/low-dimensional spaces themselves, but on pairs of points in the dataset. Because these distributions are defined on the same domain, it's possible to compute the KL divergence between them. t-SNE seeks an arrangement of points in the low dimensional space that minimizes the KL divergence.
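A minimal sketch of that construction (random data and a fixed kernel bandwidth for brevity; real t-SNE calibrates per-point bandwidths via perplexity and uses conditional/symmetrized probabilities):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
X = rng.normal(size=(n, 10))  # data in the high-dimensional space
Y = rng.normal(size=(n, 2))   # a candidate low-dimensional embedding

def sq_dists(Z):
    s = (Z ** 2).sum(axis=1)
    return s[:, None] + s[None, :] - 2 * Z @ Z.T

def neighbor_probs(D, kernel):
    P = kernel(D)
    np.fill_diagonal(P, 0.0)  # a point is never its own neighbor
    return P / P.sum()        # one distribution over ordered pairs of points

P = neighbor_probs(sq_dists(X), lambda d: np.exp(-d / 2.0))  # Gaussian kernel
Q = neighbor_probs(sq_dists(Y), lambda d: 1.0 / (1.0 + d))   # Student-t kernel

# P and Q share a domain (pairs of points), so their KL divergence is defined
mask = P > 0
kl = np.sum(P[mask] * np.log(P[mask] / Q[mask]))
print(kl)  # nonnegative; t-SNE would minimize this over the embedding Y
```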
48,364
Find UMVUE of $\theta$ where $f_X(x\mid\theta) =\theta(1 +x)^{−(1+\theta)}I_{(0,\infty)}(x)$
From your previous question, you already have the complete sufficient statistic: $$T(\mathbf{X}) = \sum_{i=1}^n \ln(1+X_i).$$ The simplest way to find the UMVUE estimator for $\theta$ is to appeal to the Lehmann-Scheffé theorem, which says that any unbiased estimator of $\theta$ which is a function of $T$ is the unique UMVUE. To find an estimator with these properties, let $T_i = \ln(1+X_i)$ and observe that $T_i \sim \text{Exp}(\theta)$ so that $T \sim \text{Gamma}(n,\theta)$. Hence, we can use the complete sufficient statistic to form the estimator: $$\hat{\theta}(\mathbf{X}) = \frac{n-1}{T(\mathbf{X})} = \frac{n-1}{\sum_{i=1}^n \ln(1+X_i)} \sim (n-1) \cdot \text{Inv-Gamma}(n,\theta).$$ From the known moments of the inverse gamma distribution, we have $\mathbb{E}(\hat{\theta}) = \theta$ so we have found an unbiased estimator that is a function of the complete sufficient statistic. The Lehmann-Scheffé theorem ensures that our estimator is UMVUE for $\theta$. The variance of the UMVUE can easily be found by appeal to the moments of the inverse gamma distribution.
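A quick Monte Carlo check of unbiasedness (a sketch; the values of $\theta$, $n$, and the number of replications are arbitrary). Sampling uses the inverse CDF: $F(x) = 1 - (1+x)^{-\theta}$, so $X = U^{-1/\theta} - 1$ for $U \sim \mathrm{Uniform}(0,1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.5, 10, 200_000

U = rng.uniform(size=(reps, n))
X = U ** (-1.0 / theta) - 1.0        # inverse-CDF draws from f(x | theta)

T = np.log1p(X).sum(axis=1)          # complete sufficient statistic
umvue = (n - 1) / T                  # the UMVUE derived above
mle = n / T                          # for comparison: biased by factor n/(n-1)

print(umvue.mean())  # ~ theta = 2.5 (unbiased)
print(mle.mean())    # ~ theta * n / (n - 1) ~ 2.78 (biased upward)
```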
48,365
Beta Distribution and how it is related to this question
The integral of $f$ can be expressed as a Beta function times a hypergeometric function. This suggests $f$ is not the density of any particular Beta distribution, but that it is indeed related. To evaluate $k$ it's simpler and more elementary to use the substitution $y = \sin^2(x),$ $\mathrm{d}y = 2\sin(x)\cos(x)\mathrm{d}x$ and observe $\cos(x) = (1-\sin^2(x))^{1/2}$ to evaluate integrals of the form $$\int_0^{\pi/2} \sin^m(x)\cos^n(x)\mathrm{d}x = \frac{1}{2}\int_0^1 y^{(m-1)/2} (1-y)^{(n-1)/2}\mathrm{d}y = \frac{1}{2}B\left(\frac{m+1}{2}, \frac{n+1}{2}\right)$$ and then apply the Binomial Theorem to expand $(1-\sin(x))^7,$ producing $$\eqalign{ \int_0^{\pi/2} f(x)\mathrm{d}x &= k \int_0^{\pi/2} \sin^5(x)(1-\sin(x))^7 \mathrm{d}x \\ &=k \sum_{j=0}^7 (-1)^j \binom{7}{j}\int_0^{\pi/2}\sin^{5+j}(x)\mathrm{d}x \\ &= \frac{k}{2} \sum_{j=0}^7 (-1)^j \binom{7}{j} B\left(\frac{5+j+1}{2}, \frac{1}{2}\right). }$$ The alternating binomial coefficients $\binom{7}{j}$ in this sum are $$\binom{7}{j} = (1,-7,21,-35,35,-21,7,-1)$$ while the Beta function values are $$B\left(\frac{5+j+1}{2}, \frac{1}{2}\right) = \left(\frac{16}{15},\frac{5 \pi }{16},\frac{32}{35},\frac{35 \pi }{128},\frac{256}{315},\frac{63 \pi }{256},\frac{512}{693},\frac{231 \pi }{1024}\right),$$ giving $$1 = \int_0^{\pi/2} f(x)\mathrm{d}x = k \color{blue}{\frac{1}{2}\left(\frac{26672}{495} -\frac{17563}{1024}\pi\right)}.$$ Thus, $k$ is the reciprocal of the blue quantity.
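As a numerical cross-check of the blue quantity (a sketch using simple midpoint quadrature):

```python
import math

# closed form for the integral of sin(x)^5 (1 - sin(x))^7 over (0, pi/2),
# i.e. the blue quantity (1/2)(26672/495 - 17563 pi / 1024)
closed = 0.5 * (26672 / 495 - 17563 / 1024 * math.pi)

# midpoint-rule quadrature of the same integral
m = 50_000
h = (math.pi / 2) / m
numeric = h * sum(
    math.sin((i + 0.5) * h) ** 5 * (1 - math.sin((i + 0.5) * h)) ** 7
    for i in range(m)
)

print(closed, numeric)  # the two agree, hence k = 1 / closed
```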
48,366
multivariate Student's t distribution: intuition for non-independence?
Let's look at the situation. As a point of departure we will first study the bivariate standard Normal distribution. I will do this by plotting vertical slices through its graph: these are given by the functions $$y\to \phi(x,y)$$ for $x = 0, \pm 1/2, \pm 1, \pm 3/2, \pm 2$ (where $\phi$ is the bivariate density). The reason for doing this is that variables $(X,Y)$ are independent if and only if the conditional distribution $Y\mid X=x$ does not vary with $x.$ As $x$ increases, the density grows smaller and the slices shrink down to the axis in the plots at the left. However, if we normalize these curves (rescaling each one vertically) so that each has unit area, thereby turning them into the conditional densities, they all coincide, as shown at the right. This is how we can tell $Y$ is independent of $X.$ Here is the same situation for a multivariate t distribution with $\nu=1$ degree of freedom (in two variables). The conditional densities, although centered at $0,$ have different shapes: they vary with $x.$ You can see that they spread out as $|x|$ grows larger. This can (easily) be demonstrated algebraically by examining the formula for the multivariate t density. Here is a "hand-waving" demonstration that might help us calibrate our intuition. As Glen_b pointed out in comments, the Multivariate t is the distribution of standard multivariate Normal vector $X=(X_1,X_2,\ldots,X_d)$ divided by an independent positive variable $Z.$ (A multiple of $Z^2$ has a Gamma distribution, but that detail doesn't matter.) Consider what happens to the preceding conditional distributions as $|X_1/Z|$ increases. When $|X_1/Z|$ is relatively large, it likely got that way through a combination of a large value of $|X_1|$ and a smaller than average value of $Z.$ Because $Z$ was small, and $Z$ simultaneously divides all the components of $X$, the values of $X_2/Z, X_3/Z, \ldots, X_d/Z$ (which obviously are scaled by the relatively large quantity $1/Z$) will thereby be more spread out. 
The larger $|X_1/Z|$ gets, the smaller $Z$ is likely to be, and the more the other components are spread out. Because the conditional distributions of the $X_i/Z$ become more spread out as $|X_1/Z|$ increases, the $X_i/Z$ cannot be independent of $X_1/Z.$
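This spreading effect is easy to see in a quick simulation: draw bivariate-$t_1$ points as Normal pairs divided by a common $\chi_1$ denominator, then compare a robust spread measure (the interquartile range) of the second coordinate when the first is near zero versus far from zero. This is just an illustrative sketch; the bin cutoffs are arbitrary:

```python
import math
import random
import statistics

def conditional_iqrs(n=200_000, seed=1):
    """IQR of T2 when |T1| is small vs. large, for (T1, T2) = (X1, X2)/Z."""
    rng = random.Random(seed)
    near, far = [], []
    for _ in range(n):
        x1, x2 = rng.gauss(0, 1), rng.gauss(0, 1)
        z = abs(rng.gauss(0, 1))          # Z with Z^2 ~ chi-squared(1)
        t1, t2 = x1 / z, x2 / z
        if abs(t1) < 0.5:
            near.append(t2)
        elif abs(t1) > 2.0:
            far.append(t2)
    def iqr(v):
        q = statistics.quantiles(v, n=4)
        return q[2] - q[0]
    return iqr(near), iqr(far)

near, far = conditional_iqrs()
print(near, far)  # the spread of T2 is visibly larger when |T1| is large
```

The interquartile range is used rather than the standard deviation because with $\nu=1$ the marginals are Cauchy, so moments do not exist.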
48,367
Deriving the Maximum Likelihood Estimation (MLE) of a parameter for an Inverse Gaussian Distribution
The full derivation of the MLEs for IID data from an inverse Gaussian distribution can be found in the answer to this related question. In your case you have added an additional layer of complication by having observable data values $t_i = u_i - x_i - \tau$ that depend on some conditioning covariates and an additional parameter. From this formulation, your sampling density is: $$f(\mathbf{u} | \mathbf{x}, \tau, \mu, \lambda) = \prod_{i=1}^n \Big( \frac{\lambda}{2 \pi (u_i-x_i-\tau)^3} \Big)^{1/2} \exp \Big( - \sum_{i=1}^n \frac{\lambda (u_i-x_i-\tau - \mu)^2}{2 \mu^2 (u_i-x_i-\tau)} \Big)$$ over the support $\mathbf{u} \geqslant \mathbf{x} + \tau \mathbf{1}$. The log-likelihood function is defined over $\tau \leqslant \min (u_i-x_i)$ and is given over this range by: $$\ell_{\mathbf{u},\mathbf{x}}(\tau, \mu, \lambda) = \text{const} + \frac{n}{2} \ln (\lambda) - \frac{3}{2} \sum_{i=1}^n \ln (u_i-x_i-\tau) - \frac{\lambda}{2 \mu^2 } \sum_{i=1}^n \frac{(u_i-x_i-\tau - \mu)^2}{(u_i-x_i-\tau)}.$$ Finding the MLE: To facilitate our analysis we define the functions: $$H_k(\tau) \equiv \frac{1}{n} \sum_{i=1}^n (u_i-x_i-\tau)^k.$$ We then have: $$\begin{equation} \begin{aligned} \frac{\partial \ell_{\mathbf{u},\mathbf{x}}}{\partial \tau}(\tau, \mu, \lambda) &= \frac{3}{2} \sum_{i=1}^n \frac{1}{u_i-x_i-\tau} + \frac{\lambda}{2 \mu^2 } \sum_{i=1}^n \frac{(u_i - x_i - \tau + \mu)(u_i-x_i-\tau - \mu)}{(u_i-x_i-\tau)^2} \\[10pt] &= \frac{3}{2} \sum_{i=1}^n \frac{1}{u_i-x_i-\tau} + \frac{\lambda}{2 \mu^2 } \sum_{i=1}^n \frac{(u_i - x_i - \tau)^2 - \mu^2}{(u_i-x_i-\tau)^2} \\[10pt] &= \frac{3}{2} \sum_{i=1}^n \frac{1}{u_i-x_i-\tau} + \frac{\lambda}{2 \mu^2 } \Big[ n - \mu^2 \sum_{i=1}^n \frac{1}{(u_i-x_i-\tau)^2} \Big] \\[10pt] &= \frac{3n}{2} H_{-1}(\tau) + \frac{n \lambda}{2 \mu^2 } \Big[ 1 - \mu^2 H_{-2}(\tau) \Big]. \\[10pt] \end{aligned} \end{equation}$$ Taking $\tau$ to be fixed for the moment, the MLEs of the inverse Gaussian distribution are: $$\hat{\mu}(\tau) = H_1(\tau) \quad \quad \quad \frac{1}{\hat{\lambda}(\tau)} = H_{-1}(\tau) - \frac{1}{H_1(\tau)}.$$ Substituting these functions yields: $$\frac{\partial \ell_{\mathbf{u},\mathbf{x}}}{\partial \tau}(\tau, \hat{\mu}(\tau), \hat{\lambda}(\tau)) = \frac{3n}{2} H_{-1}(\tau) + \frac{n}{2 H_1(\tau)^2 } \cdot \frac{1 - H_1(\tau)^2 H_{-2}(\tau)}{H_{-1}(\tau) - H_1(\tau)^{-1}}.$$ Setting this partial derivative to zero, and multiplying through by the positive quantity $\tfrac{2}{n} H_1(\tau) \big[ H_1(\tau) H_{-1}(\tau) - 1 \big]$, yields the critical point equation: $$1 + 3 H_{-1}(\tau)^2 H_1(\tau)^2 - 3 H_{-1}(\tau) H_1(\tau) - H_1(\tau)^2 H_{-2}(\tau) = 0.$$ This critical point equation will need to be solved numerically, as there is no simple expression for the solution.
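The derivative algebra can be validated numerically via the envelope theorem: at the plug-in values $\hat\mu(\tau), \hat\lambda(\tau)$, the $\tau$-derivative of the profile log-likelihood equals $\partial \ell / \partial \tau$, which works out to $\frac{3n}{2} H_{-1}(\tau) + \frac{n \lambda}{2 \mu^2}\big[1 - \mu^2 H_{-2}(\tau)\big]$. The sketch below checks that formula against a central finite difference; the data values $d_i = u_i - x_i$ are entirely made up:

```python
import math

def H(d, tau, k):
    """H_k(tau) = mean of (d_i - tau)^k."""
    return sum((di - tau) ** k for di in d) / len(d)

def score_tau(d, tau):
    """dl/dtau evaluated at (tau, mu_hat(tau), lambda_hat(tau))."""
    n = len(d)
    H1, Hm1, Hm2 = H(d, tau, 1), H(d, tau, -1), H(d, tau, -2)
    mu = H1                                 # mu_hat(tau)
    lam = 1.0 / (Hm1 - 1.0 / H1)            # lambda_hat(tau)
    return 1.5 * n * Hm1 + (n * lam / (2 * mu ** 2)) * (1 - mu ** 2 * Hm2)

def profile_ll(d, tau):
    """Profile log-likelihood in tau, up to an additive constant."""
    n = len(d)
    lam = 1.0 / (H(d, tau, -1) - 1.0 / H(d, tau, 1))
    return 0.5 * n * math.log(lam) - 1.5 * sum(math.log(di - tau) for di in d)

d = [1.2, 0.9, 1.5, 2.1, 1.1, 0.8, 1.7, 1.3, 2.4, 1.0]   # hypothetical d_i
h = 1e-5
for tau in (-1.0, 0.0, 0.5):
    numeric = (profile_ll(d, tau + h) - profile_ll(d, tau - h)) / (2 * h)
    print(tau, score_tau(d, tau), numeric)  # the two columns should agree
```

Note that $\hat\lambda(\tau) > 0$ is guaranteed for non-constant data, since $H_{-1}(\tau) > 1/H_1(\tau)$ by the AM-HM inequality.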
48,368
Higher Order of Vectorization in Backpropagation in Neural Network
You're right that that doesn't make sense as the Jacobian. Furthermore, if multiplying jacobians was really how autodiff worked, any pointwise function applied on a vector of length $n$ would result in a huge $n \times n$ Jacobian being created. This is not what happens in any competent autodiff implementation. In reality, it's not necessary to compute the jacobian in order to perform backpropagation. All that is needed is the "vector jacobian product", or VJP. If you have a function $f : \mathbb{R}^n \rightarrow \mathbb{R}^m$, then $\text{VJP} : \mathbb{R}^m \times \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a function which computes $\text{VJP}(g,x) = J_f(x)^T g$, where $g$ is the incoming gradient vector $\frac{\partial \mathcal{L}}{\partial f}$ and $J_f(x)$ is the jacobian of $f$. Technically this is a JVP rather than a VJP, but that's just a matter of convention. The key point is that although one way to implement the VJP is explicitly computing the jacobian and then performing this vector-matrix product, if you are able to compute the VJP without doing that, it is also perfectly fine. For example, the VJP for $\sin(x)$ is just $\text{VJP}(g,x) = g \circ \cos(x)$. The VJP of $f(W, x) = Wx$ with respect to $x$ is simply $\text{VJP}(g, W, x) = W^Tg$, and the VJP with respect to $W$ is $\text{VJP}(g, W, x) = gx^T$. Returning to your question: the expression in 3.30 is actually just computing $\text{VJP}(g, W, x) = gx^T$, with all the terms on the RHS except for the right-most being part of $g$, and the last term being $x^T$.
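To make this concrete, here is a small sketch checking these example VJPs against an explicitly formed jacobian-transpose product. The function names are illustrative only, not from any particular autodiff library:

```python
import math

def vjp_sin(g, x):
    """VJP of elementwise sin: J = diag(cos(x)), so J^T g = g * cos(x)."""
    return [gi * math.cos(xi) for gi, xi in zip(g, x)]

def vjp_matvec_wrt_x(g, W):
    """VJP of f(W, x) = Wx with respect to x: W^T g."""
    rows, cols = len(W), len(W[0])
    return [sum(W[i][j] * g[i] for i in range(rows)) for j in range(cols)]

def vjp_matvec_wrt_W(g, x):
    """VJP of f(W, x) = Wx with respect to W: the outer product g x^T."""
    return [[gi * xj for xj in x] for gi in g]

# Check vjp_sin against the explicit product J^T g with J formed in full
x = [0.3, -1.2, 2.5]
g = [1.0, 0.5, -2.0]
J = [[math.cos(x[j]) if i == j else 0.0 for j in range(3)] for i in range(3)]
explicit = [sum(J[i][j] * g[i] for i in range(3)) for j in range(3)]
print(vjp_sin(g, x), explicit)  # identical up to floating point
```

Note that the shortcut never materialises the $n \times n$ diagonal Jacobian, which is the whole point.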
48,369
Higher Order of Vectorization in Backpropagation in Neural Network
$\frac{\partial \mathcal{L}}{\partial W^{[2]}}$ must be 2x3, just like the dimensions of $W^{[2]}$. I suggest you use the backprop formulas (and notation) given in Nielsen's book; when the network gets bigger they are easier to follow. According to that: \begin{align*} \delta^3 &=a^{[3]}-y \\ \delta^2 &= ((W^{[3]^{T}} (a^{[3]}-y)) \odot g'(z^{[2]})) \\ \frac{\partial \mathcal{L}}{\partial w^{[2]}_{jk}} &= a^{[1]}_k \cdot \delta_j^2 \end{align*} And by going one more step: \begin{align*} \delta^1 &= ((W^{[2]^{T}} \delta^2 ) \odot g'(z^{[1]})) \\ \frac{\partial \mathcal{L}}{\partial w^{[1]}_{jk}} &= x^{(i)}_k \cdot \delta_j^1 \end{align*} where $ \delta^1 \in \mathbb{R}^{3\times 1}$. I hope it will be useful, at least for other people.
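These formulas can be sanity-checked numerically on a tiny network. The sketch below is an analogous smaller example (a 2-3-2 network with a sigmoid hidden layer and a linear output with squared-error loss, so the output delta takes the form $a - y$ as above); all parameter values are made up, and one backprop gradient entry is compared against a finite difference:

```python
import math

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(W1, b1, W2, b2, x):
    z1 = [sum(W1[j][k] * x[k] for k in range(len(x))) + b1[j] for j in range(len(W1))]
    a1 = [sig(z) for z in z1]
    a2 = [sum(W2[j][k] * a1[k] for k in range(len(a1))) + b2[j] for j in range(len(W2))]
    return z1, a1, a2

def loss(W1, b1, W2, b2, x, y):
    _, _, a2 = forward(W1, b1, W2, b2, x)
    return 0.5 * sum((a - yi) ** 2 for a, yi in zip(a2, y))

# made-up parameters and data
W1 = [[0.1, -0.2], [0.4, 0.3], [-0.5, 0.2]]
b1 = [0.05, -0.1, 0.2]
W2 = [[0.3, -0.1, 0.2], [0.6, 0.1, -0.4]]
b2 = [0.0, 0.1]
x, y = [0.7, -1.3], [1.0, 0.0]

z1, a1, a2 = forward(W1, b1, W2, b2, x)
delta_out = [a - yi for a, yi in zip(a2, y)]                    # output delta: a - y
back = [sum(W2[j][k] * delta_out[j] for j in range(2)) for k in range(3)]
delta1 = [back[k] * a1[k] * (1 - a1[k]) for k in range(3)]      # sigma'(z) = a(1-a)
grad_W1_00 = delta1[0] * x[0]                                   # dL/dw_{00} of layer 1

# central finite difference on the same entry of W1
h = 1e-6
W1p = [row[:] for row in W1]; W1p[0][0] += h
W1m = [row[:] for row in W1]; W1m[0][0] -= h
numeric = (loss(W1p, b1, W2, b2, x, y) - loss(W1m, b1, W2, b2, x, y)) / (2 * h)
print(grad_W1_00, numeric)  # the two values should agree closely
```

Running this, the delta-based gradient and the finite difference agree to many decimal places, which is the usual "gradient check" for backprop implementations.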
48,370
Higher Order of Vectorization in Backpropagation in Neural Network
I think this formula falls flat when trying to find the gradients, at least for the explanations given currently. Take a look at the last and the second-to-last layer, for example: the delta is a [2x1]. So, to get the gradients of the weights for this layer we would have to multiply a [1x1] matrix by a [2x1] matrix. How is that possible? Am I missing something here? I have tried the rest of the layers and the story is the same. Simply put, the delta is supposed to have the same size as the bias in each layer, since each delta corresponds to a hidden neuron/node just like the bias, so how can we get a derivative matrix from a product of the input matrix from each previous hidden layer and the input matrix of that layer? Something doesn't add up.
48,371
Correct understanding of De Finetti`s representation theorem
What is the proper interpretation of the parameter? The natural interpretation of the parameter $\Theta$ comes from the law-of-large numbers, which you have stated in your question. This says that the following equivalence holds almost surely: $$\Theta = \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n X_i.$$ From this equation we see that $\Theta$ is the limiting proportion of positive outcomes ($X_i=1$) in the observable sequence.$^\dagger$ Since you are familiar with frequentist thinking, you will recognise this as the frequentist definition of the probability of a positive outcome in the sequence. In fact, there is a good argument to be made that de Finetti's theorem is a statement of the equivalence that is used in frequentist theory as the definition of probability. To give an applied example of this interpretation, suppose that the outcomes $X_i$ are indicators for a sequence of coin flips, indicating an outcome of heads. In this case the observable sequence $X_1,X_2,X_3,...$ is an exchangeable sequence of coin flips and the parameter $\Theta$ is the long-run limiting proportion of heads in the sequence. In your question you also ask if the parameter is "just a construction in our heads". Well, it is no more of a construction than the notion that there is an infinite sequence of outcomes in the first place. If we are willing to assume that there is an infinite sequence of observable outcomes (which is itself arguably a mere hypothetical construction) then that sequence has a limiting proportion of positive outcomes$^\dagger$, and that limiting proportion is the parameter. $^\dagger$ There is a slight technicality here, since the limiting proportion is a Cesaro limit that does not exist for all possible sequences. The parameter can be defined as the Banach limit, which always exists; for discussion, see O'Neill (2009). Is the parameter an unknown constant? In classical frequentist analysis, the parameters are treated as "unknown constants". 
There is no such thing as an unknown constant in Bayesian statistics, since all unknown quantities are treated as random variables with a prior distribution. Hence, if you are going to treat $\Theta$ as an unknown constant, you are using frequentist analysis, not Bayesian analysis. Why isn't the parameter always the same? As shown above, the parameter $\Theta$ represents the limiting proportion of positive outcomes in the sequence. Clearly, one can imagine that there are different sequences of exchangeable outcomes where the limiting proportions of positive outcomes are different, and hence, the parameter would not be the same across different sequences. For example, a particular exchangeable sequence of coin-tosses might have a limiting proportion of heads of $\Theta = 0.5$, meaning that we are dealing with a "fair" coin. Alternatively, we might have an exchangeable sequence of coin-tosses with a limiting proportion of heads of $\Theta = 0.51$, meaning that we are dealing with coin that is "biased" towards heads. What does $\mu_\Theta$ stand for? This is the prior probability measure for the parameter (equivalent to its prior distribution). In Bayesian analysis, this measure represents our beliefs about the unknown parameter prior to seeing the data. After we see the data we then update this to a posterior belief, represented by the posterior probability measure (equivalently a posterior distribution).
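The "different parameter for different sequences" point is easy to see in simulation: generate exchangeable binary sequences by first drawing $\Theta$ from a prior and then flipping a coin with that probability of heads, so each sequence's running proportion settles near its own drawn $\Theta$. A sketch (the Beta(2,2) prior is an arbitrary choice):

```python
import random

def limiting_proportion(seed, n=100_000):
    """Draw Theta ~ Beta(2,2), then an exchangeable Bernoulli(Theta) sequence."""
    rng = random.Random(seed)
    theta = rng.betavariate(2, 2)
    heads = sum(rng.random() < theta for _ in range(n))
    return theta, heads / n

for seed in (1, 2, 3):
    theta, prop = limiting_proportion(seed)
    print(theta, prop)  # the proportion tracks each sequence's own theta
```

Each run produces a different $\Theta$ and a matching long-run proportion, which is exactly the de Finetti picture: exchangeable outcomes behave as IID Bernoulli draws conditional on a parameter that varies across sequences according to the prior.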
48,372
L1 vs L2 stability?
This is generally called "sensitivity analysis" or "stability". An excellent paper deriving bounds based on this is Stability and Generalization. The bounds of course aren't necessarily tight! If you look at Definition 19 and the follow-up Theorems and Lemmas you can see that if something is $\sigma$-admissible then there is a bound that is linear in $\sigma$ generally. For $L_1$ it's fairly simple to show that it is $1$-admissible (indeed, they state this for the $\epsilon$-insensitive $L_1$ loss for SVM - example 1, bottom of page 17 in the pdf (515 is the printed page number)), while $L_2$ requires the space $\mathcal{Y}$ to be bounded - if one does the math this is basically because one can derive $$\sigma \geq \frac{|y_1^2 - y_2^2 - 2y'(y_1-y_2)|}{|y_1 - y_2|} = |y_1+y_2 - 2y'|.$$ Thus, generally, one should expect the $L_1$ to actually have a better bound here. I don't think this completely addresses your question but it hopefully does give you a starting point for a more formal approach to analyzing your specifics.
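The contrast can be illustrated numerically: the absolute loss changes by at most $|y_1 - y_2|$ when the prediction changes ($1$-admissibility), while the corresponding ratio for the squared loss, $|y_1 + y_2 - 2y'|$, grows without bound unless $\mathcal{Y}$ is bounded. A quick sketch with made-up random values:

```python
import random

def ratio(loss, y1, y2, ytrue):
    """|loss(y1) - loss(y2)| / |y1 - y2|: the admissibility ratio."""
    return abs(loss(y1, ytrue) - loss(y2, ytrue)) / abs(y1 - y2)

l1 = lambda y, t: abs(y - t)       # absolute loss
l2 = lambda y, t: (y - t) ** 2     # squared loss

rng = random.Random(0)
l1_ratios, l2_ratios = [], []
for scale in (1, 10, 100):         # widen the domain of Y step by step
    for _ in range(1000):
        y1, y2, t = (rng.uniform(-scale, scale) for _ in range(3))
        if abs(y1 - y2) < 1e-9:
            continue
        l1_ratios.append(ratio(l1, y1, y2, t))
        l2_ratios.append(ratio(l2, y1, y2, t))

print(max(l1_ratios), max(l2_ratios))  # L1 stays <= 1; L2 grows with the scale
```

This matches the algebra above: for $L_2$ the ratio equals $|y_1 + y_2 - 2y'|$ exactly, so it scales with the size of the domain.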
48,373
L1 vs L2 stability?
L1 norm is based on minimising the least absolute deviation, with absolute deviation calculated as: $$ AD = \sum^n_{i=1} |y_i-f(x_i)| $$ L2 norm is based on least squared deviation, with squared deviation calculated as: $$ LSD = \sum^n_{i=1} (y_i-f(x_i))^2 $$ So what is the difference for small vs large nudges? From the perspective of $AD$, if we think of the nudge in $x_i$ in terms of its proportion of the mean difference, it will account for a proportion less than 1 if it is smaller than the mean difference, and greater than 1 if the nudge is large. The amount it influences the sum depends linearly on the magnitude of the nudge. From the perspective of $LSD$, if we think of the nudge in $x_i$ in terms of its proportion of the mean variance, it will be less than 1 if it is smaller than the mean variance, and greater than 1 if the nudge is large. The amount it influences the sum depends on the squared magnitude of the nudge. Note that squaring a number less than 1 makes it smaller, which means L2 de-emphasises any small nudges. I don't think the figure covers enough range: if the variance of the nudge is larger than the mean variance, then L2 norm errors will in fact grow more quickly than L1 errors. There is a range of nudges over which L2 is expected to be more stable (where the variance of the nudge is less than the mean variance) and a region where it is less stable (where the nudge is large). In between there is a region where the two will be more comparable in terms of error, based on the interaction of $AD$ and $LSD$. This rapid escalation as the nudge grows is covered by @MotiN in his answer: if the bounds are loose, L2 can weaken dramatically as the nudges grow.
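The linear-vs-quadratic growth described above can be sketched numerically (the residuals here are made up for illustration): a nudge of size $\delta$ to one residual $r$ changes $AD$ by exactly $\delta$, but changes $LSD$ by $\delta^2 + 2r\delta$, so L2 is calmer for small nudges and escalates for large ones.

```python
# Sketch with made-up residuals: how a nudge of size delta to one residual
# changes the L1 objective (linearly) vs the L2 objective (quadratically).
residuals = [0.1, -0.3, 0.8, -0.1]

def ad(res):   # least absolute deviation objective
    return sum(abs(r) for r in res)

def lsd(res):  # least squared deviation objective
    return sum(r * r for r in res)

results = {}
for delta in (0.01, 0.1, 1.0, 10.0):
    nudged = [residuals[0] + delta] + residuals[1:]
    results[delta] = (ad(nudged) - ad(residuals), lsd(nudged) - lsd(residuals))
    print(delta, results[delta])
# Small delta: the L2 change is below the L1 change; large delta: it dominates.
```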
48,374
How to forecast integer time series in R?
When you are looking for suitable packages, use the CRAN task views. In this case, the time series task view contains the following line: "Count time series models are handled in the tscount and acp packages. ZIM provides for Zero-Inflated Models for count time series. tsintermittent implements various models for analysing and forecasting intermittent demand time series." Then see what models are implemented, and check the references. The tscount package has a nice vignette on analysing count time series using GLMs. As to whether they are more efficient, that depends on the data and what you mean by efficiency. If a count time series model is a good fit, then it will be more efficient (in the statistical sense) to use it. It may not be more computationally efficient, depending on how it is coded. The comments suggested you mean accurate rather than efficient. The only way to answer that is to try it and see.
48,375
what is the difference between binary cross entropy and categorical cross entropy? [duplicate]
I would like to expand on ARMAN's answer. Without getting into formulas, the biggest difference is that categorical cross entropy is based on the assumption that exactly 1 class is correct out of all possible ones (so the output should be something like [0,0,0,1,0] if the rating is 4), while binary cross entropy works on each individual output separately, implying that each case can belong to multiple classes (for instance, if predicting which items a customer will buy, it is possible that they will buy multiple ones; i.e. an output like [0,1,0,1,0] is valid if you are using binary_crossentropy). As ARMAN pointed out, if you only have 2 classes, a 2-output categorical_crossentropy is equivalent to a 1-output binary_crossentropy. In your specific case you should be using categorical_crossentropy, since each review has exactly 1 rating. Binary_crossentropy may give you better-looking scores, but the outputs are not being evaluated correctly. I would also recommend trying an MSE loss, since your data is ordinal (4 stars are closer to 5 than to 1).
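As a quick sanity check (plain Python rather than Keras; the probabilities are invented), here is how the two losses score a single 5-star review whose true rating is 4:

```python
import math

target = [0, 0, 0, 1, 0]                # one-hot: true rating is 4
probs = [0.05, 0.05, 0.10, 0.70, 0.10]  # softmax-style output, sums to 1

# Categorical cross entropy: only the probability of the single true class matters.
cce = -sum(t * math.log(p) for t, p in zip(target, probs))

# Binary cross entropy: every output is scored independently, as if each class
# were its own yes/no problem (averaged over the outputs, Keras-style).
bce = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
           for t, p in zip(target, probs)) / len(target)

print(cce)  # -log(0.70), roughly 0.357
print(bce)  # roughly 0.134
```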
48,376
Meaning of variance term in confidence interval for Multiple Linear Regression
MSE measures the variance of the error. To be clear -- that's the variance of the model errors, not the variance of the data. You can see this by looking at $SSE = \sum_{i=1}^n (y_i - f(x_i))^2$. $SSE$ gives the sum of squared differences between the observed and fitted values. Linear regression models are fit by minimizing $MSE$. From the Gauss-Markov theorem, we know that minimizing $MSE$ (i.e. using the "ordinary least squares" estimator) gives the best linear unbiased estimator of the coefficients, where "best" means the estimator with the lowest variance. So the algorithm used to calculate an OLS regression model depends on the use of $MSE$, which estimates the variance of the model error (not the variance of the data), and gives the best linear unbiased estimator of the model coefficients (assuming the assumptions of regression are met: uncorrelated errors with expectation 0 and equal variance). So there isn't a straightforward way to swap in a different estimate of variance, and it might not be what you're looking for anyway (again, variance of data vs. variance of model error). Furthermore, using a different approach will result in estimated coefficients no better than those obtained through OLS regression (based on MSE).
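A small simulation sketch (hypothetical data, standard library only) of the distinction: the regression MSE recovers the error variance, while the variance of $y$ itself is much larger because it includes the signal.

```python
# Sketch: MSE estimates Var(error), not Var(y).
import random

random.seed(1)
n = 200
x = [random.uniform(0, 10) for _ in range(n)]
y = [2.0 + 3.0 * xi + random.gauss(0, 1.0) for xi in x]  # true error variance = 1

# OLS by hand (simple linear regression)
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar

sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
mse = sse / (n - 2)                                  # close to 1: the error variance
var_y = sum((yi - ybar) ** 2 for yi in y) / (n - 1)  # much larger: includes the signal

print(mse, var_y)
```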
48,377
Meaning of variance term in confidence interval for Multiple Linear Regression
The value $\sigma^2 = \mathbb{V}(\varepsilon_i)$ is the error variance in the regression model, and the variance result in your post is a consequence of the underlying variance of the OLS coefficient estimator: $$\mathbb{V}(\hat{\boldsymbol{\beta}}) = \sigma^2 (\mathbf{x}^\text{T} \mathbf{x})^{-1}.$$ Since $\hat{y}(\mathbf{x}_0) = \mathbf{x}_0^\text{T} \hat{\boldsymbol{\beta}}$ you can use the ordinary rules for variances of random vectors to obtain $$\mathbb{V}(\hat{y}(\mathbf{x}_0)) = \mathbb{V}(\mathbf{x}_0^\text{T} \hat{\boldsymbol{\beta}}) = \mathbf{x}_0^\text{T} \mathbb{V}(\hat{\boldsymbol{\beta}}) \mathbf{x}_0 = \sigma^2 \cdot \mathbf{x}_0^\text{T} (\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}_0.$$ The formula for the confidence interval then follows from the following pivotal quantity: $$\frac{\hat{y}(\mathbf{x}_0) - \mu_0}{\hat{\sigma} \sqrt{\mathbf{x}_0^\text{T} (\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}_0}} \sim \text{Student's T}(df_{Res}),$$ where $\mu_0 = \mathbb{E}(y(\mathbf{x}_0))$ and $\hat{\sigma}$ is the standard bias-corrected MLE in the linear regression. Now, there is no particular reason you could not substitute this with a different estimator for $\sigma$ if you want to, but bear in mind that it could change the distribution of the pivotal quantity you are using to form your confidence interval. So the thing you would need to do if you want to substitute a different estimator is to see how this would affect the distribution of the newly created quantity. Many variance estimators have an asymptotic chi-squared distribution (via the CLT) so you might end up with the same distribution, and hence the same form for your confidence interval, but you should still check this.
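To illustrate in the one-predictor case (a stdlib-only sketch with made-up data; the critical value $t_{0.975,48} \approx 2.011$ is hard-coded rather than pulled from a stats library), the quadratic form $\mathbf{x}_0^\text{T} (\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}_0$ reduces to $1/n + (x_0-\bar{x})^2/S_{xx}$:

```python
import math
import random

random.seed(2)
n = 50
x = [random.uniform(0, 10) for _ in range(n)]
y = [1.0 + 0.5 * xi + random.gauss(0, 0.8) for xi in x]

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar

# Bias-corrected estimate of the error variance sigma^2
sigma2_hat = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)

x0 = 5.0
# x0' (X'X)^{-1} x0 specialises to the "leverage" of the new point:
leverage = 1 / n + (x0 - xbar) ** 2 / sxx
se = math.sqrt(sigma2_hat * leverage)

t_crit = 2.0106  # approx. t_{0.975, 48}, hard-coded to stay stdlib-only
y0_hat = b0 + b1 * x0
print(y0_hat - t_crit * se, y0_hat + t_crit * se)  # CI for the mean response at x0
```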
48,378
Is the maximum bound of Euclidean distance between two probability distributions equal to $\sqrt{2}$?
$d_{xy}^2 = \sum{(x-y)^2} = \sum x^2 + \sum y^2 - 2\sum xy$. Given that in probability vectors all values are nonnegative, $d^2$ is maximal when the last term is zero. Then $d^2 = \sum x^2 + \sum y^2$. In a probability vector all values (which are between 0 and 1) sum to 1: $\sum x = \sum y = 1$. For such a vector, the theoretical maximum of $\sum v^2$ is attained when all its entries are 0 except one which is 1, and this maximum is $1 = \sum v$. Thus the theoretically maximal squared distance between two such vectors is 2: it occurs when $\sum x^2 = \sum x$ and $\sum y^2 = \sum y$. It also follows from the above description that $\sum xy$ can then very easily be zero (since each vector has just a single nonzero element, which need only fall on different entries).
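A quick numeric check of the bound (a sketch, not a proof): random probability vectors stay below $\sqrt{2}$, and two disjoint point masses attain it exactly.

```python
import math
import random

random.seed(3)

def rand_prob_vec(k):
    """Random probability vector: nonnegative entries summing to 1."""
    w = [random.random() for _ in range(k)]
    s = sum(w)
    return [wi / s for wi in w]

def dist(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

max_seen = max(dist(rand_prob_vec(5), rand_prob_vec(5)) for _ in range(5_000))
attained = dist([1, 0, 0, 0, 0], [0, 1, 0, 0, 0])

print(max_seen)   # < sqrt(2)
print(attained)   # exactly sqrt(2)
```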
48,379
Is the maximum bound of Euclidean distance between two probability distributions equal to $\sqrt{2}$?
Yes, in the 2-category case, $\sqrt{2}$ is the maximum, achieved at both $A=(1,0), B=(0,1)$ and $A=(0,1), B=(1,0)$. I'll supply a heuristic argument which should be straightforward to rigorously demonstrate. Define $d_1 = A_1 - B_1$ and $d_2 = A_2 - B_2$. Then Euclidean distance can be thought of as circular contours around the origin. Furthermore, we know the properties of probability distributions dictate that $A_1 + A_2 = 1$ and $A_i \in [0,1]$ (same for $B$). This implies that $d_1 = -d_2$, meaning our solution space is the line from the upper left of the contour plot at $(-1,1)$ to the lower right at $(1,-1)$. Given that Euclidean distance contours are concentric circles w.r.t. $d_1$ and $d_2$, the maxima are the points furthest away from the origin, $d_1 = 1, d_2 = -1$ and $d_1 = -1, d_2 = 1$. It follows from the previous constraints on $A$ and $B$ that these points occur uniquely at $A=(1,0), B=(0,1)$ and $A=(0,1), B=(1,0)$ respectively. Edit, since OP is looking for a proof. I'm not sure of a well-known result, but a proof of the 2-category case follows. Using $d_1=-d_2$, we can rewrite the Euclidean distance as $e = \sqrt{d_1^2 + (-d_1)^2} = \sqrt{2d_1^2}$. Differentiating, we find $\frac{\partial e}{\partial d_1} = \frac{2d_1}{\sqrt{2d_1^2}} = \sqrt{2}\,\operatorname{sign}(d_1)$ for $d_1 \neq 0$. This implies that $e$ is maximized at maximal $|d_1|$. To find maximal $|d_1|$, consider that it's constrained by the line $A_1=1-A_2$ in the region $A_1 \in [0,1]$. The partial derivatives of $d_1$ are $\frac{\partial d_1}{\partial A_1} = 1$ and $\frac{\partial d_1}{\partial B_1} = -1$, so there are no stationary points, implying the minima and maxima occur on the boundaries. The first boundary, along the constrained line, is $A_1 = 0$ and $B_1 = 1$ with value $d_1 =-1$, and the other boundary point is $A_1 = 1$ and $B_1=0$ with a value of $d_1 = 1$. These are the maximum and minimum of $d_1$, both of which maximize $|d_1|$, uniquely determine $d_2$, and as shown above maximize $e$.
48,380
Do I need to discard 90% of experiments so that the sample is independent?
You definitely do not need to discard 90% of your observations. The passage talks about sampling from a (finite) population. If your population had 10,000 units in it, the passage recommends you draw a sample of size less than 1,000. My intuition on the reason for this is that doing so yields properties of the random sample similar to those you would get when drawing from an infinite population of independent observations (or drawing with replacement from a finite population). If your sample is a larger percentage of the population, dependence among the observations might be induced in the following way: imagine you had a population of 5 units and are sampling without replacement. If you've drawn two units randomly and are preparing to draw your third unit, the next draw depends on which of the other two units you selected; it is not independent of the other two. If you knew about your population and knew whom you had already drawn, you could predict characteristics of whom you draw next based on whom you have drawn before. This is an independence violation. Many of our statistical methods depend on drawing from an infinite population or drawing with replacement from a finite population; drawing without replacement from a finite population induces the dependence I described above. It would appear that drawing a small enough sample (i.e., 10% of the population) without replacement approximates drawing a sample with replacement from the same population in terms of its statistical properties. This is probably why the authors made this recommendation. This recommendation (probably) does not apply to your case. If you are "sampling" from a large enough population (i.e., all potential users of the website), then you will surely draw fewer than 10% of that population. The data you have collected in your sample should not suffer from a violation of independence due to the problem I described; if there is an independence violation, it is more likely related to the second clause in the passage (i.e., due to the design of your study).
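The intuition above can be checked with a small simulation (hypothetical numbers, standard library only): under sampling without replacement, the variance of the sample mean shrinks by the finite population correction $(N-k)/(N-1)$ relative to i.i.d. sampling, which is negligible at a 10% sample but drastic at 90%.

```python
# Sketch: dependence induced by sampling without replacement shows up as a
# finite population correction to the variance of the sample mean.
import random
import statistics

random.seed(4)
population = list(range(1000))
N = len(population)
pop_var = statistics.pvariance(population)

ratios = {}
for k in (100, 900):  # 10% vs 90% of the population
    means = [statistics.mean(random.sample(population, k)) for _ in range(1000)]
    emp_var = statistics.pvariance(means)  # observed Var(sample mean)
    iid_var = pop_var / k                  # what i.i.d. sampling would give
    ratios[k] = emp_var / iid_var
    print(k, round(ratios[k], 3), round((N - k) / (N - 1), 3))
# The observed ratio tracks the correction factor (N - k) / (N - 1):
# roughly 0.9 for the 10% sample, roughly 0.1 for the 90% sample.
```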
48,381
How should Type II SS be calculated in a mixed model?
The difference between the Type II tests from car::Anova and the anova method for lmerTest is due to how continuous explanatory variables are handled. The first passage in the Details section of help(Anova) describes how Anova handles them: The designations "type-II" and "type-III" are borrowed from SAS, but the definitions used here do not correspond precisely to those employed by SAS. Type-II tests are calculated according to the principle of marginality, testing each term after all others, except ignoring the term's higher-order relatives; so-called type-III tests violate marginality, testing each term in the model after all of the others. This definition of Type-II tests corresponds to the tests produced by SAS for analysis-of-variance models, where all of the predictors are factors, but not more generally (i.e., when there are quantitative predictors). Be very careful in formulating the model for type-III tests, or the hypotheses tested will not make sense. So while lmerTest implements the SAS definition of Type II as described in this paper, car::Anova does something different. The SAS definition is described here; it means that a test for a term is marginal to all terms that the term is not contained in. The definition of containment is given in the SAS document as: Given two effects F1 and F2, F1 is said to be contained in F2 provided that the following two conditions are met: Both effects involve the same continuous variables (if any). F2 has more CLASS [factors in R] variables than F1 does, and if F1 has CLASS variables, they all appear in F2. So in your model, B:C is not contained in A:B:C because A is continuous, and the type II hypothesis of B:C is therefore marginal to A:B:C.
car::Anova does not distinguish between factors and covariates in this context, so if A were a factor, car::Anova and the lmerTest anova method agree:

d$A2 <- cut(d$A, quantile(d$A), include.lowest = TRUE)
m2 <- lmer(Y ~ A2 * B * C + (1|ID), data=d)
anova(m2, type=2, ddf="Ken")

Type II Analysis of Variance Table with Kenward-Roger's method
        Sum Sq Mean Sq NumDF DenDF F value    Pr(>F)
A2      17.210   5.737     3     9  1.7035   0.23533
B      136.641 136.641     1    99 40.5756 5.987e-09 ***
C       14.128   2.826     5    99  0.8391   0.52512
A2:B    11.137   3.712     3    99  1.1024   0.35193
A2:C    38.816   2.588    15    99  0.7684   0.70879
B:C     34.821   6.964     5    99  2.0680   0.07575 .
A2:B:C  50.735   3.382    15    99  1.0044   0.45709
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Anova(m2, type=2, test="F")

Analysis of Deviance Table (Type II Wald F tests with Kenward-Roger df)
Response: Y
             F Df Df.res    Pr(>F)
A2      1.7035  3      9   0.23533
B      40.5756  1     99 5.987e-09 ***
C       0.8391  5     99   0.52512
A2:B    1.1024  3     99   0.35193
A2:C    0.7684 15     99   0.70879
B:C     2.0680  5     99   0.07575 .
A2:B:C  1.0044 15     99   0.45709
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The lmerTest::show_tests() function can be used to see exactly which linear function of the model coefficients makes up the tested hypothesis. For example, we have that

show_tests(anova(m1, type=2))$`B:C`

      (Intercept) A B1 C1 C2 C3 C4 C5 A:B1 A:C1 A:C2 A:C3 A:C4 A:C5 B1:C1 B1:C2 B1:C3
B1:C1           0 0  0  0  0  0  0  0    0    0    0    0    0    0     1     0     0
B1:C2           0 0  0  0  0  0  0  0    0    0    0    0    0    0     0     1     0
B1:C3           0 0  0  0  0  0  0  0    0    0    0    0    0    0     0     0     1
B1:C4           0 0  0  0  0  0  0  0    0    0    0    0    0    0     0     0     0
B1:C5           0 0  0  0  0  0  0  0    0    0    0    0    0    0     0     0     0
      B1:C4 B1:C5 A:B1:C1 A:B1:C2 A:B1:C3 A:B1:C4 A:B1:C5
B1:C1     0     0       0       0       0       0       0
B1:C2     0     0       0       0       0       0       0
B1:C3     0     0       0       0       0       0       0
B1:C4     1     0       0       0       0       0       0
B1:C5     0     1       0       0       0       0       0

so this contrast is marginal to all other terms in the model.
Because the car::Anova Type II test for B:C is equivalent to the Type I test, we can see that this contrast can be expressed as:

show_tests(anova(m1, type=1), fractions = TRUE)$`B:C`

      (Intercept) A B1 C1 C2 C3 C4 C5 A:B1 A:C1 A:C2 A:C3
B1:C1           0 0  0  0  0  0  0  0    0    0    0    0
B1:C2           0 0  0  0  0  0  0  0    0    0    0    0
B1:C3           0 0  0  0  0  0  0  0    0    0    0    0
B1:C4           0 0  0  0  0  0  0  0    0    0    0    0
B1:C5           0 0  0  0  0  0  0  0    0    0    0    0
      A:C4 A:C5 B1:C1 B1:C2 B1:C3 B1:C4 B1:C5 A:B1:C1 A:B1:C2 A:B1:C3 A:B1:C4 A:B1:C5
B1:C1    0    0     1   1/2   1/2   1/2   1/2   60/13   30/13   30/13   30/13   30/13
B1:C2    0    0     0     1   1/3   1/3   1/3       0   60/13   20/13   20/13   20/13
B1:C3    0    0     0     0     1   1/4   1/4       0       0   60/13   15/13   15/13
B1:C4    0    0     0     0     0     1   1/5       0       0       0   60/13   12/13
B1:C5    0    0     0     0     0     0     1       0       0       0       0   60/13

which is not marginal to the 3-way interaction term.
How should Type II SS be calculated in a mixed model?
The difference is because A is not centered around zero. Not sure yet why this matters, though, or if it should (it doesn't seem that it should...); other answers which explore this more fully are most welcome.

d$Ac <- d$A - mean(d$A)
m2 <- lmer(Y ~ Ac*B*C + (1|ID), data=d)

anova(m2, type=1)
#> Type I Analysis of Variance Table with Satterthwaite's method
#>         Sum Sq Mean Sq NumDF DenDF F value    Pr(>F)
#> Ac       8.794   8.794     1    11  2.9075   0.11621
#> B      136.641 136.641     1   121 45.1784 6.269e-10 ***
#> C       14.128   2.826     5   121  0.9343   0.46141
#> Ac:B     0.381   0.381     1   121  0.1259   0.72330
#> Ac:C    45.874   9.175     5   121  3.0335   0.01291 *
#> B:C     34.821   6.964     5   121  2.3026   0.04882 *
#> Ac:B:C  21.860   4.372     5   121  1.4456   0.21285

anova(m2, type=2)
#> Type II Analysis of Variance Table with Satterthwaite's method
#>         Sum Sq Mean Sq NumDF DenDF F value    Pr(>F)
#> Ac       8.794   8.794     1    11  2.9075   0.11621
#> B      136.641 136.641     1   121 45.1784 6.269e-10 ***
#> C       14.128   2.826     5   121  0.9343   0.46141
#> Ac:B     0.381   0.381     1   121  0.1259   0.72330
#> Ac:C    45.874   9.175     5   121  3.0335   0.01291 *
#> B:C     34.821   6.964     5   121  2.3026   0.04882 *
#> Ac:B:C  21.860   4.372     5   121  1.4456   0.21285

anova(m2, type=3)
#> Type III Analysis of Variance Table with Satterthwaite's method
#>         Sum Sq Mean Sq NumDF DenDF F value    Pr(>F)
#> Ac       8.794   8.794     1    11  2.9075   0.11621
#> B      136.641 136.641     1   121 45.1784 6.269e-10 ***
#> C       14.128   2.826     5   121  0.9343   0.46141
#> Ac:B     0.381   0.381     1   121  0.1259   0.72330
#> Ac:C    45.874   9.175     5   121  3.0335   0.01291 *
#> B:C     34.821   6.964     5   121  2.3026   0.04882 *
#> Ac:B:C  21.860   4.372     5   121  1.4456   0.21285
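One way to see why centering can matter at all when interactions are in the model: the "main effect" coefficient of a term is its effect where the interacting covariate equals zero, so shifting the covariate changes that coefficient. A hypothetical ordinary-least-squares sketch in plain numpy (not a mixed model; all names and data are invented):

```python
# Hypothetical OLS sketch: with the interaction a*g in the model, the
# coefficient on g is the effect of g where a equals zero, so shifting
# (centering) a changes that coefficient.
import numpy as np

rng = np.random.default_rng(2)
n = 200
a = rng.uniform(10, 20, n)                # covariate far from zero
g = rng.integers(0, 2, n).astype(float)   # binary factor, dummy coded
y = 2.0 + 0.5 * a + 1.0 * g + 0.3 * a * g + rng.normal(0, 0.1, n)

def coef_g(a_):
    # Fit y ~ 1 + a + g + a:g and return the coefficient on g
    X = np.column_stack([np.ones(n), a_, g, a_ * g])
    return np.linalg.lstsq(X, y, rcond=None)[0][2]

print(round(coef_g(a), 2))             # effect of g at a = 0 (about 1.0 here)
print(round(coef_g(a - a.mean()), 2))  # effect of g at the mean of a (much larger)
```

Whether this mechanism explains the mixed-model test behaviour above is exactly the open question in this answer; the sketch only shows that centering is not innocuous once interactions are present.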
Intuition about the deep meaning of Bayesian priors and its influence on posteriors
Your statement echoes Jaynes. He said:

When we look at these problems on a sufficiently fundamental level and realize how careful one must be to specify the prior information before we have a well-posed problem, it becomes evident that there is, in fact, no logical difference between (3.51) and (4.3); exactly the same principles are needed to assign either sampling probabilities or prior probabilities, and one man’s sampling probability is another man’s prior probability.

The equations in chapter three are on elementary sampling theory, and those in chapter four on elementary hypothesis testing. There are three primary ways to construct Bayesian theory: from Cox's, de Finetti's, or Savage's axioms. Cox's construction is built on logic, de Finetti's on gambling, and Savage's on preferences. In all three cases, you do not get well-posed problems with arbitrary calculations. If you think of a probability statement as a statement about a logical assertion, then to get a proper answer all parts of the logical argument must be included. Likewise, when one gambles, one would be insane to purposely ignore information about the thing one is gambling on. Finally, it defies rationality with respect to preferences to ignore knowledge. I believe the mistake comes from a misunderstanding of probabilities as frequencies. They are not frequencies. Long-run frequencies will not be derived from Bayesian methods. It can be the case that they have nice Frequentist properties, but this is incidental. Now, this does raise the question of whether there is any circumstance where one should ignore prior information, and the answer is "yes." As long as one is not introducing contradictory information or damaging the assertion, there can be. Consider the case of a high-dimensional model with prohibitive calculation costs that could be closely approximated by a simpler solution. Weakening the prior gives a solution when a strong prior makes it impossible to do the work.

Likewise, consider a low-dimensional model where time is of the essence and determining a full prior would result in catastrophic losses due to the time constraints. This is the terrorist-with-the-bomb scenario. In that case, it is rational to use less than the available information. Laziness or ignorance, however, is not an excuse for ignoring the prior.
Intuition about the deep meaning of Bayesian priors and its influence on posteriors
This is how I read your question: "Why are priors given arbitrary values when they have a bearing on the calculated posterior?"

Note: I come from a physics background -- please let me know if you think I am using some terms wrong. I shall pose a series of atomic questions and answer them as I understand them from the perspective of Bayesian statistics.

Notation and terms: I think of the quantities in a system as connected to each other causally in pairs (think of a directed graph). The quantities are divided into Query, Hidden and Evidence classes. The posterior probability is given by $P(Q|E) = \sum_H P(Q|E,H)P(H|E)$; it is marginal with respect to the variables in the Hidden ($H$) class. The statement of Bayes' Theorem is $\underbrace{P(Q|E)}_{\text{posterior distribution of Q given E}} = \frac{{\overbrace{P(E|Q)}^{\text{likelihood of E given Q}}}\;\times\;{\overbrace{P(Q)}^{\text{prior distribution of Q}}}}{\sum_{Q'}P(E|Q')P(Q')}$

Is the prior important to the calculation of the posterior? Given enough evidence/data and a simple enough event space, no. But a suitable choice of prior can lead to the "correct" posterior given less evidence/data or fewer iterations.

Is the prior distribution ignored in practice? Given enough data it is not important, because you can put the valuable time you have to other uses. But having a prior distribution available from experiments enables better sanity checks (tests or debugging) once the posterior is obtained.

When is the prior distribution important at all?

- Less data is available.
- There are multiple similar competing hypotheses (correlated with having a larger event space).
- It is philosophically important to Jaynes's approach (or what I subjectively believe is Jaynes's explanation of the prior -- I haven't had a lot of time to assimilate it yet).

What makes sense for statistical mechanics? It is fine to seek explanations over discrete event spaces without referring to the priors.
But faced with multidimensional systems on which most problems scale as the factorial, it seems to me that maximizing the entropy subject to the given constraints IS a very pragmatic way. But then I, as a beginner, haven't had enough time to understand whether this is the only/best possible choice.
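The notation above can be made concrete with a tiny discrete example (all probabilities made up): the posterior over the query variable comes from summing the hidden variable out of the joint and normalizing by the evidence.

```python
# Tiny discrete example (probabilities made up): posterior over a query
# variable Q given evidence E = 1, with a hidden variable H summed out of
# the joint P(Q, H, E) = P(Q) P(H) P(E | Q, H).
P_Q = {0: 0.5, 1: 0.5}                    # prior over the query variable
P_H = {0: 0.7, 1: 0.3}                    # prior over the hidden variable
P_E1_given = {(0, 0): 0.1, (0, 1): 0.4,   # P(E = 1 | Q = q, H = h)
              (1, 0): 0.6, (1, 1): 0.9}

# P(Q = q, E = 1) = sum_h P(q) P(h) P(E = 1 | q, h)
joint = {q: sum(P_Q[q] * P_H[h] * P_E1_given[(q, h)] for h in P_H)
         for q in P_Q}
evidence = sum(joint.values())            # P(E = 1), the normalizing constant
posterior = {q: joint[q] / evidence for q in P_Q}
print(posterior)
```

Changing the prior `P_Q` moves the posterior directly, which is the "bearing on the calculated posterior" the question asks about; with repeated independent evidence the likelihood terms multiply up and the prior's influence shrinks.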
Intuition about the deep meaning of Bayesian priors and its influence on posteriors
My possibly idiosyncratic view is as follows. If we had an exact, fully-known prior distribution on the parameters, possibly belief-based, and we knew the true likelihood function, the Bayesian paradigm would give us the optimal way of updating that prior with the likelihood to get a posterior. In real life we don't have either a prior or a likelihood, except in what seem to me to be rare cases, so we apply an intuitive "smoothness in function space" argument that runs as follows. As long as the prior we use for calculation is close to the real, unobservable prior we have, and the likelihood function we use for calculation is close to the real, usually unknowable likelihood function, applying the Bayesian paradigm will get us a posterior that is close to the real, incalculable posterior. Applying the Bayesian paradigm even with approximate priors and likelihoods is likely to get us closer (on average) than doing something else, because it eliminates a source of noise in the move from prior to posterior: noise due to using a suboptimal updating algorithm. This, then, is the value of trying to state your prior information as a probability distribution: it allows you to use the optimal updating algorithm, thereby reducing the error in the beliefs you form after you've looked at the data. As a rather lengthy side note, this implies that Bayesian robustness is a desirable feature of our overall process (assigning priors and likelihood functions, performing the update calculations), the more so as our confidence in the accuracy of our constructed / assumed prior and likelihood functions degrades. At some point, we'll have so little confidence in our ability to form any sort of reasonable approximation to one, the other, or both, that we may as well abandon the Bayesian paradigm and do something else.
Alternatively, the cost of setting up and executing the Bayesian paradigm may be so great, relative to the gains thereof, that we are, again, better off doing something else, such as running a classical t-test, observing a t-statistic of 19.4, and rejecting the null hypothesis that we created just to make life simpler. Now, as to the influence priors have: that depends on the prior, the likelihood function, and the data. It is quite easy to find all sorts of real-world situations where the data overwhelms the prior, in which case even very different priors lead to very similar posteriors. In these situations, worrying about the likelihood is far more important than worrying about the prior. On the other hand, in situations where getting data is very cost- or time-intensive, the prior information may have to be carefully extracted from the relevant experts in order to make the best use of it that we can. (This was the case in my previous job, in which I did reliability analysis for solar panels and trackers, among other things: testing a large, expensive piece of equipment that is supposed to track the sun to derive a mean time to failure is both time-consuming and expensive.) So, the influence of the prior is situational, and that same situationality drives where we should focus our efforts in order to make the best use of the optimal updating algorithm that Bayes' Theorem gives us.
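The point about the data overwhelming the prior can be sketched with a conjugate Beta-Binomial update (made-up numbers): two sharply conflicting priors nearly agree once the sample is large, but not when it is small.

```python
# Sketch: conjugate Beta-Binomial updating under two sharply different priors.
# With little data the posteriors disagree strongly; with lots of data the
# likelihood dominates and they nearly coincide. All numbers are made up.
def posterior_mean(a, b, heads, n):
    # Beta(a, b) prior + n Bernoulli trials with `heads` successes
    # -> Beta(a + heads, b + n - heads) posterior
    return (a + heads) / (a + b + n)

for n, heads in [(10, 7), (10_000, 7_000)]:
    optimist = posterior_mean(20, 2, heads, n)   # prior mean about 0.91
    skeptic  = posterior_mean(2, 20, heads, n)   # prior mean about 0.09
    print(n, round(optimist, 3), round(skeptic, 3))
```

At n = 10 the two posterior means sit far apart; at n = 10,000 both are within a fraction of a percent of the observed frequency 0.7.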
What is the meaning of fuzz factor?
The epsilon in Keras is a small floating-point number used to avoid numerical problems such as division by zero. An example of its usage is in the Keras code for calculating the mean absolute percentage error: https://github.com/keras-team/keras/blob/c67adf1765d600737b0606fd3fde48045413dee4/keras/losses.py#L22

from . import backend as K

def mean_absolute_percentage_error(y_true, y_pred):
    diff = K.abs((y_true - y_pred) / K.clip(K.abs(y_true), K.epsilon(), None))
    return 100. * K.mean(diff, axis=-1)
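The effect of the clip can be reproduced with plain numpy (an illustrative sketch: the 1e-7 value mirrors a common Keras default for epsilon but is an assumption here, and `mape` is just an illustrative name):

```python
# Plain-numpy sketch of what the clip buys: a zero in y_true would otherwise
# make the percentage error divide by zero. The 1e-7 value mirrors a common
# Keras default for epsilon, but is an assumption here.
import numpy as np

EPSILON = 1e-7

def mape(y_true, y_pred):
    denom = np.clip(np.abs(y_true), EPSILON, None)   # floor the denominator
    return 100.0 * np.mean(np.abs((y_true - y_pred) / denom))

y_true = np.array([0.0, 2.0, 4.0])   # note the zero in the targets
y_pred = np.array([0.0, 2.5, 3.0])
print(mape(y_true, y_pred))          # finite, thanks to the clip
```

Without the clip, the first element would divide by zero and the loss would be undefined (nan/inf) for any batch containing a zero target.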
AUC for random classifier in case of unbalanced dataset
A random classifier gives AUC 0.5 in expectation regardless of class balance. See:

Fawcett, Tom (2006). "An Introduction to ROC Analysis." Pattern Recognition Letters 27(8), 861–874. doi:10.1016/j.patrec.2005.10.010
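A quick Monte Carlo sketch (hypothetical setup) of the statement: scores drawn independently of the labels give an AUC near 0.5 even with a 90/10 class split, using the equivalent rank formulation AUC = P(random positive scored above random negative).

```python
# Sketch: scores generated independently of the labels give AUC ~ 0.5 even
# on a heavily imbalanced (90/10) label set. AUC is computed via its rank
# interpretation: P(randomly chosen positive scores above a random negative).
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
y = (rng.random(n) < 0.10).astype(int)   # ~10% positives, ~90% negatives
scores = rng.random(n)                   # "classifier" ignores the labels

pos, neg = scores[y == 1], scores[y == 0]
auc = np.mean(pos[:, None] > neg[None, :])   # ties negligible for continuous scores
print(round(auc, 3))
```

The imbalance changes how many positive-negative pairs there are, but not the chance that a label-independent score ranks the positive higher, so the expected AUC stays at 0.5.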
AUC for random classifier in case of unbalanced dataset
Yes, but see below. One of the advantages of AUC is precisely that it measures classification accuracy regardless of how many positives and negatives there are. AUC is the Area Under the ROC curve. The ROC curve plots the False Positive Rate against the True Positive Rate, with the False Positive Rate being the ratio of misclassified negative cases (i.e. cases that were negative and were labeled as positive, divided by the total number of negative cases) and the True Positive Rate the ratio of correctly classified positive cases (i.e. cases that were positive and were correctly classified as positive, divided by the number of positive cases). So, to your question: If my dataset is highly unbalanced, say 90% negative data points and 10% positive data points, would using a random classifier give an AUC value of 0.5? Yes. Also, beware: you have to understand random as a classifier that classifies the data randomly but using the probability distribution of the data. For example, say you have a dataset with: $90$ positives $(1)$, $10$ negatives $(0)$. Random classifier here means a generator of cases with probabilities $P(1) = 0.9$ and $P(0)=0.1$. Note that if you simply generate $1$s and $0$s without taking the class probabilities into account you will have a lower AUC. Also note that this is, as Sycorax has stated, in expectation, so when the number of cases goes to $\infty$.
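The two rates can be computed concretely from made-up confusion counts; each rate is normalized within its own class, which is why class imbalance by itself does not move either rate:

```python
# Sketch with made-up confusion counts on a 10/90 split: each rate is
# normalized within its own class, so the imbalance cancels out of both.
tp, fn = 8, 2        # the 10 positive cases
fp, tn = 18, 72      # the 90 negative cases

tpr = tp / (tp + fn)   # true positive rate: correct positives / all positives
fpr = fp / (fp + tn)   # false positive rate: false alarms / all negatives
print(tpr, fpr)
```

Scaling the negative class up tenfold (fp = 180, tn = 720) would leave both rates, and hence the ROC point, unchanged.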
48,389
AUC for random classifier in case of unbalanced dataset
When there is a class imbalance, we should consult the business domain experts (if we are not experts ourselves) and find out what they are after: negative or positive data points. Once that is settled, and there is a go-ahead for fixing the class imbalance, there are 7 major techniques to handle it; please refer to https://www.kdnuggets.com/2017/06/7-techniques-handle-imbalanced-data.html. Please note, tuning your model to reach a particular accuracy is as good as killing the data, if I did not misunderstand your question. Class imbalance may or may not increase your accuracy; so far I haven't seen any improvement, but if there is any I accept it as my learning, leaving this to the experts in this community :)
Expectations calculation question (regarding the autocovariance sequence of the square of a zero-mean stationary process)
Your lecturer is wrong, as a simple counterexample will show. Consider the process $(X_t\mid t\in\mathbb Z)$ where $$(X_t) = (\ldots,-1,1,-1,1,\ldots) = ((-1)^t)$$ with probability $1/2$ and otherwise $$(X_t) = (\ldots,1,-1,1,-1,\ldots) = (-(-1)^t).$$ Simple calculations (using nothing more than the definitions of expectation and variance) show $E(X_t)=0,$ $\operatorname{Var}(X_t)=1,$ and $\operatorname{Cov}(X_t,X_{t+\tau})=(-1)^\tau$ for all integers $t$ and $\tau,$ establishing that $(X_t)$ is second-order (weakly) stationary. The remaining calculations are trivial, since $X^2_t = 1$ is a constant series and $X_tX_{t+\tau}=(-1)^\tau.$ The constancy of $(X^2_t)$ implies its covariance function must be $s_Y(\tau)=0.$ That indeed is what your formula produces, but the lecturer's formula (if indeed it was transcribed correctly) gives $$\eqalign{ &E(X^2_t)E(X^2_{t+\tau}) + 2E^2(X_tX_{t+\tau}) - E(X^2_t)E(X^2_{t+\tau}) \\ &= E(1)E(1) + 2(E((-1)^\tau))^2-E(1)E(1) \\ &= 2 \ne 0. }$$
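The counterexample is easy to check numerically. A sketch with NumPy (sample size, seed, and tolerances are my own choices): draw many independent copies of the process and verify the moments stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths = 100_000
sign = rng.choice([-1.0, 1.0], size=n_paths)   # each path is +/- the alternating sequence, prob 1/2 each
t = np.arange(6)
X = sign[:, None] * (-1.0) ** t[None, :]       # X_t = sign * (-1)^t

print(X[:, 0].mean())                # E[X_t] ~ 0
print((X[:, 0] * X[:, 1]).mean())    # Cov(X_t, X_{t+1}) = (-1)^1 = -1 (exact here)
print((X[:, 0] * X[:, 2]).mean())    # Cov(X_t, X_{t+2}) = (-1)^2 = +1 (exact here)
print(np.all(X ** 2 == 1))           # X_t^2 is the constant series 1, so s_Y(tau) = 0
```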
Does Percent Change Difference A Time Series
First of all, note that stationarity and differencing come up in the context of ARMA and ARIMA models (see here and here). Other forecasting models, such as exponential smoothing, don't require stationary data. As a toy example, I think of gdp and percent change in gdp. In the examples you link to, the percent change didn't make the data stationary. For a time series to become stationary, you have to stabilize both the mean and the variance. In your example the mean got stabilized but the variance didn't (it either seems to decrease over time or there seems to be some regime switch between 1980 and 1985). Also, as a secondary question, does differencing usually occur on the previous observation or could you difference this period this year vs this period last year. (Again we're speaking about ARIMA models here) You would do differencing with the previous year and not just the lagged value if you planned on using a seasonal ARIMA model with a yearly seasonality. A seasonal ARIMA model is basically a "double" ARIMA model, applied once to the raw series and once to the series with the seasonal lag. See here.
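As a toy illustration of that year-over-year (seasonal) differencing for quarterly data (a NumPy sketch; the synthetic series is invented for illustration, not taken from the question):

```python
import numpy as np

# quarterly series: linear trend plus a deterministic yearly (period-4) pattern
t = np.arange(80)
season = np.tile([5.0, -2.0, 3.0, -6.0], 20)
x = 0.5 * t + season

lag1_diff = np.diff(x)        # quarter-over-quarter: still dominated by seasonal swings
yoy_diff = x[4:] - x[:-4]     # this quarter vs. the same quarter last year

print(yoy_diff[:5])           # constant: both the seasonal pattern and the trend level drop out
```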
Does Percent Change Difference A Time Series
Differencing GDP is quite popular, though it's not the only way to deal with nonstationarity in ARMA or regression. The jury's out on whether the log GDP series is a unit root process or a trend, i.e. $$\Delta \ln \mathrm{GDP}_t=X_t\beta+\varepsilon_t\\\varepsilon_t\sim\mathcal N(0,\sigma^2)$$ vs. $$\ln \mathrm{GDP}_t=X_t\beta+\varepsilon_t\\\varepsilon_t\sim\mathcal N(0,\sigma^2)$$ If you plot the log GDP series, it looks very much like a linear trend. So, you could do a few things to make GDP stationary: changes, percent changes, log-differencing. Alternatively, you could treat it as trend-stationary, i.e. fit the linear trend of the log GDP. You could also apply methods that do not explicitly require stationarity. As to whether you should do differencing with overlapping or non-overlapping periods, it's a matter of preference. You could go with a year-over-year quarterly overlapping series. It is smoother than quarter-over-quarter, and you don't need to deal with seasonality. The downside is that overlapping periods inevitably introduce autocorrelations, so you have to address this.
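A quick sketch of the transformations mentioned (changes, percent changes, log-differencing) on a synthetic geometric-growth series (NumPy; the series, growth rate, and seed are invented for illustration). For small growth rates, log-differences approximate percent changes.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic "GDP": roughly 0.5% average growth per period, with noise
log_gdp = np.cumsum(0.005 + 0.01 * rng.standard_normal(200))
gdp = np.exp(log_gdp)

changes = np.diff(gdp)                  # levels grow, so raw changes trend too
pct_change = np.diff(gdp) / gdp[:-1]    # growth rates, roughly stable around 0.5%
log_diff = np.diff(np.log(gdp))         # log-differencing

# log(1 + x) ~ x for small x, so the two growth-rate series nearly coincide
print(np.max(np.abs(pct_change - log_diff)))
```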
Why are the results of R's ccf and SciPy's correlate different?
The difference is due to different definitions of cross-correlation and autocorrelation in different domains. See Wikipedia's article on autocorrelation for more information, but here is the gist. In statistics, autocorrelation is defined as the Pearson correlation of the signal with itself at different time lags. In signal processing, on the other hand, it is defined as the convolution of the function with itself over all lags, without any normalization. SciPy takes the latter definition, i.e. the one without normalization. To recover R's ccf results, subtract the mean of each signal before running scipy.signal.correlate and divide by the product of the standard deviations and the length:

import numpy as np
import scipy.signal as ss

result = ss.correlate(CsI - np.mean(CsI), WLS - np.mean(WLS), method='direct') / (np.std(CsI) * np.std(WLS) * len(CsI))
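As a sanity check on this normalization, here is a self-contained sketch (the synthetic signals and seed are my own): the zero-lag value of the normalized scipy.signal.correlate output should equal the ordinary Pearson correlation coefficient.

```python
import numpy as np
import scipy.signal as ss

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = 0.7 * x + 0.3 * rng.standard_normal(500)

# demean, correlate, then normalize by std products and length
xc = ss.correlate(x - x.mean(), y - y.mean(), method='direct')
xc = xc / (x.std() * y.std() * len(x))

zero_lag = xc[len(x) - 1]            # middle of the full-mode output corresponds to lag 0
pearson = np.corrcoef(x, y)[0, 1]
print(zero_lag, pearson)             # the two values agree
```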
Does white noise imply wide-sense stationary?
Well, this depends on your definition of white noise. This question asks for that definition. One answer gives: A white noise process is a random process of random variables that are uncorrelated, have mean zero, and a finite variance. Formally, $X(t)$ is a white noise process if $E(X(t))=0$, $E(X(t)^2)=S^2$, and $E(X(t)X(h))=0$ for $t\neq h$. A slightly stronger condition is that they are independent from one another; this is an "independent white noise process." Under this definition (the first of the two, the weaker one), which I presume is the same definition you have, your reasoning is perfectly correct and white noise is always wide-sense stationary. Note that this definition requires the variances to be finite, which I assume you do too, since you presumably mean a finite number when you write $c_0$. For the stronger version of the definition given in the same quoted answer, the same applies. In contrast, the definition that Dilip Sarwate gave in his answer doesn't require the variances to be finite and hence allows a white noise process not to be wide-sense stationary, as he explained. There are probably other definitions of white noise out there. Possibly, in the context of your exam, a different definition of white noise is assumed than the one in your book, hence the apparent contradiction.
Does white noise imply wide-sense stationary?
White noise has the properties that you state, but those properties are not the properties that define white noise. As Michael Chernick's comment points out, a (discrete-time) white noise process is a collection of independent identically distributed zero-mean random variables, one for each time instant under consideration. If the random variables have finite variance $\sigma^2$, then the autocorrelation function of the process is $\sigma^2\delta[n]$ where $\delta[n]$ is the Kronecker delta function defined by $$\delta[n] = \begin{cases}1, &n=0,\\0, &n \neq 0.\end{cases}$$ Now, if the common distribution function of random variables does not have a variance, e.g. Cauchy random variables, then white noise is not a wide-sense-stationary process (even though it is a strictly stationary process).
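A quick numerical check of the $\sigma^2\delta[n]$ autocorrelation for finite-variance white noise (a NumPy sketch; the sample size, $\sigma$, and tolerances are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
x = sigma * rng.standard_normal(200_000)   # iid N(0, sigma^2) white noise

def autocov(x, lag):
    """Sample autocovariance of a zero-mean series at a given nonnegative lag."""
    if lag == 0:
        return np.mean(x * x)
    return np.mean(x[:-lag] * x[lag:])

print(autocov(x, 0))   # ~ sigma^2 = 4
print(autocov(x, 1))   # ~ 0
print(autocov(x, 5))   # ~ 0
```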
Does white noise imply wide-sense stationary?
Wide-sense stationarity is the same as weak or covariance stationarity, i.e. only the first two moments (mean and variance) and the autocovariance are required to be time-invariant. A white noise time series in its simplest form has zero mean, constant finite variance and is serially uncorrelated. Hence white noise implies wide-sense stationarity.
Why may results from model with interaction term and stratified model be different?
In general, the stratified model requires more power to estimate, is more flexible and general, but harder to draw inference from. You cannot directly calculate a $p$-value. You must either use a path-model, bootstrap or permutation test, or the $\delta$-method to obtain standard errors for the difference in regression parameters between two stratified models. A test which is inefficient but often performed is inspecting the coverage of 95% CIs. If either CI overlaps the point estimate of the other stratum coefficient, do not reject the null. This does not provide a powerful test. Inspecting the narrower CI only makes an anticonservative test. The interaction model by contrast is more efficient, requires homoscedasticity between the two strata, and the inference is just based on the product-term coefficient. The interaction model has the additional advantage that it can assess interaction between two continuous covariates. You couldn't, for instance, fit a stratified model for a continuous covariate without splitting one of the covariates into an arbitrary number of strata; this likely spends too much power. The homoscedasticity assumption (both between strata and overall) can be relaxed by using robust sandwich variance estimation. These models will consistently estimate the same thing only when the mean model is true. If, in fact, the mean model is not true, it is possible that, as the sample size grows to infinity, one model definitively says there's no interaction and the other says there is interaction. For that reason, with either model, you should perform some diagnostic comparisons to ensure there aren't egregious departures from the modeled mean. If there are further adjustment variables: the need for correct model specification in the interaction model and power drop in the stratified models is exacerbated. 
In a simple $X,W,Y$ analysis, the stratified models have 4 parameters in total and the interaction model has 4 parameters (intercept, $X$ coefficient, $W$ coefficient, and product term: you must always adjust for the main effects). However, introduce adjustment variables $Z_1, \ldots, Z_p$, and the interaction model has $4+p$ parameters while the stratified models have $2+p$ parameters each, i.e. $4+2p$ in total.
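One of the simpler inference routes mentioned above, a Wald-type test on the difference of the two stratum slopes using the stratified standard errors, can be sketched as follows (pure NumPy/SciPy; the data-generating setup, seed, and helper function are invented for illustration, and the classical, homoscedastic standard errors are used):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
w = rng.integers(0, 2, n)                  # binary stratum indicator
x = rng.standard_normal(n)
y = 1.0 + 0.5 * x + 0.8 * w + 1.0 * w * x + rng.standard_normal(n)

def ols(X, y):
    """OLS coefficients and classical standard errors."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return b, se

m0, m1 = (w == 0), (w == 1)
b0, se0 = ols(np.column_stack([np.ones(m0.sum()), x[m0]]), y[m0])
b1, se1 = ols(np.column_stack([np.ones(m1.sum()), x[m1]]), y[m1])

# Wald-type z test for equality of the stratum slopes
z = (b1[1] - b0[1]) / np.sqrt(se0[1] ** 2 + se1[1] ** 2)
p = 2 * stats.norm.sf(abs(z))
print(b1[1] - b0[1], p)   # the true slope difference here is 1.0
```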
Why may results from model with interaction term and stratified model be different?
I delved quite into this issue and ended up putting out a paper on it, which may be helpful to you. https://ehp.niehs.nih.gov/doi/10.1289/EHP334 Basically, the stratified and product-term models encode different assumptions about the covariates. If you were to include product terms between the modifier and all covariates, that is functionally equivalent to a stratified model.
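That functional equivalence is easy to verify numerically (a NumPy sketch; the simulated data are my own): a model with product terms between the modifier and every covariate, intercept included, reproduces the stratified fits exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
w = rng.integers(0, 2, n)                   # binary effect modifier
x = rng.standard_normal(n)
z = rng.standard_normal(n)                  # one adjustment covariate
y = 1.0 + 0.5 * x + 0.3 * z + 0.8 * w + 1.2 * w * x + 0.1 * rng.standard_normal(n)

# fully interacted model: w crossed with the intercept, x, and z
X_full = np.column_stack([np.ones(n), x, z, w, w * x, w * z])
b_full = np.linalg.lstsq(X_full, y, rcond=None)[0]

def strat_fit(mask):
    """Separate OLS fit within one stratum of w."""
    X = np.column_stack([np.ones(mask.sum()), x[mask], z[mask]])
    return np.linalg.lstsq(X, y[mask], rcond=None)[0]

b_w0 = strat_fit(w == 0)
b_w1 = strat_fit(w == 1)

# stratum coefficients recovered exactly from the fully interacted fit
print(np.allclose(b_w0, b_full[:3]))
print(np.allclose(b_w1, b_full[:3] + b_full[3:]))
```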
What is the difference among stochastic, batch and mini-batch learning styles?
Yes, your understanding is correct. In the case of batch or mini-batch back-propagation we really do use the "average ....", i.e. we should use the average gradient. However, you can choose the learning rate to account for the averaging: if you use the sum instead, the division term can be subsumed into the learning rate. The learning rate will then, however, depend on the batch size. This is another practical reason to use the average.
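The equivalence between sum and average is easy to see in a small sketch (NumPy, using the gradient of a linear least-squares loss; the shapes, seed, and learning rate are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 3))     # one mini-batch of 32 examples, 3 features
y = rng.standard_normal(32)
w = np.zeros(3)

resid = X @ w - y
grad_sum = X.T @ resid               # gradient summed over the batch
grad_avg = grad_sum / len(y)         # gradient averaged over the batch

lr = 0.1
step_from_avg = -lr * grad_avg
step_from_sum = -(lr / len(y)) * grad_sum   # same step, but the lr now depends on batch size

print(np.allclose(step_from_avg, step_from_sum))
```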
Is convergence in probability equivalent to "almost surely... something"
The answer is no: there is no such property. Any property of the form "a.s. something" that implies convergence in probability also implies a.s. convergence, hence cannot be equivalent to convergence in probability. Proof: Write $X=(X_n)_{n\in\mathbb{N}}$. Let's assume all variables $X_n$ are binary (in $\{0;1\}$) for simplicity. Then convergence in probability (implicitly to 0 in all that follows) of $X$ is simply: $$\lim_{n\rightarrow+\infty} P(X_n=1)=0$$ For any property $\psi$ of sequences of 0s and 1s, "a.s. $\psi(X)$" means "for almost all $\omega$, $\psi(X(\omega))$". For every infinite part $A$ of $\mathbb{N}$ define the property (of sequences of 0s and 1s): $$\phi_A(x): \exists n\in A\quad x_n=0$$ Clearly, convergence in probability of $X$ implies a.s. $\phi_A(X)$. Now assume convergence in probability of $X$ is implied by a.s. $\phi(X)$ for some property $\phi$. For any $A$, a.s. $\phi(X)$ implies convergence in probability of $X$, which implies a.s. $\phi_A(X)$. Then it is clear that for any sequence $x$ of 0s and 1s, $\phi(x)$ implies $\phi_A(x)$: just use a random sequence $X$ identically equal to $x$. Thus $\phi(x)$ implies $\forall A\space \phi_A(x)$, where $A$ ranges over all infinite parts of $\mathbb{N}$. $\forall A\space \phi_A(x)$ says there is no infinite part of $\mathbb{N}$ on which $x$ is constantly 1. In other words, it says $x$ has only finitely many 1s, that is, $x$ tends to 0. Thus a.s. $\phi(X)$ implies a.s. convergence of $X$. What about this property: $$\phi((x_n)_{n\in\mathbb{N}}): \forall\epsilon>0\quad\lim_{n\rightarrow+\infty}\frac{\#\{k<n\mid |x_k|>\epsilon\}}{n}=0$$ "Almost surely $\phi$" is not equivalent to convergence in probability. First, it is not implied by convergence in probability. A counterexample is not so easy to find (the implication does hold for independent variables). A counterexample can be constructed by creating successive independent groups of 0/1 variables.
All variables in the same group are equal: their common value is called the value of the group. And we require:
- the length of each group is large enough (say $2^k$) so that a group having value 1 influences the average sufficiently
- almost surely infinitely many groups have value 1, so that the above happens infinitely many times
- the probability for a group to have value 1 tends to 0, so that convergence in probability (to 0) holds
The last two points can be obtained by saying that the probability for the $k$-th group to have value 1 is $1/k$ and using the second Borel-Cantelli lemma. "Almost surely $\phi$" does not imply convergence in probability either, but it's not far from it: "for all subsequences of $X$, almost surely $\phi$ for this subsequence" implies convergence in probability, and this is quite easy to prove. Going further than this seems to raise key theoretical difficulties: how to define asymptotic empirical frequency in a more general way than this.