39,901
Regression as mutual information minimization
You're proposing to minimize the mutual information between the residuals and the inputs. Minimal information between the residuals and the inputs is indeed a desirable property, but it's not generally sufficient for regression because it doesn't force the residuals to be small. For example, note that any constant value can be added to the model (thereby making the residuals arbitrarily large and the fit arbitrarily poor) without affecting the mutual information.
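The constant-shift argument can be checked numerically: adding a constant to the residuals relabels their values bijectively, so the empirical mutual information with the inputs is unchanged. A minimal sketch (toy data and function names are mine, not from the question):

```python
from collections import Counter
from math import log

def empirical_mi(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy inputs and residuals taking a few discrete values.
x = [0, 0, 1, 1, 2, 2, 0, 1, 2, 1]
r = [1, -1, 2, 0, -2, 1, 0, 1, -1, 2]

mi_original = empirical_mi(x, r)
# Add a constant to every residual: the fit gets arbitrarily bad, but the
# joint distribution is merely relabeled, so the MI is exactly unchanged.
mi_shifted = empirical_mi(x, [ri + 100 for ri in r])

assert abs(mi_original - mi_shifted) < 1e-12
```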
39,902
Reproducing t-test in R gives different result than built-in function
You are making a mistake in calculating your degrees of freedom: by default, R's t.test performs the Welch t-test, whose degrees of freedom come from the Welch-Satterthwaite approximation rather than $n_1 + n_2 - 2$. Here is code that exactly reproduces the R t.test results.

a <- c(5.36, 16.57, 0.62, 1.41, 0.64, 7.26)
b <- c(19.12, 3.52, 3.38, 2.5, 3.6, 1.74)
v1 <- var(a)
v2 <- var(b)
n1 <- length(a)
n2 <- length(b)
se <- sqrt(v1/n1 + v2/n2)
# Welch-Satterthwaite degrees of freedom
nu <- se^4 / ((v1^2 / (n1^2 * (n1 - 1))) + (v2^2 / (n2^2 * (n2 - 1))))

# Confidence interval
mean(a) - mean(b) + c(1, -1) * qt(.95, nu) * se
[1]  6.372161 -7.038828

It exactly matches the interval reported by t.test(a, b, conf.level = 0.9).
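The same degrees-of-freedom computation can be sketched in Python with only the standard library (variable names mirror the R code above; the data are the same):

```python
from statistics import variance

# Same data as the R example.
a = [5.36, 16.57, 0.62, 1.41, 0.64, 7.26]
b = [19.12, 3.52, 3.38, 2.5, 3.6, 1.74]

v1, v2 = variance(a), variance(b)   # sample variances (n - 1 denominator, like R's var)
n1, n2 = len(a), len(b)

se = (v1 / n1 + v2 / n2) ** 0.5     # standard error of the difference in means

# Welch-Satterthwaite degrees of freedom
nu = se**4 / (v1**2 / (n1**2 * (n1 - 1)) + v2**2 / (n2**2 * (n2 - 1)))

print(round(nu, 3))  # about 9.94, slightly below n1 + n2 - 2 = 10
```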
39,903
Difference between using cv=5 or cv=KFold(n_splits=5) in cross_val_score()? [closed]
When an integer is passed to the cv parameter of cross_val_score(): StratifiedKFold is used if the estimator is a classifier and y is either binary or multiclass. In all other cases, KFold is used.

from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, KFold, StratifiedKFold

data = datasets.load_breast_cancer()
x, y = data.data, data.target

print(cross_val_score(DecisionTreeClassifier(random_state=1), x, y, cv=5))
print(cross_val_score(DecisionTreeClassifier(random_state=1), x, y, cv=KFold(n_splits=5)))
print(cross_val_score(DecisionTreeClassifier(random_state=1), x, y, cv=StratifiedKFold(n_splits=5)))

[0.90434783 0.90434783 0.92035398 0.94690265 0.91150442]
[0.89473684 0.92982456 0.94736842 0.95614035 0.82300885]
[0.90434783 0.90434783 0.92035398 0.94690265 0.91150442]

Note that cv=5 and cv=StratifiedKFold(n_splits=5) give identical scores here (the estimator is a classifier with binary y), while cv=KFold(n_splits=5) differs.
39,904
Confusion on hinge loss and SVM
Searching for the quoted text, it seems the book is Data Science for Business (Provost and Fawcett), and they're describing the soft-margin SVM. Their description of the hinge loss is wrong. The problem is that it doesn't penalize misclassified points that lie within the margin, as you mentioned. In SVMs, smaller weights correspond to larger margins. So, using this "version" of the hinge loss would have pathological consequences: we could achieve the minimum possible loss (zero) simply by choosing weights small enough that all points lie within the margin, even if every single point is misclassified. Because the SVM optimization problem contains a regularization term that encourages small weights (i.e. large margins), the solution will always be the zero vector. This means the solution is completely independent of the data, and nothing is learned. Needless to say, this wouldn't make for a very good classifier. The correct expression for the hinge loss for a soft-margin SVM is: $$\max \Big( 0, 1 - y f(x) \Big)$$ where $f(x)$ is the output of the SVM given input $x$, and $y$ is the true class (-1 or 1). When the true class is -1 (as in your example), the hinge loss is $\max(0, 1 + f(x))$: it is zero for $f(x) \le -1$ and increases linearly thereafter. The loss is therefore nonzero for misclassified points, as well as for correctly classified points that fall within the margin. For a proper description of soft-margin SVMs using the hinge loss formulation, see The Elements of Statistical Learning (section 12.3.2) or the Wikipedia article.
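The correct hinge loss is a one-liner; a minimal sketch (function name is mine) that spells out the cases discussed above:

```python
def hinge_loss(y, fx):
    """Hinge loss for true label y in {-1, +1} and SVM output f(x)."""
    return max(0.0, 1.0 - y * fx)

# Correctly classified and outside the margin: zero loss.
assert hinge_loss(-1, -2.0) == 0.0
# Correctly classified but inside the margin: positive loss.
assert hinge_loss(-1, -0.5) == 0.5
# On the decision boundary: loss of exactly 1.
assert hinge_loss(-1, 0.0) == 1.0
# Misclassified: loss grows linearly with the margin violation.
assert hinge_loss(-1, 2.0) == 3.0
```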
39,905
Confusion on hinge loss and SVM
The (A) hinge function can be expressed as $$y_{i} = \gamma \max{\left(x_{i}-\theta, 0\right)} + \varepsilon_{i},$$ where: $\gamma$ is the change in slope after the hinge. In your example, this amounts to the slope following the hinge, since your hinge-only model (see below) assumes zero effect of $x$ on $y$ until the hinge. $\theta$ is the point (in $\boldsymbol{x}$) at which the hinge is located, and is a parameter estimated for the model. I believe your question is answered by considering that the location of the hinge is informed by the loss function. $\varepsilon_{i}$ is some error term with some distribution. Hinge functions can also be used to bend any line: $$y_{i} = \alpha_{0} + \beta x_{i} + \gamma \max{\left(x_{i}-\theta, 0\right)} + \varepsilon_{i},$$ where: $\alpha_{0}$ is the model constant, and the intercept of the curve before the hinge (i.e. for $x < \theta$). Of course, if $\theta < 0$, then the curve intersects the $y$-axis after the hinge, so $\alpha_{0}$ will not necessarily be the $y$-intercept of the bent line. $\beta$ is the slope of the line relating $y$ to $x$. $\gamma$ is the change in slope after the hinge. In addition, the hinge can be used to model how a functional relationship between $y$ and $x$ changes form, as in this model where the relationship becomes quadratic after the hinge: $$y_{i} = \alpha_{0} + \beta x_{i} + \gamma \max{\left(x_{i}-\theta, 0\right)^{2}} + \varepsilon_{i}.$$
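As a sketch of how the hinge location $\theta$ is informed by the loss function, here is a hypothetical grid search for the hinge-only model (toy data and names are mine, not from the question): for each candidate $\theta$, the slope $\gamma$ has a closed-form least-squares solution, and we keep the $\theta$ with the smallest squared error.

```python
# Hinge-only model y = gamma * max(x - theta, 0), fit by grid search over theta.
# Noise-free toy data with gamma = 2 and theta = 3, so the search should recover theta = 3.
xs = [i * 0.5 for i in range(13)]            # 0.0, 0.5, ..., 6.0
ys = [2.0 * max(x - 3.0, 0.0) for x in xs]

def sse_for_theta(theta):
    h = [max(x - theta, 0.0) for x in xs]    # hinge basis values
    hh = sum(hi * hi for hi in h)
    if hh == 0.0:                            # hinge past all data: gamma is unidentified
        return sum(y * y for y in ys)
    # Closed-form least-squares slope for this fixed theta.
    gamma = sum(y * hi for y, hi in zip(ys, h)) / hh
    return sum((y - gamma * hi) ** 2 for y, hi in zip(ys, h))

candidates = [i * 0.1 for i in range(1, 60)]  # grid of theta values in (0, 6)
theta_hat = min(candidates, key=sse_for_theta)
print(theta_hat)  # essentially 3.0
```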
39,906
MLE: Marginal vs Full Likelihood
The usual way of doing likelihood inference on a parameter of interest in the presence of nuisance parameters consists of using the Profile Likelihood function (see this link). In your context, the profile likelihood is: $$\mathcal{L}_P(\theta_1;\mathbf{x}) = \max_{\theta_2,\dots,\theta_n} \mathcal{L(\boldsymbol{\theta};\mathbf{x})}.$$ The object of interest is the normalized profile likelihood, which is nothing but $$R_P(\theta_1, {\bf x}) = \frac{\mathcal{L}_P(\theta_1;\mathbf{x})}{\mathcal{L(\boldsymbol{\widehat{\theta}};\mathbf{x})}}.$$ This function can be used to construct confidence intervals on the parameter of interest. A thorough study of the profile likelihood can be found in: Sprott, David A. Statistical Inference in Science. Springer Science & Business Media, 2008. In some cases, some people assign a distribution to the "nuisance parameters" ($\theta_2,\dots,\theta_n$, in your case) and integrate them out. This is a hybrid of Bayesian and Classical inference, and it is called the integrated likelihood. However, this requires assuming a distribution on the nuisance parameters in order to guarantee that the integral is finite. See: Berger, James O., Brunero Liseo, and Robert L. Wolpert. "Integrated likelihood methods for eliminating nuisance parameters." Statistical Science 14.1 (1999): 1-28. Note that, if you do not assign a proper distribution on the nuisance parameters, there is no guarantee that the marginal/integrated likelihood function is finite. Using a distribution $\pi(\theta_2, \dots, \theta_n)$ guarantees that $\mathcal{L(\theta_1;\mathbf{x})} = \idotsint \mathcal{L(\boldsymbol{\theta};\mathbf{x})} \pi(\theta_2, \dots, \theta_n) \,d\theta_2 \dots d\theta_n < \infty,$ by Bayes' theorem (for regular models).
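As a concrete (hypothetical) illustration, here is the profile likelihood for a normal mean with the variance as nuisance parameter, a case where the profiling can be done in closed form; the data and function names are illustrative only:

```python
from math import log, exp
from statistics import mean

# Toy data from a normal model with mean mu (parameter of interest)
# and variance sigma^2 (nuisance parameter).
x = [4.2, 5.1, 3.8, 4.9, 5.3, 4.6, 4.0, 5.0]
n = len(x)

def profile_loglik(mu):
    # For fixed mu, the MLE of sigma^2 is mean((x - mu)^2); plugging it back
    # into the normal log-likelihood gives, up to an additive constant,
    # the profile log-likelihood -(n/2) * log(sigma^2_hat(mu)).
    s2_hat = sum((xi - mu) ** 2 for xi in x) / n
    return -n / 2 * log(s2_hat)

mu_hat = mean(x)  # full MLE of mu

def R_P(mu):
    # Normalized profile likelihood R_P(mu) = L_P(mu) / L(theta_hat), always <= 1.
    return exp(profile_loglik(mu) - profile_loglik(mu_hat))

# The normalized profile likelihood peaks (at exactly 1) at the full MLE.
assert abs(R_P(mu_hat) - 1.0) < 1e-12
assert R_P(mu_hat + 0.5) < 1.0 and R_P(mu_hat - 0.5) < 1.0
```

A likelihood interval for $\mu$ is then $\{\mu : R_P(\mu) \ge c\}$ for a chosen cutoff $c$.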
39,907
MLE: Marginal vs Full Likelihood
A general answer is tough and +1 to @Shifer, but if you're looking for one particular example when integrating outperforms profiling then you might find the Neyman-Scott "paradox" interesting. The problem: suppose we have $\{(x_1,y_1),\dots,(x_n,y_n)\}$ where each pair $(x_i, y_i)$ is independent and $$ {x_i \choose y_i} \sim \mathcal N(\mu_i \mathbf 1, \sigma ^2 I_2). $$ Thus we have $n$ pairs of Gaussian RVs where all $2n$ RVs are independent but each pair has a different mean. The goal now is to estimate $\sigma^2$. I'm going to do this with matrices so I don't have summations all over the place, so I'll write this as$\newcommand{\e}{\varepsilon}$ $$ z = A\mu + \e $$ where $z = (x_1, y_1, \dots, x_n, y_n)^T$, $\e \sim \mathcal N(0, \sigma^2 I_{2n})$, $\mu = (\mu_1,\dots,\mu_n)^T$, and $$ A = \left(\begin{array}{cccccc} 1 & 0 & 0 & \dots & 0 & 0\\ 1 & 0 & 0 & \dots & 0& 0\\ 0 & 1 & 0 & \dots & 0& 0\\ 0 & 1 & 0 & \dots & 0& 0\\ & & & \vdots & & \\ 0 & 0 & 0 & \dots & 0& 1 \\ 0 & 0 & 0 & \dots & 0& 1 \end{array}\right) $$ so $A$ picks out the correct mean parameter for each pair and then the error is a spherical Gaussian. I'll use $\tau =1 / \sigma^2$ in some places to make the math (especially the derivatives) easier. Profiling The likelihood is $$ f(z | \mu, \sigma^2) = \left(\frac{\tau}{2\pi} \right)^n \exp\left(-\frac \tau 2 \|z- A\mu\|^2\right). $$ We'll first find the MLE of $\mu$ and plug that in to get a profiled likelihood. This is just OLS linear regression so $\hat\mu = (A^TA)^{-1}A^Tz$ in this case, and therefore the profiled log likelihood is (up to some constant) $$ \ell_p(y | \hat\mu, \tau) = -\frac \tau 2 \|z- A\hat \mu\|^2 + n\log \tau. $$ We don't actually need this, but it's worth noting that $A^TA = 2I$ so actually $\hat\mu$ is just the mean of each pair, i.e. $\hat\mu_i = \frac{x_i + y_i}{2}$. This leads to $$ \frac{\partial \ell_p}{\partial \tau} = -\frac 1 2 \|z- A\hat \mu\|^2 + n\tau^{-1}. 
$$ Solving for zero, and then inverting to get the MLE of $\sigma^2$, we get $$ \hat\sigma^2 = \frac{\|z- A\hat \mu\|^2}{2n} $$ (you can take the second derivative to show this is actually a max, and that is one of the main simplifications in using $\tau$ instead of $\sigma^2$). Let $H = A(A^TA)^{-1}A^T$ and note that $$ \|z- A\hat \mu\|^2 = \|(I-H)z\|^2 = z^T(I-H)z $$ so since $z \sim \mathcal N(A\mu, \sigma^2 I)$ we've got a Gaussian quadratic form. This means $$ E(z^T(I-H)z) = \sigma^2 \text{tr}(I-H) + \mu^TA^T(I-H)A\mu \\ = n\sigma^2 $$ (see e.g. here for a proof of this result for quadratic forms). All together this means $$ E(\hat\sigma^2) = \frac{n\sigma^2}{2n} = \frac{\sigma^2}2 $$ so $\hat\sigma^2$ is biased, and this bias does not go away as $n\to\infty$ (i.e. it is inconsistent). That's not good. Integrating So we can try something else. I'm going to suppose $\mu \sim \mathcal N(0, (\tau\lambda)^{-1} I)$ (and $\mu \perp \e$) and then I'll integrate $\mu$ out and maximize the resulting integrated likelihood (so it'll be like a MAP). I don't actually need to evaluate the integral in this case since $\mu$ and $\e$ being independent Gaussians means the marginal distribution is also Gaussian. In particular, $$ A\mu + \e \sim \mathcal N(0, (\tau\lambda)^{-1}(AA^T + \lambda I)) $$ so now I have $$ f_I(z | \tau, \lambda) = \left(\frac{\tau\lambda}{2\pi}\right)^{n} |AA^T + \lambda I|^{-1/2} \exp\left(-\frac{\tau\lambda}2 z^T(AA^T+\lambda I)^{-1}z\right). $$ I'm going to obtain my estimate by maximizing this w.r.t. $\tau$ so I'll take logs to get $$ \ell_I(z | \tau, \lambda) = n\log \tau - \frac{\tau\lambda}2 z^T(AA^T+\lambda I)^{-1}z $$ up to some constants. This leads to $$ \frac{\partial \ell_I}{\partial \tau} = \frac{n}{\tau} - \frac{\lambda}{2} z^T(AA^T+\lambda I)^{-1}z $$ so $$ \tilde \sigma^2 = \frac{\lambda}{2n}z^T(AA^T+\lambda I)^{-1}z. 
$$ This again is a Gaussian quadratic form although now $z \sim \mathcal N(0, (\sigma^2/\lambda)(AA^T+\lambda I))$ which means $$ E(z^T(AA^T+\lambda I)^{-1} z) = \frac{\sigma^2}\lambda \text{tr}\left[(AA^T+\lambda I)^{-1}(AA^T+\lambda I)\right] \\ = \frac{2n \sigma^2}\lambda $$ so $$ E(\tilde \sigma^2) = \frac{\lambda}{2n} \cdot \frac{2n \sigma^2}\lambda = \sigma^2 $$ so not only is this unbiased but it is unbiased for any valid prior variance. This example definitely can feel a little contrived but it does align with at least the intuitive idea that when there are tons of parameters, integrating with respect to a sensible prior (and I think a Gaussian prior for normal means is often sensible) can lead to better results than profiling (I call this intuitive because I think of averages as being more stable than maxima). But I was fortunate here that everything was analytically tractable for the integration and in general you won't be so lucky. Again, in summary this is a big topic with lots of complexities but hopefully this was at least interesting.
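The profiling half of this argument is easy to check by simulation. A minimal sketch (pure Python, illustrative names, true $\sigma^2 = 1$): each pair's mean is profiled out via the pair average, and the resulting variance estimate concentrates near $\sigma^2/2$, not $\sigma^2$:

```python
import random

random.seed(7)
sigma2 = 1.0       # true variance we are trying to estimate
n = 20000          # number of pairs (so 2n observations in total)

# Each pair shares its own mean mu_i but has common variance sigma^2.
prof_terms = []
for _ in range(n):
    mu_i = random.uniform(-5, 5)
    x = random.gauss(mu_i, sigma2 ** 0.5)
    y = random.gauss(mu_i, sigma2 ** 0.5)
    m = (x + y) / 2                       # per-pair MLE of mu_i
    prof_terms.append((x - m) ** 2 + (y - m) ** 2)

# Profiled MLE of sigma^2: ||z - A mu_hat||^2 / (2n), which converges
# to sigma^2 / 2 rather than sigma^2, exactly the bias derived above.
sigma2_prof = sum(prof_terms) / (2 * n)

print(sigma2_prof)        # close to 0.5, i.e. half the true variance
print(2 * sigma2_prof)    # doubling removes the bias, matching E(sigma2_hat) = sigma^2 / 2
```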
39,908
What is the CDF of the Shapiro-Wilk $W$ statistic?
There are many tests of normality. Some of them are based on Q-Q plots (normal probability plots) of the data, measuring according to various criteria how "nearly linear" the Q-Q plot is. Intuitive view of Shapiro-Wilk test. In visual inspection of a Q-Q plot, the extreme values of a truly normal sample can appear to give an undue departure from a straight line. Here are Q-Q plots of three normal samples of size $n = 500.$ For each random sample from $\mathsf{Norm}(0,1),$ the central portion of the plot seems very close to linear, but the tails look a "little wobbly." (These are not cherry-picked examples, they are the first three normal samples of size 500 resulting from a set.seed statement in R based on today's date.) One feature of the Shapiro-Wilk test is that it tends to 'down-weight' observations in the two tails. The three P-values from shapiro.test are 0.3218225, 0.7221126, and 0.8429852, respectively (all above 0.05); so all three samples are consistent with sampling from a normal population.

set.seed(822)
shapiro.test(rnorm(500))$p.value
[1] 0.3218225
shapiro.test(rnorm(500))$p.value
[1] 0.7221126
shapiro.test(rnorm(500))$p.value
[1] 0.8429852

You can check on this site and more generally online for discussions of the technical details of computing the Shapiro-Wilk test statistic. But for your purposes, perhaps this intuitive description will be a useful start. Power of Shapiro-Wilk test. As mentioned in the NIST Handbook (and its references) the Shapiro-Wilk test is known for its high power against various alternatives, compared with other tests of normality. In order to make sense of power, you must have a specific significance level and non-normal distribution in mind.
Student's t distribution with $\nu = 10$ degrees of freedom (symmetrical with heavy tails, $\mu = 0,\, \sigma\approx 1.12)$ and the distribution $\mathsf{Gamma}(\text{shape}=10, \text{rate}=10)$ (right-skewed, $\mu=1,\, \sigma\approx 0.316)$ are somewhat close to normal in shape, as illustrated in the figure below. My impression is that the power in specific situations is usually obtained by simulation. The following simulations in R show approximate powers of the Shapiro-Wilk normality test at level 5% against each of these two alternative distributions for samples of size $n=500.$ The power is the probability of rejection when data are from the alternative distribution. Respective power values are about 65% for $\mathsf{T}(10)$ and above 99% for $\mathsf{Gamma}(10,10).$ [With $m = 100{,}000$ iterations one can expect about two-place accuracy.]

set.seed(818); m = 10^5; n = 500
p.val = replicate(m, shapiro.test(rt(n, 10))$p.val)
mean(p.val < 0.05)
[1] 0.65474
set.seed(2018); m = 10^5; n = 500
p.val = replicate(m, shapiro.test(rgamma(n, 10, 10))$p.val)
mean(p.val < 0.05)
[1] 0.99978

Addendum: P-values. Traditionally, significance of a Shapiro-Wilk test was determined using tabled values of the test statistic $W.$ The P-values used in modern software programs seem to be mainly due to the work of Patrick Royston published in Applied Statistics: (1982) Vol. 31, 115-124 and 176-180, and (1995) Vol. 44, 547-551. [See References in the R documentation for shapiro.test.] The second 1982 paper extended the algorithm for P-values to accommodate $n \le 2000.$ The general method is to find a serviceable transformation to make the test statistic $W$ approximately normal.
Currently, R accepts datasets with sizes $3 \le n \le 5000.$ However, the best accuracy is not guaranteed for P-values above $0.1.$ Simulation can provide an intuitive idea of the null distribution of $W$ for a particular $n.$ In R, the code set.seed(1234); rnorm(500) produces a standard normal sample of size $n = 500$ for which R gives $W = 0.99623$ with p-value $= 0.2848.$ To simulate the distribution of $W$ for $n = 500,$ we use the program below:

set.seed(1234); x = rnorm(500); w.obs = shapiro.test(x)$stat
set.seed(2018); m = 10^5; n = 500
w = replicate(m, shapiro.test(rnorm(n))$stat)
mean(w < w.obs)
[1] 0.28396

The histogram below shows the simulated distribution of $W$ along with the observed value of $W = 0.9962$ for our sample, with P-value about 0.2848. The null distribution of exact P-values from a test with a continuous test statistic is $\mathsf{Unif}(0,1).$ If we run a simulation similar to the one above, but capturing P-values (instead of test statistics), we can see how close the Shapiro-Wilk P-values come to this uniform distribution. Because our P-values are not based on a continuous test statistic and are not exactly correct, we do not quite have a perfect fit to uniform. [The left-most bar for P-values $< 0.05$ (rejection) has area approximately $0.05.$]
39,909
What is the CDF of the Shapiro-Wilk $W$ statistic?
tl;dr: There is no known distribution; i.e., it doesn't have a name. You use simulations instead. I found this to give some insight: Basically, Shapiro and Wilk calculated the distribution of their statistic only for $n=3$: a truncated $Beta(\frac{1}{2}, \frac{1}{2})$ distribution on $\frac{3}{4} \le w \le 1$ and zero elsewhere. For $3 < n \le 20$, the coefficient values in the statistic (expected values of the order statistics, times the inverse of the covariance matrix, normed) were calculated precisely, but there's no name for the distribution. For $20 < n \le 50$ these coefficients were only approximated, and (Monte Carlo) simulations were used. Later researchers tried to find the asymptotic distribution, which in itself has no name. Leslie, Stephens, and Fotopoulos showed that $n(w - \mathbb{E}(w)) \sim -\sum_{k=3}^{\infty}\frac{Z_k^2-1}{k}$ where the $Z_k \sim N(0,1)$ are i.i.d. But in any case that too was shown to converge "at a painfully slow rate. ... For such reasons, the limit distribution seems not to be of much practical use, and for $n > 50$, Monte Carlo simulation seems still to be needed."
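The limiting random variable in the Leslie-Stephens-Fotopoulos result can be sampled directly by truncating the infinite sum (a quick numpy sketch; the truncation point K and sample size m are arbitrary choices of mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
K, m = 1000, 2000                 # truncation point and number of draws
k = np.arange(3, K + 1)

# draws of  -sum_{k=3}^{K} (Z_k^2 - 1)/k  with Z_k iid standard normal
Z = rng.standard_normal((m, k.size))
draws = -((Z**2 - 1) / k).sum(axis=1)

# E(Z_k^2 - 1) = 0, so the limit variable has mean 0;
# its variance is sum_{k>=3} 2/k^2, about 0.79
print(draws.mean(), draws.var())
```

The sample mean and variance should agree with the theoretical values; a histogram of `draws` gives a visual impression of the (left-skewed) limit law.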
39,910
ANOVA vs pairwise t-tests with multiple test correction
The difference becomes clear if you understand the null/alternative hypothesis of each test. ANOVA's null hypothesis is that the group means are the same, while the alternative is that at least one group mean is different from the others. This analysis does not tell you which group mean is different, or which differences between groups are significant; it only tells you that they are not all the same. This sort of approach is favorable because, assuming the data satisfy the assumptions of linear models, we need not utilize any correction method and can interpret the p-value readily. Compare this to a t-test. The null hypothesis is usually that the difference between groups is zero. Assuming we utilize an appropriate test correction methodology, we will be able to say something to the effect of "the differences between group i and group j are significant at the $\alpha$ level of significance". While I think most practitioners would suggest something like a Bonferroni method to account for the inflation of the Type I error rate when doing this, I would personally caution against making inferences from these sorts of analyses.
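As an illustration of the two approaches (a Python sketch using scipy; the three groups are made-up numbers), ANOVA yields a single p-value for the global null, while the pairwise t-tests each need a correction such as Bonferroni:

```python
from itertools import combinations
from scipy import stats

# three hypothetical groups; group c is clearly shifted upward
groups = {"a": [1.2, 2.1, 1.8, 2.5, 1.9],
          "b": [2.0, 2.4, 1.7, 2.2, 2.6],
          "c": [5.1, 5.8, 6.2, 5.5, 6.0]}

# One-way ANOVA: H0 is "all group means are equal"
f_stat, p_anova = stats.f_oneway(*groups.values())
print(p_anova)   # tiny: at least one mean differs, but ANOVA won't say which

# Pairwise t-tests with a Bonferroni correction for 3 comparisons
pairs = list(combinations(groups, 2))
for g1, g2 in pairs:
    p_raw = stats.ttest_ind(groups[g1], groups[g2]).pvalue
    p_adj = min(1.0, p_raw * len(pairs))   # Bonferroni adjustment
    print(g1, g2, round(p_adj, 4))
```

The Bonferroni-adjusted p-values identify which specific pairs differ, at the cost of the multiplicity penalty discussed above.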
39,911
Pairwise comparisons with emmeans for a mixed three-way interaction in a linear mixed-effects model
It shouldn't be necessary to fit a separate model just to do the post-hoc comparisons you want. You had tried:

emms <- emmeans(fit1b, ~ AB*C)
contrast(emms, interaction = "pairwise")

but you can get the same results from the original model using by variables judiciously:

emms1 <- emmeans(fit1, ~ A*B | C)
con1 <- contrast(emms1, interaction = "pairwise")
pairs(con1, by = NULL)

The con1 results are the desired 1-d.f. interaction effects for each level of C (the by factor is remembered). Then we compare them pairwise, no longer using the by grouping. By default, a Tukey adjustment is made to the family of comparisons, but you may use a different method via adjust.
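The arithmetic behind those "pairwise comparisons of interaction contrasts" can be seen with plain numpy (a toy sketch with made-up cell means; emmeans additionally handles the standard errors and the Tukey adjustment):

```python
import numpy as np

# hypothetical cell means, indexed [A, B, C] with 2 x 2 x 3 levels
mu = np.array([[[1.0, 2.0, 4.0],
                [0.5, 1.0, 2.0]],
               [[1.5, 3.0, 3.5],
                [2.0, 2.5, 1.0]]])

# 1-d.f. A:B interaction effect within each level of C
# (a difference of differences, one value per C level)
ab = (mu[0, 0] - mu[1, 0]) - (mu[0, 1] - mu[1, 1])
print(ab)

# pairwise differences of those interaction effects across C levels
for i in range(3):
    for j in range(i + 1, 3):
        print(f"C{i+1} - C{j+1}: {ab[i] - ab[j]}")
```

Each printed difference is a "contrast of contrasts": it asks whether the A:B interaction itself changes between two levels of C, which is exactly what the three-way interaction tests.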
39,912
Does a density forecast add value beyond a point forecast when the loss function is given?
I can think of one-and-a-half more or less realistic situations where a full density is better than a point forecast, even if the loss function is known. The nitpicky situation is one where the user's loss function depends not only on a point forecast, but on a two-sided prediction interval, or even the entire density, i.e., the loss function is a scoring rule. Yes, a loss function is typically defined to depend on a single point forecast, so I'm being loose with nomenclature here. Nevertheless, situations like these do occur, e.g., in financial volatility forecasting. Or where I work, in retail replenishment forecasting: we may want to achieve a 95% service level, so on the face of it, we may only be interested in that (point) quantile forecast. However, a 95% quantile forecast may be 4, while we may be constrained to replenish in pack sizes of 8. In such a situation, it can be valuable to know what service-level percentage 8 units corresponds to. The more relevant situation is one where we are interested in functions of predictive densities. Again, consider retail forecasting: because of the delivery schedule, our replenishment order may need to cover three days, Tuesday to Thursday. However, we forecast at daily granularity. So we may be interested in the 95% quantile forecast of the sum of the demands, and for the convolution, we need the full densities. (We could also try to forecast on three-day bucket granularity, but that becomes problematic if, say, a promotion starts in the middle of the bucket.)
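The three-day convolution can be sketched numerically. Here I assume, purely for illustration, that daily demand is Poisson with mean 4 and the days are independent, so the exact three-day total is Poisson(12) and the convolved quantile can be checked against it:

```python
import numpy as np
from scipy import stats

lam = 4
support = np.arange(0, 61)                   # covers essentially all mass
pmf_day = stats.poisson.pmf(support, lam)    # daily demand density

# convolve three daily densities to get the 3-day total demand
pmf_3day = np.convolve(np.convolve(pmf_day, pmf_day), pmf_day)
cdf_3day = np.cumsum(pmf_3day)

# 95% quantile of the 3-day total: what the replenishment order needs
q95_total = int(np.argmax(cdf_3day >= 0.95))

# naive alternative of stacking three daily 95% quantiles overshoots
q95_day = int(stats.poisson.ppf(0.95, lam))
print(q95_total, 3 * q95_day)
```

The quantile of the sum is smaller than the sum of the daily quantiles, which is precisely why the full daily densities (and not just the daily 95% points) are needed.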
39,913
Does a density forecast add value beyond a point forecast when the loss function is given?
Background (may be skipped) I will be thinking in decision-theoretic terms as follows. A user must choose an action $a$ among a set of possibilities $A$. The action will bring him/her some "utility" (a notion commonly used in economics) $u(a;s)$ depending on the state of nature $s$ that will be realized in the future, where $s \in S$, the set of all possible states. (Utility is basically the negative of loss, and what follows could be reformulated equivalently either in terms of utility or loss.) The user aims at maximizing the expected utility (or equivalently, minimizing the expected loss) w.r.t. the action, $$ \max_{a \in A} \mathbb{E}_{S}\, u(a;s). $$ The choice of action is based on the forecast of the state of nature to be realized. Given a density forecast $\hat f_S(\cdot)$, a user can calculate the expected utility of a particular action by integrating the utility of that action over the predicted distribution of the states of nature, $$ \mathbb{E}_{\hat S}\, u(a;s) = \int u(a;s) \hat f_S(s)\, ds. $$ Then he/she chooses the action (among all possible ones) that maximizes this expected utility, $\hat a^* := \arg\max_{a \in A} \mathbb{E}_{\hat S}\, u(a;s)$. The maximized expected utility for this density forecast is $\hat u^* := \mathbb{E}_{\hat S}\, u(\hat a^*;s)$. If the utility function has a unique maximum (the loss function has a unique minimum), the optimal action is unique. If the state of nature is a continuous random variable, there exists a point in the distribution (a state of nature) that yields exactly $\hat u^*$. That point defines the target of the "relevant" point forecast.
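For a discrete toy version of this setup (all numbers invented), the recipe "integrate utility against the density forecast, then take the argmax over actions" is just a matrix product:

```python
import numpy as np

# utility u(a; s): rows = actions, columns = states of nature
U = np.array([[1.0, 0.0, -1.0],    # action 0: a risky bet on state 0
              [0.0, 1.0,  0.0]])   # action 1: a safe bet on state 1

# density forecast over the three states
f_hat = np.array([0.2, 0.5, 0.3])

expected_utility = U @ f_hat        # E_{hat S} u(a; s) for each action a
a_star = int(np.argmax(expected_utility))
print(expected_utility, a_star)
```

Here the expected utilities are $(-0.1, 0.5)$, so the user picks action 1; a different density forecast over the same states could flip the choice.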
Hence, the user will get exactly the same maximized (over all possible actions) expected utility regardless of whether the forecast he gets is a density forecast or the "relevant" point forecast (a unit probability mass on a certain state of nature), provided the quality of the two forecasts is "equally good" (the easiest way to understand the latter intuitively is to consider the case where both the point and the density forecast are perfect). Main part (see background for more details) I think it is reasonable to assume that the usefulness of a forecast is fully reflected by the loss it incurs to a given user. Then the aim of a user is to choose a forecast that minimizes the expected loss. Hence, given a predicted distribution, the user will take a concrete function thereof (e.g. the predicted mean) that minimizes the expected loss. The rest of the predicted density will not have any added value to the user. If the loss function has a unique minimum, the function will be single-valued, and that value will be the point forecast relevant for the user. For example, if the user's loss function is quadratic (which has a unique minimum at the mean of the true distribution), he/she will only care about the forecast of the mean. If another user is facing absolute loss (which has a unique minimum at the median of the true distribution), he/she will only care about the forecast of the median. Providing a density forecast for either of these users in addition to forecasts of the mean and median, respectively, will be of zero added value to them. Elliott and Timmermann (2016a) write on p. 423-424 (regarding evaluation of density forecasts): One way to [evaluate a density forecast] would be to convert the density forecast into a point forecast and use the methods for point forecast evaluation. This simple approach to evaluating density forecasts might be appropriate for a number of reasons.
<...> [D]ensity forecasts can be justified on the grounds that there are multiple users with different loss functions. Any one of these users might examine the performance of a density forecast with reference to the specific loss function deemed appropriate for their problem. The relevant measure of forecast performance is the average loss calculated from each user’s specific loss function. Moreover, given a known loss function, a density forecast may even be inferior to a relevant point forecast, for the following two reasons. First, density forecasts are typically more difficult to produce than point forecasts. Second, they might trade off precision/accuracy at a particular point (say, mean or median) for precision/accuracy across the whole distribution that is being predicted. That is, if one is predicting the whole density, one might have to sacrifice some precision/accuracy for the forecast of the mean so as to get greater precision/accuracy elsewhere. As Elliott and Timmermann (2016b) write, [T]he relationships between the scoring rules popular in the literature and the underlying loss functions for individual users is not clear. Thus, it could well be that the scoring rule used provides a poor estimate of the feature of the conditional distribution that some users wish to construct. A similar quote can be found in Elliott and Timmermann (2016a), p. 277-278: It would seem that provision of a predictive density is superior to reporting a point forecast since it both (a) can be combined with a loss function to produce any point forecast; and (b) is independent of the loss function. In classical estimation of the predictive density, neither of these points really holds up in practice. <...> [I]n the classical setting the estimated predictive distributions depend on the loss function. All parameters of the predictive density need to be estimated and these estimates require some loss function, so loss functions are thrown back into the mix. 
The catch here is that the loss functions that are often employed in density estimation do not line up with those employed for point forecasting, which can lead to inferior point forecasts. <...> Moreover, conditional distributions are difficult to estimate well, and so point forecasts based on estimates of the conditional density may be highly suboptimal from an estimation perspective. Hence, when a loss function is given, it might make sense to focus on forecasting the particular point tailored to the loss function rather than attempt to forecast the whole distribution. This might be easier to do and/or more accurate. A critical question to myself: may it be that the "relevant" point forecast cannot be expressed as a function of the unknown density but rather is different (as a function, not just in its value) for different densities? Then a density forecast would be needed to find out which point forecast one is interested in, making a density forecast an inevitable step in the point forecasting process. References: Elliott, G., & Timmermann, A. (2016a). Economic forecasting. Princeton: Princeton University Press. Elliott, G., & Timmermann, A. (2016b). Forecasting in economics and finance. Annual Review of Economics, 8, 81-110.
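The claim earlier in this answer, that a quadratic-loss user only needs the mean and an absolute-loss user only needs the median, is easy to check numerically. A brute-force Python sketch with an invented discrete forecast distribution:

```python
import numpy as np

# a discrete "density forecast": support values and probabilities
s = np.array([0.0, 1.0, 2.0, 3.0, 10.0])   # right-skewed support
p = np.array([0.1, 0.3, 0.3, 0.2, 0.1])

grid = np.linspace(0, 10, 2001)             # candidate point forecasts

def exp_loss(loss):
    # expected loss of each candidate forecast under the density
    return np.array([(p * loss(a, s)).sum() for a in grid])

a_sq  = grid[np.argmin(exp_loss(lambda a, s: (a - s) ** 2))]
a_abs = grid[np.argmin(exp_loss(lambda a, s: np.abs(a - s)))]

pred_mean = (p * s).sum()     # 2.5; the CDF first reaches 0.5 at s = 2
print(a_sq, a_abs, pred_mean)
```

The quadratic-loss minimizer lands on the predicted mean (2.5) and the absolute-loss minimizer on the predicted median (2.0), so each user's "relevant" point forecast is indeed a single functional of the density.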
39,914
Intuition behind Correlated Terms in Mixed Effect Models
Suppose, for concreteness, that we have a model y ~ 1 + x1 + x2 + (1|g) where x1 and x2 are (for simplicity) continuous predictor variables (g is a categorical grouping variable). This model states that the expected value of y changes linearly with changes in x1 and x2 and that there are differences in the intercept between groups (i.e. $\hat y = (\beta_0+b_{0,i}) + \beta_1 x_1 + \beta_2 x_2$, with $b_{0,i} \sim \textrm{Normal}(0,\sigma^2_0)$); that's what the 1 in (1|g) means. If we change the random effect to (1|g) + (0 + x1|g) + (0 + x2|g) (separate terms), or equivalently (1 + x1 + x2 || g), that specifies that the intercept, slope with respect to x1, and slope with respect to x2 all vary across groups, but this variation is independent: we could write this model out as $$ \begin{split} \hat y_{ij} & = (\beta_0 + b_{0,i}) + (\beta_1 + b_{1,i}) x_1 + (\beta_2 + b_{2,i}) x_2 \\ b_{0,i} & \sim \textrm{Normal}(0,\sigma^2_0) \\ b_{1,i} & \sim \textrm{Normal}(0,\sigma^2_1) \\ b_{2,i} & \sim \textrm{Normal}(0,\sigma^2_2) \quad . \end{split} $$ So a particular group might have a higher-than-average intercept, a lower-than-average response to $x_1$, and an average response to $x_2$, but all of these term-specific effects are independent of each other. If we instead write (1 + x1 + x2 | g), we can no longer specify models for the $b_{k,i}$ separately: instead we have to write $$ \boldsymbol b_i = \{b_{0,i}, b_{1,i}, b_{2,i} \} \sim \textrm{MVN}(\boldsymbol 0, \Sigma) $$ (where MVN is "multivariate normal"). Now in addition to the separate variances for each varying term ($\sigma^2_0$, $\sigma^2_1$, $\sigma^2_2$) we also have to specify the covariances or correlations ($\rho_{01}$, $\rho_{02}$, $\rho_{12}$). For example, suppose that $\rho_{12}$, the correlation between the $x_1$ and the $x_2$ slopes, is negative.
That means that groups that respond strongly (positively) to changes in $x_1$ are likely to respond weakly, or even in the opposite direction, to changes in $x_2$ -- similar logic applies to $\rho_{01}$ and $\rho_{02}$ (correlations between among-group variation in the intercept and the two slopes). You can compare the likelihood of a model with the full ("unstructured" or "general positive-definite") variance-covariance matrix to one with the diagonal (independent) variance-covariance matrix using likelihood ratio tests (or AIC): the independent-terms model is properly nested within the full model (i.e. starting with the full model and constraining $\rho_{01}=\rho_{02}=\rho_{12}=0$ gets you the reduced/nested model). Alternatively, you can find confidence intervals for individual $\rho_{ij}$ parameters. (nlme::lme does this by computing Wald intervals on a constrained (hyperbolic-tangent) scale and back-transforming; lme4::lmer does it by computing likelihood profiles.)
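To make the MVN random-effects structure concrete, here is a hedged NumPy sketch (not lme4; the number of groups, the SDs, and $\rho_{12}=-0.6$ are made-up values for illustration). It draws the per-group deviation vector $\boldsymbol b_i$ from a multivariate normal with a negative slope-slope correlation and confirms that the simulated $x_1$ and $x_2$ slope deviations are negatively correlated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Among-group SDs for (intercept, x1 slope, x2 slope) and a negative rho_12
sd = np.array([1.0, 0.5, 0.5])
rho12 = -0.6
corr = np.array([[1.0, 0.0,   0.0],
                 [0.0, 1.0,   rho12],
                 [0.0, rho12, 1.0]])
Sigma = np.outer(sd, sd) * corr   # covariance matrix of b_i = (b0, b1, b2)

# One deviation vector per group, drawn jointly (not term-by-term)
b = rng.multivariate_normal(np.zeros(3), Sigma, size=5000)

# Groups with strong x1 responses tend to have weak/opposite x2 responses
emp_rho12 = np.corrcoef(b[:, 1], b[:, 2])[0, 1]
print(round(emp_rho12, 2))  # close to -0.6
```

With the diagonal (||) model you would instead draw each column independently and this correlation would be near zero.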
39,915
Prove that the mean value of a convolution is the sum of the mean values of its individual parts
Like many demonstrations involving convolutions, it comes down to applying Fubini's Theorem. Let's establish notation and assumptions. Let $f$ and $g$ be integrable real-valued functions defined on $\mathbb{R}^n$ having unit integrals (with respect to Lebesgue measure): that is, $$1=\int_{\mathbb{R}^n} f(x) dx = \int_{\mathbb{R}^n} g(x) dx.$$ (For convenience, let's drop the "$\mathbb{R}^n$" subscript, because all integrals will be evaluated over this entire space.) The convolution $f\star g$ is the function defined by $$(f\star g)(x) = \int f(x-y) g(y) dy.$$ (This is guaranteed to exist when $f$ and $g$ are both bounded or whenever $f$ and $g$ are both probability density functions.) The mean of any integrable function is $$E[f] = \int x f(x) dx.$$ It might be infinite or undefined. Solution The question asks to compute $E[f\star g]$ (in the special case where $f$ and $g$ are nonnegative--but this assumption doesn't matter). Apply the definitions of $E$ and $\star$ to obtain a double integral; switch the order of integration according to Fubini's Theorem (which requires assuming $E[f\star g]$ is finite), then substitute $x-y\to u$ and exploit linearity of integration (which is a basic property established immediately whenever any theory of integration is developed). The result will appear because both $f$ and $g$ have unit integrals. 
For those who want to see the details, here they are: $$\eqalign{ E[f\star g] &= \int x (f\star g)(x) dx &\text{Definition of }E\\ &= \int x \left(\int f(x-y) g(y) dy\right) dx &\text{Definition of convolution}\\ &= \int g(y) \left(\int x f(x-y) dx\right) dy &\text{Fubini}\\ &= \int g(y) \left(\int (x-y)f(x-y) + yf(x-y) dx\right) dy&\text{Expand }x=(x-y)+y \\ &= \int g(y) \left(\int (x-y)f(x-y) dx + y\int f(x-y) dx\right) dy &\text{Linearity of integration}\\ &= \int g(y) \left(\int u f(u) du + y \int f(u) du\right) dy &\text{Substitution } x-y\to u\\ &= \int g(y) (E[f] + y(1)) dy &\text{Assumptions about }f\\ &= E[f]\int g(y) dy + \int y g(y) dy &\text{Linearity of integration}\\ &= E[f](1) + E[g] &\text{Assumptions about }g\\ &= E[f] + E[g]. }$$ These calculations are legitimate provided all three expectations $E[f\star g], E[f], E[g]$ are defined and finite. Fubini's Theorem requires only the finiteness of $E[f\star g],$ but the steps at the end (involving linearity) also need the finiteness of the other two expectations.
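A quick numerical sanity check of the result (a NumPy sketch; the two Gaussian densities and the grid spacing are arbitrary choices): discretize $f$ and $g$, convolve, and compare the mean of the convolution with $E[f]+E[g]$.

```python
import numpy as np

dx = 0.01
x = np.arange(-5, 10, dx)

def gaussian_pdf(t, mu, sigma):
    return np.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

f = gaussian_pdf(x, 1.0, 0.5)   # E[f] = 1
g = gaussian_pdf(x, 2.0, 0.5)   # E[g] = 2

# Discrete convolution; the resulting grid starts at x[0] + x[0]
h = np.convolve(f, g) * dx
z = x[0] + x[0] + dx * np.arange(h.size)

mean_h = np.sum(z * h) * dx
print(round(mean_h, 3))  # ≈ 3.0 = E[f] + E[g]
```

The discretization keeps both unit integrals (within rounding), so the mean of the convolution lands on $1 + 2 = 3$ as the theorem says.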
39,916
Prove that the mean value of a convolution is the sum of the mean values of its individual parts
Convolving distributions corresponds to adding independent random variables. Given PDFs $f_X$ and $f_Y$, let $f_Z = f_X * f_Y$ denote their convolution. $f_Z$ is the PDF of a random variable $Z = X + Y$, where $X \sim f_X$, $Y \sim f_Y$, and $X$ and $Y$ are independent. By linearity of expectation, $E[Z] = E[X+Y] = E[X] + E[Y]$. Proof for linearity of expectation: Let $f_{X,Y}$ be the joint distribution of $X$ and $Y$, with marginal distributions $f_X$ and $f_Y$. $X$ and $Y$ need not be independent. The expected value of $X+Y$ is: $$E[X+Y] = \int_{-\infty}^\infty \int_{-\infty}^\infty (x+y) \ f_{X,Y}(x, y) \ dx \ dy$$ Re-arranging terms gives: $$E[X+Y] = \int_{-\infty}^\infty x \underbrace{\int_{-\infty}^\infty f_{X,Y}(x, y) \ dy}_{f_X} \ dx + \int_{-\infty}^\infty y \underbrace{\int_{-\infty}^\infty f_{X,Y}(x, y) \ dx}_{f_Y} dy$$ As indicated under the terms above, integrating over the joint distribution gives the marginal distributions for $X$ and $Y$, so: $$E[X+Y] = \int_{-\infty}^\infty x \ f_X(x) \ dx + \int_{-\infty}^\infty y \ f_Y(y) \ dy$$ This corresponds to the sum of the expected values of $X$ and $Y$: $$E[X+Y] = E[X] + E[Y]$$
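The convolution-as-sum identity is easy to check by sampling (a hedged NumPy sketch; the Exponential(1) choices are arbitrary): sampling $Z = X + Y$ with independent $X$ and $Y$ is the Monte Carlo counterpart of convolving $f_X$ with $f_Y$, and the sample mean of $Z$ matches $E[X] + E[Y]$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

x = rng.exponential(scale=1.0, size=n)  # E[X] = 1
y = rng.exponential(scale=1.0, size=n)  # E[Y] = 1, independent of X

z = x + y  # the density of Z is the convolution f_X * f_Y (a Gamma(2, 1))
print(round(z.mean(), 2))  # ≈ 2.0
```

Note that mean(z) equals mean(x) + mean(y) exactly (up to floating point), which is linearity of expectation applied to the sample.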
39,917
Is PCG random number generator as good as claimed?
Other people have looked at the statistical qualities of the PCG generators and found them to be good; see for example https://lemire.me/blog/2017/08/22/testing-non-cryptographic-random-number-generators-my-results/. On that page you will also find references to other RNGs, all of which are faster than MT. If being published and raw speed are important for you, you could use one of the RNGs from http://xoroshiro.di.unimi.it/, but note that in the + or * versions the lowest two bits are of problematic quality and are the reason why these generators fail some tests. So you should not use rand() % 2 to get a random boolean. You could use a sign test instead. The ** versions should produce high-quality output for all bits though.
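To make the bit-extraction advice concrete, here is a hedged Python sketch of xoroshiro128+ (ported from memory of the original 2016 version with rotation constants 55/14/36; the later 1.0 revision uses different constants, so treat this as illustrative rather than canonical). The point is next_bool: take the boolean from the high bit ("sign test"), not from the value modulo 2:

```python
MASK64 = (1 << 64) - 1

def rotl(x, k):
    """64-bit left rotation."""
    return ((x << k) | (x >> (64 - k))) & MASK64

class Xoroshiro128Plus:
    def __init__(self, s0, s1):
        assert (s0, s1) != (0, 0), "state must not be all zero"
        self.s0, self.s1 = s0 & MASK64, s1 & MASK64

    def next_u64(self):
        s0, s1 = self.s0, self.s1
        result = (s0 + s1) & MASK64   # the '+' scrambler: its lowest bits are the weakest
        s1 ^= s0
        self.s0 = rotl(s0, 55) ^ s1 ^ ((s1 << 14) & MASK64)
        self.s1 = rotl(s1, 36)
        return result

    def next_bool(self):
        # "sign test": use the top bit instead of next_u64() % 2
        return self.next_u64() >> 63

rng = Xoroshiro128Plus(0x9E3779B97F4A7C15, 0xBF58476D1CE4E5B9)
bits = [rng.next_bool() for _ in range(8)]
```

The same high-bit trick applies in C: use `(int64_t)x < 0` or `x >> 63` rather than `x & 1`.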
39,918
Is PCG random number generator as good as claimed?
PCG generators are more or less fine if you use them single-stream in a single application (though they will be very slow if you use a state larger than 64 bits). In all other situations, they should be avoided. You can find a very detailed discussion here: http://prng.di.unimi.it/pcg.php. As for benchmarking: a PCG32 generator fits a single register due to the small state size (64 bits). xoroshiro128+ (or ++) uses more registers, and this takes time (they must be loaded and saved). In microbenchmarks this will not show up because the compiler keeps all the state in registers (which is why I always suggest benchmarking inside an application). Note, however, that a period of $2^{64}$ is really too short for any scientific application. You should have a period at least as large as the square of the number of outputs you are using.
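The square-root rule of thumb is easy to quantify (a small arithmetic sketch; the $10^9$ outputs-per-second figure is just an order-of-magnitude assumption for a fast generator): with a period of $2^{64}$, the output budget is $2^{32}$, about 4.3 billion draws.

```python
period = 2 ** 64
safe_outputs = 2 ** 32            # sqrt of the period
assert safe_outputs ** 2 == period

# At ~1e9 outputs/second, the square-root budget is gone in a few seconds
seconds = safe_outputs / 1e9
print(round(seconds, 1))  # 4.3
```

So a $2^{64}$-period generator running flat out exhausts its "safe" output count in seconds, which is why larger states are preferred for large simulations.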
39,919
Compute standard error from beta, p-value, sample size, and the number of regression parameters
In a linear regression, the $p$-value is calculated from a $t$-value, which is the coefficient divided by its standard error ($t=\hat{\beta}/\mathrm{SE}_{\hat{\beta}}$); inverting this gives $\mathrm{SE}_{\hat{\beta}}=\hat{\beta}/|t|$. The degrees of freedom used in the $t$-distribution for calculating the $p$-value are the residual degrees of freedom. These are the total degrees of freedom of the variance, $N-1$, minus the model degrees of freedom, $k-1$, where $k$ is the number of parameters including the intercept; so the residual degrees of freedom are $(N-1)-(k-1) = N-k$. From this, you can use the quantile function of the $t$-distribution to recover the $t$-value and hence the standard error. Example: Assume that $\hat{\beta}=5.47, p = 0.004, N = 100, k = 4$. The residual degrees of freedom are $100-4 = 96$. We assume that the $p$-value is two-sided. Using R, the calculations are:
t_val <- qt(0.004/2, df = 96)  # t-value from the quantile function
5.47/abs(t_val)                # standard error
1.854659
So the standard error was 1.85.
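The same computation in Python, as a cross-check (a sketch using scipy.stats.t.ppf, which computes the same $t$ quantile as R's qt):

```python
from scipy.stats import t

beta, p, N, k = 5.47, 0.004, 100, 4
df = N - k                  # residual degrees of freedom: 96

t_val = t.ppf(p / 2, df)    # two-sided p-value -> p/2 in each tail
se = beta / abs(t_val)
print(round(se, 6))  # ≈ 1.854659
```

Either route works because the $p$-value, the coefficient, and the residual degrees of freedom jointly determine the standard error.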
39,920
Geometric distribution with random, varying success probability
Your algorithm independently draws a sequence of probabilities from random variables $P_i$ with distributions $F_i.$ Since these distributions must be bounded between $0$ and $1$ they all have expectations $p_i.$ It then observes a parallel sequence of independent Bernoulli$(P_i)$ variates $X_i,$ stopping the first time an outcome of $1$ is observed, and returns the number of zeros encountered along the way. Let's call this random variable $Y.$ Let $k$ be a possible value of $Y.$ Consider the survival function of $Y,$ given by the chance that $Y \ge k.$ Because (conditional on $(P)=(P_1,P_2,\ldots)$) the $X_i$ are independent, the chance that $Y$ equals or exceeds $k$ for any given set of probabilities is $$\eqalign{ \Pr(Y \ge k \mid (P)) &= \Pr(X_1=0 \mid P_1) \Pr(X_2=0\mid P_2) \cdots \Pr(X_k=0\mid P_k)\\ &=(1-P_1)(1-P_2)\cdots(1-P_k). }$$ Because the $P_i$ are independent, the expectation (taken over the process $(P)$) is $$\Pr(Y \ge k) = \mathbb{E}\left[\Pr\left(Y\ge k\mid (P)\right)\right] = \prod_{i=1}^k (1 - \mathbb{E}[P_i])=\prod_{i=1}^k (1-p_i).$$ Suppose now that all the expected probabilities $p_i$ are the same, equal to some common probability $\bar p.$ The preceding simplifies to $$\Pr(Y \ge k) = (1-\bar p)^k.$$ That's a Geometric distribution. It may be instructive to contrast this procedure with the very similar one in which you draw $\bar p$ at the outset of sampling the $X_i,$ effectively generating one realization of a Geometric distribution of parameter $\bar p,$ and then repeat this for new, independent values of $\bar p.$ The result is not a sample from a Geometric distribution, as you can see by considering this modification of the code:
g[] := RandomVariate[GeometricDistribution[f[]]]
Let's run it:
SeedRandom[17]; ress = Table[g[], {i, 1000}]; Histogram[ress]
The tail is too long to be Geometric. 
Replacing the While loop in g by a direct generation of $Y$ via GeometricDistribution was essential because sooner or later f[] will produce a very tiny value of p and the loop will go on for an extremely long time before a success ($1$) is observed. Using a loop becomes too inefficient--but it also helps us understand better how the modified process differs from the original one. In the original it's unlikely that the loop will go on for very long (assuming the values of f aren't concentrated near $0$), because different probabilities are generated at each step, thereby assuring it won't be caught trying to generate an extremely unlikely success.
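The same contrast in a hedged NumPy sketch (the Uniform(0.2, 0.8) choice for f is arbitrary, with mean $\bar p = 0.5$): drawing a fresh $p$ at every Bernoulli trial yields a Geometric(0.5) failure count, while drawing $p$ once per run yields a longer-tailed mixture with a larger mean.

```python
import numpy as np

rng = np.random.default_rng(17)
n = 100_000

def failures_fresh_p():
    # New p from f at every trial -> Geometric(E[p]) by the argument above
    k = 0
    while True:
        p = rng.uniform(0.2, 0.8)
        if rng.random() < p:
            return k
        k += 1

def failures_fixed_p():
    p = rng.uniform(0.2, 0.8)    # one p per run -> a mixture of Geometrics
    return rng.geometric(p) - 1  # numpy counts trials; subtract the success

a = np.array([failures_fresh_p() for _ in range(n)])
b = np.array([failures_fixed_p() for _ in range(n)])

# fresh-p mean ≈ (1 - 0.5)/0.5 = 1; fixed-p mean = E[1/p] - 1 ≈ 1.31
print(round(a.mean(), 2), round(b.mean(), 2))
```

The fixed-$p$ process is the mixture: its mean exceeds the Geometric(0.5) mean because small values of $p$ occasionally produce very long runs.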
39,921
Geometric distribution with random, varying success probability
"My main question is: Is X still distributed Geometric?" No. Indeed, the mean of the resulting distribution is not the same as the mean of a geometric with parameter equal to $E(p)$; this is because the mean of the geometric is not linear in $p$. The mean of the resulting distribution would correspond instead to the harmonic mean of the distribution of $p$ (i.e. to the arithmetic mean of the distribution of the means of the geometric distributions). "But my probability skills won't let me confirm or reject this." Simulation is sufficient to see it. Try simulating, say, 100000 values from either a geom(1/3) or a geom(1/9) with equal chance and see that although the mean is equivalent to a geom(1/6), the proportion of 1's (or of 0's if you index your geometric from 0) is wrong, and the sd is wrong. Instead of the probabilities decreasing geometrically (log-probabilities decreasing linearly), they initially decrease at a similar rate to the one with the smaller mean, then bend around, and eventually in the far tail they decrease at a similar rate to the one with the larger mean. However, it's relatively easy to do the algebraic calculations for the first point (or for the variance) for a simple case like that. "A complementary question is whether this is a well-known problem (or the solution uses a well-known theorem) with a name. I suspect the answer may sound obvious to someone with a better statistics background than me." Sure, people do this kind of thing with distributions; it has all manner of uses. Having a distribution on a parameter is a form of mixture distribution. https://en.wikipedia.org/wiki/Mixture_distribution
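The suggested simulation in NumPy (a hedged sketch; counts are failures before the first success, and numpy's geometric counts trials, hence the `- 1`): the 50/50 mixture of Geometric(1/3) and Geometric(1/9) matches a Geometric(1/6) in mean but not in the proportion of zeros or in the sd.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

which = rng.random(n) < 0.5
p = np.where(which, 1/3, 1/9)         # each draw uses one of the two components
mix = rng.geometric(p) - 1            # the mixture: failures before first success
ref = rng.geometric(1/6, size=n) - 1  # a plain Geometric(1/6) for comparison

# Same mean (both ≈ 5), different zero proportion (≈ 2/9 vs 1/6) and sd
print(round(mix.mean(), 1), round(ref.mean(), 1))
print(round((mix == 0).mean(), 2), round((ref == 0).mean(), 2))
print(round(mix.std(), 1), round(ref.std(), 1))
```

The matching means come from the harmonic mean: $2/(3+9) = 1/6$; everything else about the two distributions differs.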
39,922
Geometric distribution with random, varying success probability
The most common continuous distribution for a random variable with support between 0 and 1 is the beta distribution. When the "p" in the geometric distribution is a beta variable, you will end up with a geometric mixture distribution called the "beta-geometric" distribution, whose parameters can easily be derived. However, p does not have to stick to the 0-1 range if it is a variable. Thus if p were to take any other distribution, say exponential, you will obtain another geometric mixture distribution, which we call the exponential-geometric mixture distribution. The parameters of this distribution can also be easily derived. For references, search the University of Nairobi repository for many of these mixture distributions.
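A hedged NumPy sketch of the beta-geometric construction (the Beta(2, 3) shape values are arbitrary): drawing $p \sim \mathrm{Beta}(a, b)$ and then a geometric count given $p$ samples the mixture directly; for instance, the probability of zero failures before the first success is $E[p] = a/(a+b)$.

```python
import numpy as np

rng = np.random.default_rng(42)
a, b, n = 2.0, 3.0, 200_000

p = rng.beta(a, b, size=n)   # random success probability per run
y = rng.geometric(p) - 1     # failures before first success, given p

frac_zero = (y == 0).mean()
print(round(frac_zero, 2))   # ≈ 0.4 = a / (a + b)
```

The same two-stage sampling idea works for any mixing distribution on the parameter, which is what makes these geometric mixtures easy to study by simulation.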
39,923
How to extract specific information from text using Machine learning?
The problem you pose here is called named entity recognition (NER), or named entity extraction. There are multiple technologies (not necessarily neural networks) that can be used for this problem, and some of them are quite mature. See e.g. this repo for an easy-to-plug-in solution, or try to apply the ne_chunk_sents function from the NLTK module in Python.
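As a toy illustration of the entity-extraction idea only (a crude capitalization heuristic of my own, not a substitute for a trained NER model such as NLTK's), grouping runs of capitalized tokens already surfaces many candidate entities:

```python
import re

def naive_entities(text):
    """Group consecutive capitalized tokens into candidate named entities.
    A deliberately crude stand-in for a real NER model."""
    tokens = re.findall(r"[A-Za-z']+", text)
    entities, current = [], []
    for i, tok in enumerate(tokens):
        # Skip the sentence-initial capital, which is usually an ordinary word
        if tok[0].isupper() and i > 0:
            current.append(tok)
        else:
            if current:
                entities.append(" ".join(current))
            current = []
    if current:
        entities.append(" ".join(current))
    return entities

print(naive_entities("Yesterday Angela Merkel met Emmanuel Macron in Berlin."))
# → ['Angela Merkel', 'Emmanuel Macron', 'Berlin']
```

A real NER system adds what this heuristic lacks: entity type labels (PERSON, GPE, ORG, ...) and robustness to casing.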
39,924
How to extract specific information from text using Machine learning?
I think you could look into dependency parsing. Your fact tuples could be extracted from edges in the dependency graph. PS1: If you want to do something on NLP you should check cs224n and not cs231n. I also recall cs224n contains a section on DL for dependency parsing. PS2: The dependency tree is taken from the Stanford Neural Network Dependency Parser.
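To sketch how fact tuples fall out of dependency edges, here is a minimal extractor over a hand-written parse (the (word, head index, relation) format and the example sentence are my own illustration; a real parser such as the Stanford one would produce the parse for you):

```python
def extract_svo(parse):
    """Extract (subject, verb, object) triples from a dependency parse.
    `parse` is a list of (word, head_index, relation) tuples with
    1-indexed heads; head 0 means the root."""
    triples = []
    for word, head, rel in parse:
        if rel == "nsubj":
            verb = parse[head - 1][0]
            # Collect direct objects sharing the same verbal head
            for obj, h, r in parse:
                if h == head and r == "obj":
                    triples.append((word, verb, obj))
    return triples

# Hand-written parse of "Google acquired DeepMind"
parse = [("Google", 2, "nsubj"), ("acquired", 0, "root"), ("DeepMind", 2, "obj")]
print(extract_svo(parse))  # → [('Google', 'acquired', 'DeepMind')]
```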
39,925
Beta regression and regression diagnostics. Do we need to check for normality and other diagnostics?
Raw residuals will not necessarily be normally distributed. Here are some simulated data, following the Ferrari & Cribari-Neto (2004) reparameterization of the beta distribution:

> set.seed(1839)
> library(betareg)
> inv_logit <- function(logit) exp(logit) / (1 + exp(logit))
> n <- 500
> x <- rnorm(n)
> mu <- inv_logit(-5 + .5 * x)
> phi <- exp(2 + .3 * x)
> p <- mu * phi
> q <- phi - (mu * phi)
> y <- rbeta(n, p, q)
> model <- betareg(y ~ x | x)
> summary(model)

Call:
betareg(formula = y ~ x | x)

Standardized weighted residuals 2:
    Min      1Q  Median      3Q     Max 
-4.8350 -0.4382  0.2585  0.6837  1.1680 

Coefficients (mean model with logit link):
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  -5.1468     0.2054 -25.054  < 2e-16 ***
x             0.7231     0.1713   4.221 2.43e-05 ***

Phi coefficients (precision model with log link):
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)   2.1800     0.2091  10.427   <2e-16 ***
x             0.1336     0.1761   0.759    0.448    
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Type of estimator: ML (maximum likelihood)
Log-likelihood: 1.365e+04 on 4 Df
Pseudo R-squared: 0.2678
Number of iterations: 38 (BFGS) + 2 (Fisher scoring)

Note that this model recovers the parameters (-5, .5, 2, and .3) reasonably well. But let's look at the raw residuals (i.e., observed - fitted):

> qqnorm(model$residuals)
> qqline(model$residuals)

Even though I simulated the data exactly how the model works, the residuals are still non-normal. Why? I think the best introduction to generalized linear models as a whole—at least for me—is "Generalized Linear Models" by Coxe, West, and Aiken, Chapter 3 of The Oxford Handbook of Quantitative Methods (Vol. II), edited by Todd Little and published in 2013. They make it clear that each generalized linear regression model is different—in part—because of how it models the residuals. They note that each generalized linear regression, including beta regression, has three parts. One of these is the "random portion," which "defines the error distribution of the outcome variable." In Table 3.1 on page 31, they show that, for beta regression, the error distribution is the beta distribution. That is, the outcome $Y$, conditional on parameters $p$ and $q$ (as defined in the reparameterization by Ferrari & Cribari-Neto), is beta distributed. So we should not expect the raw residuals of a beta regression to be normally distributed; they should be beta distributed.

That being said, it appears to me from reading, asking questions on this site, and emailing with some folks that do beta regression research, that diagnostics for beta regression are an area of active research. There are no universally accepted answers yet—at least from my reading of the literature. There are a number of papers where people derive different types of residuals for beta regression that are meant to be normally distributed, so that it is easier to do diagnostic checks. There are also some papers looking at influence statistics. I'm not sure what is implemented in Stata, but here are some papers I suggest reading and doing forward- and backward-searches on:

Pereira (2017). On quantile residuals in beta regression. Communications in Statistics - Simulation and Computation. doi: 10.1080/03610918.2017.1381740. See https://arxiv.org/abs/1704.02917 for a pre-print.
Espinheira, Santos, & Cribari-Neto (2017). On nonlinear beta regression residuals. Biometrical Journal, 59. doi: 10.1002/bimj.201600136
Espinheira, Ferrari, & Cribari-Neto (2008). On beta regression residuals. Journal of Applied Statistics, 35. doi: 10.1080/02664760701834931
Espinheira, Ferrari, & Cribari-Neto (2008). Influence diagnostics in beta regression. Computational Statistics and Data Analysis, 52. doi: 10.1016/j.csda.2008.02.028

I myself—being someone who uses beta regression, has read a bit about it, but is not an expert on it—am not quite sold that there is any one (and implemented-in-software) method of doing robust diagnostic checks for beta regression. There are some beta regression experts here on CV who may correct me if I am wrong.

Edit, for a quick follow-up: the betareg package suggests using the "sweighted2" residual, as noted in the JSS article for the betareg package, but this still comes up non-normal (although better!) in the model I created above:

> qqnorm(residuals(model, type = "sweighted2"))
> qqline(residuals(model, type = "sweighted2"))
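Following up on the quantile-residual idea (e.g. Pereira 2017), here is a Python sketch of the construction under the same simulation design, using the true parameters in place of fitted ones for simplicity (my own illustration): push each observation through its beta CDF and then the standard normal quantile function, and the result should be approximately standard normal when the model is right.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1839)
n = 500
x = rng.normal(size=n)
mu = 1 / (1 + np.exp(-(-5 + 0.5 * x)))   # inverse logit of the mean model
phi = np.exp(2 + 0.3 * x)                 # precision model
p, q = mu * phi, phi - mu * phi
y = rng.beta(p, q)

# Quantile residuals: F(y; p, q) mapped through the standard normal quantile.
# We cheat here and use the true parameters instead of fitted ones.
r = stats.norm.ppf(stats.beta.cdf(y, p, q))
print(r.mean(), r.std(ddof=1))  # ≈ 0 and ≈ 1 when the beta model is correct
```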
39,926
Linear regression, independent variable stationarity
What you assume in a linear regression model is that the error term is a white noise process and, therefore, it must be stationary. There is no assumption that either the independent or dependent variables are stationary. However, consider the following simple linear regression model for time series data: $$Y_t = a + b X_t + \varepsilon_t$$ If $Y_t$ is stationary but $X_t$ is not, then if you rearrange the equation: $$Y_t - \varepsilon_t = a + bX_t$$ Then, the left-hand side is stationary, but the right-hand side is not, so the model can't be correct. If, instead, both variables are not stationary, then: $$Y_t - bX_t = a + \varepsilon_t$$ The right-hand side is stationary, but the left-hand side may or may not be. If it's not, then the model is wrong. It's possible for it to be stationary, as in a cointegration model for example, but it need not be. Violating the assumption about the stationarity of the error process can lead to all sorts of problems, like spurious regressions where what appears to be a significant coefficient is frequently really not at all significant.
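The spurious-regression danger is easy to demonstrate by simulation (my own illustration, not from the original answer): regress one random walk on another, completely independent one, and the usual t-test rejects the null far more often than the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(0)

def spurious_t(n=200):
    """t-statistic for the slope when regressing one random walk
    on another, completely independent random walk."""
    y = np.cumsum(rng.normal(size=n))
    x = np.cumsum(rng.normal(size=n))
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

# Under a correct model, |t| > 2 should happen about 5% of the time;
# with two unrelated nonstationary series it happens most of the time.
rejections = np.mean([abs(spurious_t()) > 2 for _ in range(200)])
print(f"Fraction of 'significant' slopes: {rejections:.2f}")
```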
39,927
What if Markov chain does not converge in a reasonable amount of time?
To answer your original question: I am not a huge fan of using Gelman-Rubin, mainly because it is somewhat handwavy for my taste. However, if you still want to use it, maybe try a multivariate Gelman-Rubin, since it is possible the joint posterior of the weights has a complicated dependence structure that the univariate diagnostic is not able to capture. See the answer here. I would suggest first looking at a trace plot for the weights that are slowly converging. Maybe it is a problem of multimodality etc. that Gelman-Rubin is not able to catch. HMC is known to converge fairly quickly in many situations. Maybe focus on the quality of the estimates obtained by analysing the variance in the estimates. You can find a discussion of the methods here. To actually improve convergence of the chain, you can try different starting values for the slowly converging chains. You can also maybe tweak the HMC wherever possible. It is also possible that HMC just doesn't work here, and a variant of the Metropolis-Hastings algorithm might work better. I won't be able to say anything more without knowing more about the problem.
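For reference, the univariate Gelman-Rubin statistic is simple to compute by hand; a NumPy sketch (my own, using the standard between/within-chain variance formula) shows how a single stuck chain inflates it:

```python
import numpy as np

def gelman_rubin(chains):
    """Univariate potential scale reduction factor (R-hat).
    `chains` has shape (m, n): m chains with n draws each."""
    n = chains.shape[1]
    W = chains.var(axis=1, ddof=1).mean()        # mean within-chain variance
    var_means = chains.mean(axis=1).var(ddof=1)  # variance of the chain means (B/n)
    var_hat = (n - 1) / n * W + var_means        # pooled posterior-variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(1)
mixed = rng.normal(size=(4, 1000))                      # four well-mixed chains
stuck = mixed + np.array([[0.0], [0.0], [0.0], [5.0]])  # one chain stuck elsewhere
print(gelman_rubin(mixed))  # close to 1
print(gelman_rubin(stuck))  # far above the usual 1.1 threshold
```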
39,928
What if Markov chain does not converge in a reasonable amount of time?
This was touched on by @Greenparker, but the easiest way to speed up a slowly converging model is to pick the right starting values. This is counterintuitive: if you knew the right parameter values in the first place, why would you fit the model? But if you can fit a similar frequentist model quickly, use those parameter estimates as starting values. Again, this assumes that your chains are eventually converging. If they do not converge at all, then you may have a parameter-redundancy problem, such that there are multiple combinations of parameters that are equally plausible, resulting in chains that will never converge.
39,929
When does deep learning fail?
One can generally think of two types of hardness results in machine learning: information-theoretic hardness in the context of statistical learning (namely, giving a lower bound on the minimal number of examples required to learn) and algorithmic hardness (i.e., a bad algorithmic choice means that the optimization becomes impossible). In the context of deep learning, discussing hardness is tricky, since we actually know very little in terms of why deep learning works theoretically. (Recall: the optimization problem solved in deep learning is that of minimizing a high-dimensional, highly non-convex function, and is known to be NP-hard in general, i.e., there are no guarantees w.r.t. reaching the global minimum. And yet in practice, practitioners have used variants of SGD to solve many problems very well. There have been some recent advances in giving a justifiable answer as to why this is so, but this is outside the scope of your question.) One very nice example of algorithmic hardness in deep learning is trying to learn problems in which the gradient is non-informative. Deep learning currently uses some form of SGD to update the weights of the network. For example, mini-batch GD computes the gradient of the cost function over a random sample of $b$ examples w.r.t. the parameters $\theta$: $ \theta_{t+1} = \theta_t - \alpha_t \cdot \nabla_\theta J(\theta; x^{(i:i+b)},y^{(i:i+b)})$ In other words, DL optimization tries to globally optimize a function by using local gradient information; this suggests that if a learning problem is characterized by non-informative gradients, then no deep learning architecture will be able to learn it.
Learning random parities is the following learning problem: after choosing a vector $\boldsymbol{v^*} \in \left\{ 0,1\right\}^d $, the goal is to train a predictor mapping $\boldsymbol{x}\in\left\{ 0,1\right\} ^{d}$ to $y=\left(-1\right)^{\left\langle \boldsymbol{x,v^{*}}\right\rangle }$, where $\boldsymbol{x}$ is uniformly distributed. In other words, we're trying to learn a mapping that determines if the number of 1's in a certain subset of coordinates of $\boldsymbol{x}$ (indicated by $\boldsymbol{v^*}$) is even or odd. In "Failures of Gradient-Based Deep Learning" (Shalev-Shwartz, Shamir, & Shammah, 2017) the authors prove that this problem (and more generally, every linear function composed with a periodic one) suffers from non-informative gradients, thus rendering the optimization problem difficult. They also demonstrate this empirically, by measuring the accuracy as a function of the number of training iterations, for various input dimensions. The network used here is one fully connected layer of width $10d$ with ReLU activations, and a fully connected output layer with linear activation and a single unit. (The width is chosen so as to ensure that the required parity function is indeed realized by such a network.) Q: Why is it that learning parity only becomes difficult at around $d=30$?
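A quick way to see why the gradient carries no signal here (my own illustration, not from the paper): for a parity over many bits, the label is uncorrelated with any individual input coordinate, so the coordinate-wise statistics a gradient aggregates look like pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 100_000

v_star = np.ones(d, dtype=int)   # hidden parity mask: here, all d coordinates
X = rng.integers(0, 2, size=(n, d))
y = (-1) ** (X @ v_star)         # parity label in {-1, +1}

# Sample correlation of the label with each single input bit:
# all near zero, even though y is a deterministic function of X.
corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(d)]
print(max(corrs))  # on the order of 1/sqrt(n), i.e. indistinguishable from noise
```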
39,930
When does deep learning fail?
It fails when you don't impose the right structure on the problem. Convolutional neural networks work because they assume that pixels which are close to each other are related, so it makes sense to apply spatial convolutions to your features. In so doing, you've reduced the hypothesis search space dramatically, which means deep learning is more likely to arrive at an optimal solution. If you apply deep learning to a problem where the features aren't amenable to spatial/temporal convolutions, then deep learning will fail, because it doesn't make sense to sum up certain features and apply functions to the summation, which is what neural networks do. If someone can think of an example where deep learning has been successfully applied to data which is not images or audio (or spatial/temporal data), I would be delighted to retract this answer.
39,931
Cook's distance vs. hat values
Cook's distance is given by the formula:

$D_{i} = \frac{\sum_{j = 1}^{n} (\hat Y_j - \hat Y_{j(i)})^2}{p\,MSE}$

Where:

$\hat Y_j$ is the fitted value for the j-th observation;
$\hat Y_{j(i)}$ is the fitted value for the j-th observation when the i-th observation is excluded from the data used to fit the model;
p is the number of parameters in the model;
MSE is the mean squared error of the model.

This means that Cook's distance measures the influence of each observation on the model, or "what would happen if each observation weren't in the model", and it's important because it's one way of detecting outliers that especially affect the regression line. When we don't look for and treat potential outliers in our data, it is possible that the fitted coefficients for the model might not be the most representative, or appropriate, leading to incorrect inference.

The hat values, by contrast, are the diagonal elements $h_{ii}$ of the hat matrix $H = X(X^{T}X)^{-1}X^{T}$, the matrix that maps the observed responses to the fitted values ($\hat Y = HY$); they measure leverage, i.e., how unusual an observation's predictor values are. They are quite different from Cook's distance.
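As a terminology aside: in most software (e.g. R's hatvalues), the hat values are the diagonal of the hat matrix $H = X(X^{T}X)^{-1}X^{T}$, a property of the design matrix alone. A small NumPy sketch with made-up data shows that a point far out in $x$ carries a large one:

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.append(rng.normal(size=20), 10.0)   # last point far out in x
n = x.size
X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix: y_hat = H y

h = np.diag(H)    # hat values (leverages) -- they do not depend on y at all
print(h[-1])      # the outlying-x point carries by far the largest leverage
print(np.isclose(h.sum(), 2))  # trace(H) equals the number of parameters p
```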
39,932
Cook's distance vs. hat values
Cook's distance shows how much the whole regression model would change if $(x_i, y_i)$ were removed. I am not quite clear what you mean by "hat value". Do you mean $e_i = y_i - \hat{y}_i$, or $h_{ii}$ in the hat matrix $H$ (i.e. leverage)? Either way they are different from Cook's distance. Note that Cook's distance takes the form $$D_i = \frac{e_{i}^{2}}{s^{2} p}\left[\frac{h_{ii}}{(1-h_{ii})^2}\right]$$ so it's related to both the residual $e_i$ and the leverage $h_{ii}$. A large $D_i$ could be due to a large $e_i$ or a large $h_{ii}$, or both. Possible reasons for each:

large residual $e_i$: $y_i$ is far from its fitted value (possibly an outlier);
large leverage $h_{ii}$: $x_i$ is far from the other $x_{j}$'s (an influential point due to the value of $x$).
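The equivalence between this closed form and the direct "remove the point and refit" definition can be checked numerically; here is a NumPy sketch with simulated data (my own illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = rng.normal(size=n)
y = 1 + 2 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)                   # leverages h_ii
e = y - H @ y                    # residuals e_i
p = X.shape[1]
s2 = e @ e / (n - p)             # MSE

# Closed form: D_i = e_i^2 / (s^2 p) * h_ii / (1 - h_ii)^2
D = e**2 / (s2 * p) * h / (1 - h) ** 2

# Direct definition: refit without observation i, compare all fitted values
D_direct = np.empty(n)
yhat = H @ y
for i in range(n):
    keep = np.arange(n) != i
    beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    D_direct[i] = np.sum((yhat - X @ beta_i) ** 2) / (p * s2)

print(np.allclose(D, D_direct))  # True: the two formulas agree
```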
39,933
Classification algorithm based on average distances from a test point to the points in each class
It is a nice idea, but has one major flaw - it is too sensitive to the spread of the data. To clarify the question, given $k$ disjoint clusters $ C_1, \ldots, C_k $, you ask whether it makes sense to classify a new sample $ x^* $ according to the rule $$ \arg\min_{i\in \left[k\right]} \frac{1}{\left| C_i \right|} \sum_{x \in C_i } \left\Vert x - x^*\right\Vert $$ Note that this rule is indeed similar to rules that exist as well-known algorithms, like $$ \arg\min_{i\in \left[k\right]} \min_{x \in C_i } \left\Vert x- x^*\right\Vert $$ which is in fact 1-Nearest-Neighbors, or $$ \arg\min_{i\in \left[k\right]} \left\Vert \frac{1}{\left| C_i \right|} \sum_{x \in C_i }x - x^*\right\Vert$$ which in sklearn is called NearestCentroid, but is used by k-Means for cluster assignment and can be seen in LDA in the case where the underlying covariance matrix is the identity (up to scalar). (Note that in general, LDA also takes into account the shape [spread + orientation] of the clusters). In many cases, the proposed rule will behave similarly to NearestCentroid, especially if the clusters are well separated and have similar variance (in such a case, I think it is possible to bound the average distance in terms of the distance from the centroid). However, as it averages distances over all the points in the cluster, it is blatantly biased toward low-variance clusters. I believe this is the true source of the mislabelling you noticed. To illustrate this effect, we can plot the decision boundary of our classifiers. Plots are shamelessly based on sklearn's example. In the preceding plot, I generated two datasets from different normal distributions. The violet came from $$ \mathcal{N}\left(\begin{pmatrix}0 \\ 3\end{pmatrix}, \begin{pmatrix}10 & 2 \\ 2 & 1\end{pmatrix}^2\right)$$ and the yellow came from $$ \mathcal{N}\left(\begin{pmatrix}0 \\ -3\end{pmatrix}, \begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}\right)$$ Then, each point in the space is colored according to the rule. 
The line separating the regions is the decision boundary. There are 200 points in the violet cluster and 50 in the yellow cluster. The + marks the centroid of each cluster. Note that the violet cluster is not aligned with the axes in order to emphasize the difference between LDA and Nearest Centroid.
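The bias can be reproduced in a few lines. The sketch below (Python/numpy, my own simplified version of the plotted experiment, using spherical clusters rather than the covariances above) places a test point closer to the wide cluster's centroid; the average-distance rule nevertheless assigns it to the tight cluster:

```python
import numpy as np

rng = np.random.default_rng(2)
wide  = rng.normal(loc=[0.0,  3.0], scale=4.0, size=(200, 2))  # high-variance cluster
tight = rng.normal(loc=[0.0, -3.0], scale=0.5, size=(50, 2))   # low-variance cluster

def avg_dist(cluster, x):          # score under the proposed rule
    return np.linalg.norm(cluster - x, axis=1).mean()

def centroid_dist(cluster, x):     # score under NearestCentroid
    return np.linalg.norm(cluster.mean(axis=0) - x)

x_star = np.array([0.0, 0.75])     # closer to the wide centroid at (0, 3)

# Centroid rule assigns x_star to the wide cluster...
print(centroid_dist(wide, x_star) < centroid_dist(tight, x_star))
# ...but the average-distance rule prefers the tight cluster, because
# the spread of 'wide' inflates its average distance.
print(avg_dist(wide, x_star) > avg_dist(tight, x_star))
```

The spread enters through Jensen's gap: averaging $\Vert x - x^*\Vert$ over a diffuse cluster is strictly larger than the distance to its mean.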
39,934
Can you calculate $R^2$ from correlation coefficients in multiple linear regression?
The coefficient-of-determination can be determined from the correlations: Consider a multiple linear regression with $m$ explanatory vectors and an intercept term. First we define the correlation values for all the variables in the problem $r_i = \mathbb{Corr}(\mathbf{y},\mathbf{x}_i)$ and $r_{i,j} = \mathbb{Corr}(\mathbf{x}_i,\mathbf{x}_j)$. Now define the goodness of fit vector and design correlation matrix respectively by: $$\boldsymbol{r}_{\mathbf{y},\mathbf{x}} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_m \end{bmatrix} \quad \quad \quad \boldsymbol{r}_{\mathbf{x},\mathbf{x}} = \begin{bmatrix} r_{1,1} & r_{1,2} & \cdots & r_{1,m} \\ r_{2,1} & r_{2,2} & \cdots & r_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m,1} & r_{m,2} & \cdots & r_{m,m} \\ \end{bmatrix}.$$ The goodness-of-fit vector contains the correlations between the response vector and each of the explanatory vectors. The design correlation matrix contains the correlations between each pair of explanatory vectors. (Please note that these names are something I have made up, since neither matrix has a standard name that I am aware of. The first vector measures the goodness-of-fit of simple regressions on each of the individual explanatory vectors, which is why I use this name.) Now, with a bit of linear algebra it can be shown that the coefficient-of-determination for the multiple linear regression is given by the following quadratic form: $$R^2 = \boldsymbol{r}_{\mathbf{y},\mathbf{x}}^\text{T} \boldsymbol{r}_{\mathbf{x},\mathbf{x}}^{-1} \boldsymbol{r}_{\mathbf{y},\mathbf{x}}.$$ This form for the coefficient-of-determination is not all that well-known to statistical practitioners, but it is a very useful result, and assists in framing the goodness-of-fit of the multiple linear regression in its most fundamental terms. The square-root of the coefficient of determination gives us the multiple correlation coefficient, which is a multivariate extension of the absolute correlation. 
In the special case where $m=1$ you get $R^2 = r_1^2$ so that the coefficient-of-determination is the square of the correlation between the response vector and the (single) explanatory variable. As you can see, this form for the coefficient-of-determination for the multiple linear regression is framed fully in terms of correlations between the pairs of vectors going into the regression. This means that if you have a matrix of the pairwise correlations between all the vectors in the multiple regression (the response vector and each of the explanatory vectors) then you can directly determine the coefficient-of-determination without fitting the regression model. This result is more commonly presented in multivariate analysis (see e.g., Mardia, Kent and Bibby 1979, p. 168). The coefficient-of-determination is not generally equal to the sum of individual coefficients: In the case where all the explanatory vectors are uncorrelated with each other you get $\boldsymbol{r}_{\mathbf{x},\mathbf{x}} = \boldsymbol{I}$ which means that the above quadratic form reduces to $R^2 = \sum r_i^2$. However, this is a special case that only arises in practice in cases where the explanatory variables are set by the researcher. The explanatory variables are not generally uncorrelated, and so the coefficient-of-determination is determined by the above quadratic form. It is also useful to note that the coefficient-of-determination in a multiple linear regression can be above or below the sum of the individual coefficients-of-determination for corresponding simple linear regressions. Usually it is below this sum (since the total explanatory power is usually less than the sum of its parts) but sometimes it is above this sum.
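This identity is easy to verify numerically. The following Python/numpy sketch (my own, on simulated data) computes $R^2$ once from an ordinary least-squares fit and once from the quadratic form in the correlations alone:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 200, 3
X = rng.normal(size=(n, m))
X[:, 2] += 0.5 * X[:, 0]                      # make the regressors correlated
y = 1.0 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=n)

# R^2 from the fitted regression (with intercept)
Z = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ beta
R2_fit = 1.0 - resid.var() / y.var()

# R^2 from the pairwise correlations only: r' R_xx^{-1} r
r_yx = np.array([np.corrcoef(y, X[:, j])[0, 1] for j in range(m)])
r_xx = np.corrcoef(X, rowvar=False)
R2_corr = r_yx @ np.linalg.solve(r_xx, r_yx)

print(abs(R2_fit - R2_corr) < 1e-10)          # True: the two agree exactly
print(R2_corr, np.sum(r_yx ** 2))             # not equal: regressors are correlated
```

The last line also illustrates the point below: with correlated regressors, $R^2$ is not the sum of the squared individual correlations.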
39,935
Can you calculate $R^2$ from correlation coefficients in multiple linear regression?
If you have both the errors (residuals) from the regression and the dependent-variable values, you can calculate $R^2$ as: $$1 - ({\rm error\ variance} / {\rm dependent\ variable\ variance})$$ Using the above, neither the number of independent variables nor their correlations is needed, which is quite handy for multiple linear regression.
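A quick numerical check of this shortcut (a Python/numpy sketch of mine, on simulated data):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
errors = y - X @ beta

# Needs only the errors and y -- no regressor count or correlations.
R2 = 1.0 - errors.var() / y.var()

# Same value as the textbook 1 - SSE/SST (the errors have mean zero
# because the model contains an intercept).
R2_textbook = 1.0 - np.sum(errors ** 2) / np.sum((y - y.mean()) ** 2)
print(abs(R2 - R2_textbook) < 1e-12)   # True
```

Note the caveat: the variance-ratio form agrees with $1 - SSE/SST$ only when the residuals have mean zero, i.e. when the model includes an intercept.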
39,936
How do I perform a statistical test for a Difference-in-Differences analysis?
The difference in differences is what is called an interaction in statistics (as Dimitriy Masterov has already pointed out). You want to test whether the time effect is different when you intervene compared with when you don't. Your data is most naturally modelled as binomial, i.e., the number of top scores out of total people surveyed in each area at each time follows a binomial distribution, assuming that all the customers respond independently. The standard statistical method for testing for interaction with binomial data is to run a binomial logistic regression. In the R language, the code for this is as follows. First input the data: > NTopScore <- c(64,82,44,60) > N <- c(130,118,110,100) > Area <- factor(c("A","A","B","B")) > Time <- factor(c(0,1,0,1)) > Proportion <- NTopScore / N Then fit the logistic regression. In R this is done by running a generalized linear model, and telling R that the data should be treated as binomial: > fit <- glm(Proportion~Area*Time, family=binomial, weights=N) > summary(fit) Call: glm(formula = Proportion ~ Area * Time, family = binomial, weights = N) Deviance Residuals: [1] 0 0 0 0 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -0.03077 0.17543 -0.175 0.86076 AreaB -0.37469 0.26202 -1.430 0.15271 Time1 0.85397 0.26599 3.211 0.00132 ** AreaB:Time1 -0.04304 0.38768 -0.111 0.91160 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 2.3047e+01 on 3 degrees of freedom Residual deviance: 7.1054e-15 on 0 degrees of freedom AIC: 28.523 Number of Fisher Scoring iterations: 3 We see that the p-value for the interaction (difference in differences) is $P=0.9116$, obviously not significant. The model is fitted on a log-odds (logit) scale. The AreaB parameter shows that Area B gives a lower proportion than Area A at Time 0. The Time1 parameter shows that Time 1 gives a higher proportion than Time 0 in Area A. 
The AreaB:Time1 parameter is the difference in differences. Another way to fit the logistic regression is to estimate the before-after time effect separately for areas A and B. This shows that the time effect is virtually identical for the two areas, regardless of whether you had an intervention or not: > fit <- glm(Proportion~Area+Area:Time, family=binomial, weights=N) > summary(fit) Call: glm(formula = Proportion ~ Area + Area:Time, family = binomial, weights = N) Deviance Residuals: [1] 0 0 0 0 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -0.03077 0.17543 -0.175 0.86076 AreaB -0.37469 0.26202 -1.430 0.15271 AreaA:Time1 0.85397 0.26599 3.211 0.00132 ** AreaB:Time1 0.81093 0.28204 2.875 0.00404 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 2.3047e+01 on 3 degrees of freedom Residual deviance: -5.3291e-15 on 0 degrees of freedom AIC: 28.523 Number of Fisher Scoring iterations: 3 The time effect in Area A is 0.85397 and that in Area B is 0.81093. The difference in the time effects is $0.81093 - 0.85397 = -0.04304$, which is equal to the interaction estimate we saw in the first regression.
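Because the model is saturated, every coefficient above can be recovered directly from the four cell counts in the question. Here is a Python/numpy check (my illustration; the R code above is the actual analysis) of the interaction estimate and its standard error:

```python
import numpy as np

# Top scores / totals per cell: rows = Area A, Area B; cols = time 0, time 1
top = np.array([[64.0, 82.0], [44.0, 60.0]])
n   = np.array([[130.0, 118.0], [110.0, 100.0]])
no  = n - top

logit = np.log(top / no)                     # empirical log-odds per cell

# Difference in differences on the log-odds scale:
# (B's time effect) - (A's time effect) = the AreaB:Time1 coefficient
did = (logit[1, 1] - logit[1, 0]) - (logit[0, 1] - logit[0, 0])

# Standard error of a log-odds contrast: sqrt of summed reciprocal counts
se = np.sqrt(np.sum(1.0 / top + 1.0 / no))

print(round(did, 5), round(se, 5))           # -0.04304 0.38768
```

Both numbers reproduce the AreaB:Time1 row of the first R summary exactly, which is a useful sanity check on the model specification.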
39,937
How do I perform a statistical test for a Difference-in-Differences analysis?
The easiest solution is to use the linear regression formulation of DID: Regress the binary customer ratings on a constant, a post dummy, an area A dummy, and the interaction of the last two. It may be appropriate to add other regressors measuring characteristics that are time-invariant at the individual level, but whose distribution changes through time at the group level. This may help with the significance issue below if it soaks up some residual variance. The DID is the coefficient on the interaction of post and group A. You can conduct the hypothesis test that it is zero or just look at the p-value or t-stat. Since treatment does not vary within area, the usual standard errors will be off (usually too small, but sometimes too large, when the within-cluster error correlation is negative). The typical solution is to cluster the standard errors by area or cross-section, but with only 2-4 clusters, that will not work well since that is not enough clusters for the asymptotics to kick in. I don't really have a great solution for you here. Stata does compute something below, but it is not likely to be reliable (even with the small number of clusters adjustment). I do suspect that the correct adjustment will not yield significance since the conventional standard error on the DID coefficient is so large. People will often use simulation in cases like this to gauge how far off the SEs are. Here's this analysis for your data: . clear . input area_a time noobs rating area_a time noobs rating 1. 1 0 64 1 2. 1 0 66 0 3. 1 1 82 1 4. 1 1 36 0 5. 0 0 44 1 6. 0 0 56 0 7. 0 1 60 1 8. 0 1 40 0 9. end . egen cs = group(area_a time) . 
reg rating i.area_a##i.time [fw=noobs] Source | SS df MS Number of obs = 448 -------------+---------------------------------- F(3, 444) = 6.05 Model | 4.34181458 3 1.44727153 Prob > F = 0.0005 Residual | 106.149257 444 .239074903 R-squared = 0.0393 -------------+---------------------------------- Adj R-squared = 0.0328 Total | 110.491071 447 .247183605 Root MSE = .48895 ------------------------------------------------------------------------------ rating | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- 1.area_a | .0523077 .0650368 0.80 0.422 -.0755105 .1801259 1.time | .16 .0691484 2.31 0.021 .0241012 .2958988 | area_a#time | 1 1 | .0426076 .0929871 0.46 0.647 -.1401419 .225357 | _cons | .44 .0488953 9.00 0.000 .3439051 .5360949 ------------------------------------------------------------------------------ . reg rating i.area_a##i.time [fw=noobs], vce(cluster area) // or vce(cluster cs) Linear regression Number of obs = 448 F(0, 1) = . Prob > F = . R-squared = 0.0393 Root MSE = .48895 (Std. Err. adjusted for 2 clusters in area_a) ------------------------------------------------------------------------------ | Robust rating | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- 1.area_a | .0523077 1.08e-16 4.8e+14 0.000 .0523077 .0523077 1.time | .16 1.01e-16 1.6e+15 0.000 .16 .16 | area_a#time | 1 1 | .0426076 1.30e-16 3.3e+14 0.000 .0426076 .0426076 | _cons | .44 1.01e-16 4.4e+15 0.000 .44 .44 ------------------------------------------------------------------------------ The interpretation of the interaction coefficient in both specifications is a 4.3 percentage point increase in liking your service post-intervention. The linear model has the advantage of easier interpretation of an additive effect on a probability rather than a multiplicative effect on log odds. 
Moreover, in a non-linear logit model, clustering is problematic since the coefficients are identified up to scale only, the interpretation of interactions and identifying assumptions can get tricky, and the DID is no longer just the cross difference in the four means (see the Puhani paper cited below on the latter point). Finally, in a fully saturated model with all the interactions, the logit and the linear model will give identical point estimates of the cross difference marginal effect (though that is not the parameter you care about): . /* Cross difference to mimic OLS, but wrong */ . logit rating i.area_a##i.time [fw=noobs], nolog Logistic regression Number of obs = 448 LR chi2(3) = 17.87 Prob > chi2 = 0.0005 Log likelihood = -298.57102 Pseudo R2 = 0.0291 ------------------------------------------------------------------------------ rating | Coef. Std. Err. z P>|z| [95% Conf. Interval] -------------+---------------------------------------------------------------- 1.area_a | .2103904 .2671347 0.79 0.431 -.3131839 .7339647 1.time | .6466272 .2867945 2.25 0.024 .0845203 1.208734 | area_a#time | 1 1 | .2073448 .3911528 0.53 0.596 -.5593006 .9739902 | _cons | -.2411621 .2014557 -1.20 0.231 -.6360081 .1536839 ------------------------------------------------------------------------------ . margins r.area_a#r.time Contrasts of adjusted predictions Model VCE : OIM Expression : Pr(rating), predict() ------------------------------------------------ | df chi2 P>chi2 -------------+---------------------------------- area_a#time | 1 0.21 0.6456 ------------------------------------------------ -------------------------------------------------------------------- | Delta-method | Contrast Std. Err. [95% Conf. 
Interval] -------------------+------------------------------------------------ area_a#time | (1 vs 0) (1 vs 0) | .0426076 .0926461 -.1389755 .2241906 -------------------------------------------------------------------- I believe the correct marginal effect of 4.6 percentage points is given by this: /* Puhani's DID Estimator */ gen at = area*time logit rating i.area_a i.time i.at [fw=noobs], nolog margins, at(area_a==1 time==1 at==1) at(area_a==1 time==1 at==0) contrast(atcontrast(a._at) wald) "The Treatment Effect, the Cross Difference, and the Interaction Term in Nonlinear “Difference-in-Differences” Models" by Patrick A. Puhani, Economics Letters, 2012, 115 (1), 85-87.
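For the linear specification the same trick works: with a saturated model, the DID coefficient is just the double difference of the four cell means. A quick Python check (my illustration, using the cell counts from the Stata input above) against the regression output:

```python
# Cell means of the 0/1 rating, taken from the input data above
p_a0 = 64 / 130          # area A, before
p_a1 = 82 / 118          # area A, after
p_b0 = 44 / 100          # area B, before
p_b1 = 60 / 100          # area B, after

# Double difference = the area_a#time coefficient in the OLS output
did = (p_a1 - p_a0) - (p_b1 - p_b0)
print(round(did, 7))     # 0.0426076

# The other coefficients fall out of the same means:
# _cons = p_b0, time = p_b1 - p_b0, area_a = p_a0 - p_b0
print(round(p_b0, 2), round(p_b1 - p_b0, 2), round(p_a0 - p_b0, 7))
```

Matching these hand-computed means against the regression table is a cheap way to confirm the dummies and interaction were coded as intended.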
How do I perform a statistical test for a Difference-in-Differences analysis?
The easiest solution is to use the linear regression formulation of DID: Regress the binary customer ratings on a constant, a post dummy, an area A dummy, and the interaction of last two. It may be a
How do I perform a statistical test for a Difference-in-Differences analysis?

The easiest solution is to use the linear regression formulation of DID: regress the binary customer ratings on a constant, a post dummy, an area A dummy, and the interaction of the last two. It may be appropriate to add other regressors measuring characteristics that are time-invariant at the individual level, but whose distribution changes through time at the group level. This may help with the significance issue below if it soaks up some residual variance. The DID is the coefficient on the interaction of post and group A. You can conduct the hypothesis test that it is zero or just look at the p-value or t-stat.

Since treatment does not vary within area, the usual standard errors will be off (usually too small, but sometimes too large, when the within-cluster error correlation is negative). The typical solution is to cluster the standard errors by area or cross-section, but with only 2-4 clusters, that will not work well since that is not enough clusters for the asymptotics to kick in. I don't really have a great solution for you here. Stata does compute something below, but it is not likely to be reliable (even with the small number of clusters adjustment). I do suspect that the correct adjustment will not yield significance since the conventional standard error on the DID coefficient is so large. People will often use simulation in cases like this to gauge how far off the SEs are.

Here's this analysis for your data:

    . clear

    . input area_a time noobs rating

            area_a       time      noobs     rating
      1. 1 0 64 1
      2. 1 0 66 0
      3. 1 1 82 1
      4. 1 1 36 0
      5. 0 0 44 1
      6. 0 0 56 0
      7. 0 1 60 1
      8. 0 1 40 0
      9. end

    . egen cs = group(area_a time)

    . reg rating i.area_a##i.time [fw=noobs]

          Source |       SS           df       MS      Number of obs   =       448
    -------------+----------------------------------   F(3, 444)       =      6.05
           Model |  4.34181458         3  1.44727153   Prob > F        =    0.0005
        Residual |  106.149257       444  .239074903   R-squared       =    0.0393
    -------------+----------------------------------   Adj R-squared   =    0.0328
           Total |  110.491071       447  .247183605   Root MSE        =    .48895

    ------------------------------------------------------------------------------
          rating |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
        1.area_a |   .0523077   .0650368     0.80   0.422    -.0755105    .1801259
          1.time |        .16   .0691484     2.31   0.021     .0241012    .2958988
                 |
     area_a#time |
            1 1  |   .0426076   .0929871     0.46   0.647    -.1401419     .225357
                 |
           _cons |        .44   .0488953     9.00   0.000     .3439051    .5360949
    ------------------------------------------------------------------------------

    . reg rating i.area_a##i.time [fw=noobs], vce(cluster area)   // or vce(cluster cs)

    Linear regression                               Number of obs     =        448
                                                    F(0, 1)           =          .
                                                    Prob > F          =          .
                                                    R-squared         =     0.0393
                                                    Root MSE          =     .48895

                                   (Std. Err. adjusted for 2 clusters in area_a)
    ------------------------------------------------------------------------------
                 |               Robust
          rating |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
        1.area_a |   .0523077   1.08e-16  4.8e+14   0.000     .0523077    .0523077
          1.time |        .16   1.01e-16  1.6e+15   0.000          .16         .16
                 |
     area_a#time |
            1 1  |   .0426076   1.30e-16  3.3e+14   0.000     .0426076    .0426076
                 |
           _cons |        .44   1.01e-16  4.4e+15   0.000          .44         .44
    ------------------------------------------------------------------------------

The interpretation of the interaction coefficient in both specifications is a 4.3 percentage point increase in liking your service post-intervention. The linear model has the advantage of easier interpretation of an additive effect on a probability rather than a multiplicative effect on log odds.

Moreover, in a non-linear logit model, clustering is problematic since the coefficients are identified up to scale only, the interpretation of interactions and identifying assumptions can get tricky, and the DID is no longer just the cross difference in the four means (see the Puhani paper cited below on the latter point). Finally, in a fully saturated model with all the interactions, the logit and the linear model will give identical point estimates of the cross-difference marginal effect (though that is not the parameter you care about):

    . /* Cross difference to mimic OLS, but wrong */
    . logit rating i.area_a##i.time [fw=noobs], nolog

    Logistic regression                             Number of obs     =        448
                                                    LR chi2(3)        =      17.87
                                                    Prob > chi2       =     0.0005
    Log likelihood = -298.57102                     Pseudo R2         =     0.0291

    ------------------------------------------------------------------------------
          rating |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
        1.area_a |   .2103904   .2671347     0.79   0.431    -.3131839    .7339647
          1.time |   .6466272   .2867945     2.25   0.024     .0845203    1.208734
                 |
     area_a#time |
            1 1  |   .2073448   .3911528     0.53   0.596    -.5593006    .9739902
                 |
           _cons |  -.2411621   .2014557    -1.20   0.231    -.6360081    .1536839
    ------------------------------------------------------------------------------

    . margins r.area_a#r.time

    Contrasts of adjusted predictions
    Model VCE    : OIM

    Expression   : Pr(rating), predict()

    ------------------------------------------------
                 |         df        chi2     P>chi2
    -------------+----------------------------------
     area_a#time |          1        0.21     0.6456
    ------------------------------------------------

    --------------------------------------------------------------------
                       |            Delta-method
                       |   Contrast   Std. Err.     [95% Conf. Interval]
    -------------------+------------------------------------------------
          area_a#time |
    (1 vs 0) (1 vs 0)  |   .0426076   .0926461     -.1389755    .2241906
    --------------------------------------------------------------------

I believe the correct marginal effect of 4.6 percentage points is given by this:

    /* Puhani's DID Estimator */
    gen at = area*time
    logit rating i.area_a i.time i.at [fw=noobs], nolog
    margins, at(area_a==1 time==1 at==1) at(area_a==1 time==1 at==0) contrast(atcontrast(a._at) wald)

"The Treatment Effect, the Cross Difference, and the Interaction Term in Nonlinear “Difference-in-Differences” Models" by Patrick A. Puhani, Economics Letters, 2012, 115(1), 85-87.
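As a sanity check on the regression output, the DID point estimate is just the cross difference of the four cell proportions; a quick Python sketch using the counts from the `input` block above:

```python
# Difference-in-differences as the cross difference of four cell means.
# Cell counts (yes, no) come from the Stata `input` block above.
cells = {
    ("A", "pre"):  (64, 66),   # area A, pre-intervention
    ("A", "post"): (82, 36),   # area A, post-intervention
    ("B", "pre"):  (44, 56),   # area B (control), pre
    ("B", "post"): (60, 40),   # area B (control), post
}

def prop(yes, no):
    """Proportion of positive ratings in a cell."""
    return yes / (yes + no)

p = {k: prop(*v) for k, v in cells.items()}
did = (p[("A", "post")] - p[("A", "pre")]) - (p[("B", "post")] - p[("B", "pre")])
print(round(did, 7))  # 0.0426076, matching the interaction coefficient above
```

The hand computation reproduces the interaction coefficient from the linear model exactly, as it must in the fully saturated case.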
39,938
How do I perform a statistical test for a Difference-in-Differences analysis?
Thanks for the question and response. I have the exact same issue, just that instead of 4 cells I have 8, as below:

    N_for_KPI <- c(683,538,2225,1458,294,307,922,781)
    N <- c(1951,1564,5683,4507,819,862,2479,2511)
    Wave <- factor(c("A","A","B","B","C","C"))
    Brand <- factor(c(0,1,0,1,0,1))
    data = data.frame(N_for_KPI,N)
    Proportion <- N_for_KPI / N
    Proportion
    fit <- glm(Proportion~Wave*Brand, family=binomial, weights=N)
    summary(fit)

Although I got the significance results, I got an error:

    > N_for_KPI <- c(683,538,2225,1458,294,307,922,781)
    > N <- c(1951,1564,5683,4507,819,862,2479,2511)
    > Wave <- factor(c("A","A","B","B","C","C"))
    > Brand <- factor(c(0,1,0,1,0,1))
    > Proportion <- N_for_KPI / N
    > Proportion
    [1] 0.3500769 0.3439898 0.3915186 0.3234968 0.3589744 0.3561485 0.3719242 0.3110315
    > fit <- glm(Proportion~Wave*Brand, family=binomial, weights=N)
    Error in model.frame.default(formula = Proportion ~ Wave * Brand, weights = N, :
      variable lengths differ (found for 'Wave')
    > summary(fit)

    Call:
    glm(formula = Proportion ~ Wave * Brand, family = binomial, weights = N)

    Deviance Residuals:
    [1]  0  0  0  0

    Coefficients:
                 Estimate Std. Error z value Pr(>|z|)
    (Intercept)   -2.9422     0.1047 -28.096   <2e-16 ***
    WaveB          0.0394     0.1203   0.328    0.743
    Brand1        -0.1574     0.1507  -1.045    0.296
    WaveB:Brand1  -0.4487     0.1786  -2.512    0.012 *
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    (Dispersion parameter for binomial family taken to be 1)

        Null deviance:  4.5383e+01  on 3  degrees of freedom
    Residual deviance: -4.9938e-13  on 0  degrees of freedom
    AIC: 35.137

    Number of Fisher Scoring iterations: 3

In addition to the error, when I changed to new sample sizes for N_for_KPI and N, the significant result didn't change.

Can you please help to advise how to fit these 8 cells in this model? Thank you so much in advance!!
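For what it's worth, R's "variable lengths differ" error arises because Wave and Brand have only 6 entries while N_for_KPI and N have 8; with 8 cells, each factor needs 8 entries (the exact wave labels are not given in the post, so any fix would be an assumption). A quick Python check of the lengths and the printed proportions:

```python
# Recreate the vectors from the R session above.
n_for_kpi = [683, 538, 2225, 1458, 294, 307, 922, 781]
n         = [1951, 1564, 5683, 4507, 819, 862, 2479, 2511]
wave  = ["A", "A", "B", "B", "C", "C"]   # only 6 entries: the source of the error
brand = [0, 1, 0, 1, 0, 1]               # likewise only 6 entries

# R's model.frame complains precisely because the lengths differ:
assert len(wave) != len(n)   # 6 vs 8 -> "variable lengths differ (found for 'Wave')"

# The per-cell proportions match the values printed in the R session:
proportions = [k / total for k, total in zip(n_for_kpi, n)]
print([round(p, 7) for p in proportions])  # first entry 0.3500769
```

Once the factors are extended to length 8 (whatever the true wave/brand labels are), the same `glm(..., family=binomial, weights=N)` call should run without the length error.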
39,939
Support vector machine margin term, why norm squared?
As far as I know, the square is introduced in the formulation for convenience. The norm reaches its optimum at the same point as its square, and we get rid of an ugly square root. With respect to the hinge-loss term, the square makes no effective difference either, because of the presence of $\lambda$. Both $f(x)=\|x\|$ and $g(x)=\|x\|^2$ map $\mathbb R^d$ onto $\mathbb R_+$, and for any $w \neq 0$ there exists $\lambda \in \mathbb R$ such that $\lambda \|w\|=\|w\|^2$ (namely $\lambda = \|w\|$). That is, for any solution that you find for the squared objective, you can find exactly the same one for the non-squared objective by tweaking $\lambda$. Since the square is introduced for convenience and it makes no effective difference, I doubt you'll be able to find an intuitive reason for its being there.
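A tiny numeric illustration of that last point. This is a sketch with a single made-up data point ($x = 3$, $y = +1$), where a grid search shows that rescaling $\lambda$ recovers the squared-penalty solution under the plain norm:

```python
# Toy 1-D illustration: the minimizer of hinge(w) + lam * w**2 can be
# recovered from hinge(w) + lam2 * |w| by rescaling the penalty weight.
# One data point x = 3, y = +1, so hinge(w) = max(0, 1 - y*w*x) = max(0, 1 - 3w).

def hinge(w):
    return max(0.0, 1.0 - 3.0 * w)

grid = [i / 300.0 for i in range(-300, 601)]   # includes w = 1/3 exactly

lam = 1.0
w_sq = min(grid, key=lambda w: hinge(w) + lam * w * w)

# Rescale: lam2 = lam * |w_sq| makes lam2*|w| equal lam*w**2 at the optimum.
lam2 = lam * abs(w_sq)
w_abs = min(grid, key=lambda w: hinge(w) + lam2 * abs(w))

print(w_sq, w_abs)  # both equal 1/3: same solution under either penalty
```

The same minimizer comes out of both objectives, consistent with the claim that the square changes the parametrization of the trade-off rather than the set of attainable solutions.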
39,940
Why does keras binary_crossentropy loss function return wrong values? [closed]
A mistake in your code: $$-\frac{1}{N}\sum_{i=1}^N [\color{red}{\hat{y}_i} \log(\hat{y}_i)+(1-y_i) \log(1-\hat{y}_i)]$$ It should be $$-\frac{1}{N}\sum_{i=1}^N [\color{blue}{y_i} \log(\hat{y}_i)+(1-y_i) \log(1-\hat{y}_i)]$$ Your code:

    result.append([y_pred[i][j] * math.log(y_pred[i][j]) +
                   (1 - y_true[i][j]) * math.log(1 - y_pred[i][j])
                   for j in range(len(y_pred[i]))])

should be changed to

    result.append([y_true[i][j] * math.log(y_pred[i][j]) +
                   (1 - y_true[i][j]) * math.log(1 - y_pred[i][j])
                   for j in range(len(y_pred[i]))])

where I have changed your first y_pred to y_true.

Edit: Also, from the keras documentation, the argument order is binary_crossentropy(y_true, y_pred), not binary_crossentropy(y_pred, y_true).
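The corrected loss is short to write out in full; a minimal, self-contained Python sketch (a plain re-implementation for checking values by hand, not keras itself — the clipping constant `eps` is my addition to avoid log(0)):

```python
import math

def binary_crossentropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy: -(1/N) * sum[ y*log(p) + (1-y)*log(1-p) ]."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)   # clip predictions away from 0 and 1
        total += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return -total / len(y_true)

print(round(binary_crossentropy([1, 0], [0.9, 0.1]), 5))  # 0.10536
```

Note that the targets `y_true` come first, matching the keras argument order.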
39,941
Multivariate binary responses - advice on regression strategy
You are making a strong assumption that all the childhood events have equal weight in predicting adult outcomes. But given that, there are several possible ways to proceed. Here are three main approaches, one of which you've already mentioned.

1. Turn the problem backwards to predict the number of childhood events given the outcome status of the 4 events. Use a semiparametric model so as not to impose a distribution on the count, i.e., a proportional odds ordinal logistic model. The parameters of this backwards model will be hard to interpret, but the overall test of association and overall measures of strength of association will be meaningful. Backwards models, when there is only one original predictor (as in your case), are useful because the extent to which X predicts Y is the same as the extent to which Y predicts X in the purely statistical sense.
2. Use a full multivariate model for the 4 binary outcomes. There are several models from econometrics that will handle this situation. See Greene's book Econometric Analysis.
3. Create a hierarchical ordering of A, B, C, D and assign to each person the worst of the 4 events that happened to them. Predict this ordinal outcome with a semiparametric ordinal response model.

You didn't mention your sample size, but that could be an issue. At least 96 observations are needed just to estimate a simple single proportion with no covariates.
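The hierarchical-ordering approach is easy to sketch mechanically; a minimal Python illustration, where the severity ordering D > C > B > A is assumed purely for illustration (the substantive ordering has to come from the application):

```python
# Sketch of the "worst event" construction: collapse four binary outcomes
# into one ordinal outcome by taking the most severe event that occurred.
SEVERITY = ["D", "C", "B", "A"]   # worst first (assumed ordering, for illustration)

def worst_event(outcomes):
    """outcomes: dict like {'A': 1, 'B': 0, 'C': 1, 'D': 0}.
    Returns the worst event label that occurred, or 'none'."""
    for event in SEVERITY:
        if outcomes.get(event, 0) == 1:
            return event
    return "none"

print(worst_event({"A": 1, "B": 0, "C": 1, "D": 0}))  # C
print(worst_event({"A": 0, "B": 0, "C": 0, "D": 0}))  # none
```

The resulting five-level variable (none < A < B < C < D) would then be the response in a proportional odds or similar ordinal model.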
39,942
Multivariate binary responses - advice on regression strategy
The multivariate probit model might be considered, as described in the Greene book mentioned by Frank Harrell. See also Lesaffre and Molenberghs (1991), Statistics in Medicine 10, 1391-1403. The idea is to think of a multivariate normal (4-dimensional) distribution of propensity or tolerance towards each event. You model the multivariate normal mean vector as four functions of the independent variable(s), then estimate the probability of each event given the mean vector via the probit link function. Google the Greene book. You'll find some useful "links".
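The probit-link step can be sketched with nothing more than the error function; a minimal univariate Python illustration for intuition (the full model of course uses the multivariate normal with correlated latent components):

```python
import math

def probit_prob(mu):
    """P(event) under a probit link: the standard normal CDF evaluated at
    the latent mean mu, written via the error function."""
    return 0.5 * (1.0 + math.erf(mu / math.sqrt(2.0)))

print(probit_prob(0.0))            # 0.5: zero latent mean -> 50% event probability
print(round(probit_prob(1.0), 4))  # 0.8413
```

In the multivariate probit, four such latent means (one per event) are modeled jointly, with the correlation matrix capturing the dependence among the four binary outcomes.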
39,943
Difference between smoothing splines and splines in R
Smoothing splines have all the knots (a knot at each data point), but then regularize (shrink the coefficients / smooth the fit) by adding a roughness penalty term (the integrated squared second derivative times a smoothing/tuning parameter) to the least squares criterion. In one way, it's sort of analogous to a kind of "weighted" ridge regression, if you're prepared to regard the way the basis functions come into the penalty as weights. Discrete versions of smoothing splines (which replace the integrated squared derivatives with summed squared differences) have a long history, dating back at least a century. They're different from regression splines, but the two are related in various ways. In between those you have penalized splines, which have fewer than the full complement of knots but still use the roughness penalty to regularize (smooth) the fit. I wouldn't normally regard splines as a way to transform variables but (among other things) as a way to estimate functional relationships -- though if your interest is specifically in identifying some smooth transformation, they could be used for that.
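The discrete version mentioned above (summed squared second differences in place of the integrated squared second derivative) has a simple closed-form solution; a minimal Python/numpy sketch of what is often called a Whittaker smoother:

```python
import numpy as np

def whittaker_smooth(y, lam):
    """Discrete smoothing-spline analogue: minimize ||z - y||^2 + lam * ||D2 z||^2,
    where D2 takes second differences. The minimizer solves (I + lam*D2'D2) z = y."""
    n = len(y)
    d2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * d2.T @ d2, np.asarray(y, dtype=float))

y = [1.0, 3.0, 2.0, 5.0, 4.0]
z = whittaker_smooth(y, 10.0)
# lam = 0 reproduces the data exactly; larger lam shrinks toward a straight line
```

The `(I + lam*D2'D2)` system is the discrete counterpart of the ridge-like shrinkage described above: the penalty matrix plays the role of the ridge weights.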
39,944
Is variance homogeneity check necessary before t-test?
No, it is not necessary. Given that there is a test that accounts for heterogeneous variances (Welch's t-test), you can simply conduct it. For one, the tests for homogeneity of variance (HOV) are problematic in a number of ways. Some lack power, they - like other statistical tests - are too powerful with large sample sizes, effect sizes are missing for these tests, some are faulty under non-normality, ... The typical approach for most applied researchers is to conduct Levene's test, then decide whether to conduct Student's t-test or Welch's t-test based on the result of Levene's test. However, Zimmerman (2004) showed through simulation that conditioning the test on the result of Levene's test distorts the p-value of the test i.e. your p-value from Student's or Welch's is not reliable when you choose which one to do based on Levene's test. Furthermore, given that Welch's test is almost as powerful as Student's test under HOV, and it is much more powerful when HOV is absent, it is advisable to "just do Welch's test". Zimmerman, D. W. (2004). A note on preliminary tests of equality of variances. British Journal of Mathematical and Statistical Psychology, 57(1), 173–181. https://doi.org/10.1348/000711004849222 Here is another paper that gives the same basic advice: Delacre, M., Lakens, D., & Leys, C. (2017). Why Psychologists Should by Default Use Welch’s t-test Instead of Student’s t-test. International Review of Social Psychology, 30(1), 92–101. https://doi.org/10.5334/irsp.82
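Welch's statistic and its Welch-Satterthwaite degrees of freedom are easy to compute directly; a minimal Python sketch (the data here are toy values, not from the question):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    v1, v2 = variance(a), variance(b)   # sample variances (n-1 denominator)
    n1, n2 = len(a), len(b)
    se2 = v1 / n1 + v2 / n2             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / (v1 ** 2 / (n1 ** 2 * (n1 - 1)) + v2 ** 2 / (n2 ** 2 * (n2 - 1)))
    return t, df

t, df = welch_t([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0, 10.0])
# df never exceeds n1 + n2 - 2 (the pooled df) and is often noticeably smaller
```

Since the degrees of freedom adapt to the observed variances, no preliminary homogeneity-of-variance test is needed before computing this.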
39,945
Is variance homogeneity check necessary before t-test?
"According to statistics textbooks, t-tests require the dependent variable to be normally distributed and the variance to be homogenous across conditions"

This is misleading. Generally, introductory statistics textbooks teach two (maybe three, if you count the paired version) two-sample t-tests. Both tests assume that each of the two random samples is an iid normal random sample, and that the samples are independent of each other. They differ in that one (Student's) further assumes the two groups have equal variance, while the other (Welch's) makes no additional assumption, at the cost that the sampling distribution of its test statistic is only approximately t-distributed.

The assumption that both groups have the same variance is unverifiable, because it is an assumption about unobservable variance parameters. However, 1) there do exist tests of the equality of variances between the two groups, and 2) you can sometimes reassure yourself by looking at, say, histograms of the two sets of data and checking that they have roughly the same spread.

Regarding the first technique: like any hypothesis test, it has associated type 1 and type 2 error events. If you decide to formally test equality of variances before you test the means, then since you are running two tests, you need to realize that your overall strategy carries some combined type 1 and type 2 error.
39,946
Is variance homogeneity check necessary before t-test?
Not only is it not necessary, see user162986's answer, it can also imperil the interpretability of your test.
39,947
Calculating error of mean of time series
Clearly you have good statistical intuition, because you are exactly right! Because of correlations between the individual terms, the standard error of the mean of the observations is not an accurate estimate of the error bar on the population mean from time series data. The actual variance of the sample mean $m$ is $$(\delta m)^2 = \frac{1}{n} \left[ g_0 + 2 \sum_{k=1}^{n-1} \frac{n-k}{n} g_k \right]$$ where $g_{k}$ is the covariance between $x_i$ and $x_{i-k}$. It turns out to be a bit of a pain to apply this result. If you just plug in the estimated covariances for the $g_k$, you get quite wrong results, essentially because of correlations between the $g_k$ estimators. There are a number of different ways you can proceed, with various pros and cons. One relatively simple approach without too much downside is to just drop the higher covariances, for which you don't have good estimates anyway; it turns out that using a cutoff $k_{\rm max} \approx \sqrt{n}$ works out alright. For more discussion of these issues, see papers like Ryo Okui, "Asymptotically Unbiased Estimation of Autocovariances and Autocorrelations with Long Panel Data", Econometric Theory (2010) 26: 1263.

An earlier commenter suggested just doing an ARIMA fit and taking the error bar on the mean from the error bar on the mean parameter of the ARIMA model. That's fine if the data are actually well-fit by an ARIMA model. But the approach I am suggesting here is model-independent.
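The formula with the $\sqrt{n}$ cutoff is short to implement; a minimal Python sketch using plug-in sample autocovariances (subject to the caveats about their correlations noted above):

```python
from statistics import mean

def var_of_mean(x, kmax=None):
    """Variance of the sample mean allowing for autocorrelation:
    (1/n) * [ g0 + 2 * sum_{k=1..kmax} ((n-k)/n) * g_k ],
    with plug-in sample autocovariances g_k and a cutoff kmax ~ sqrt(n)."""
    n = len(x)
    if kmax is None:
        kmax = int(n ** 0.5)
    m = mean(x)

    def g(k):
        # Sample autocovariance at lag k (1/n convention).
        return sum((x[i] - m) * (x[i + k] - m) for i in range(n - k)) / n

    return (g(0) + 2.0 * sum((n - k) / n * g(k) for k in range(1, kmax + 1))) / n

x = [0.2, -0.1, 0.4, 0.0, 0.3, -0.2, 0.1, 0.5]
# with kmax=0 this reduces to g0/n, the naive iid formula
```

Note that the truncated plug-in estimate can occasionally come out negative for strongly anticorrelated series, which is one of the practical pains mentioned above.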
39,948
Calculating error of mean of time series
I'm answering my own question for the reference of future people who might find it helpful. I've accepted David Wright's answer, though, because it contains the actual solution and he did all the work. In this case, I used ARIMA, specifically the auto.arima function in the R package forecast. I wanted to be able to implement my solution computationally due to the large size of my data sets, so R was very useful. Using this function, the mean and standard error thereof were simply printed to the screen, saving me a lot of work. Thank you, Carl, for mentioning ARIMA in the first place, and David Wright for expanding upon the process. You were both very helpful. I also enjoyed David's explanation of the model-independent method, and I'll definitely look into that more. I was only able to obtain an estimate for the mean when $d=0$ (that is, the ARIMA model featured no differencing). That makes sense, since the mean would be reduced to zero (or something close?) by differencing, but it's worth mentioning. As it transpired, the optimal ARIMA model (restricted to $(p, 0, q)$, as above) for most — but not all — of my data sets turned out to be $(0, 0, 0)$. If I'm not mistaken (which I may well be!), this means the variation in the data can be represented by white noise and wasn't correlated after all, so I was being over-cautious. Regardless, it's good to have confirmed that, and this question might prove useful to someone with a more strongly correlated time series at some point.
39,949
Calculating error of mean of time series
Maybe "the blocking method" proposed by H. Flyvbjerg and H. G. Petersen in "Error Estimates on Averages of Correlated Data", J. Chem. Phys. 91, 461 (1989) would be useful.
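For reference, the blocking method itself is only a few lines: repeatedly average adjacent pairs and watch the naive variance-of-the-mean estimate plateau. The sketch below is an illustrative Python version (mine, not from the cited paper):

```python
import numpy as np

def blocking_estimates(x):
    """Naive variance-of-the-mean estimate at successive blocking levels.
    At each level adjacent pairs are averaged; for correlated data the
    estimates grow until the blocks exceed the correlation time, then
    plateau near the true variance of the sample mean."""
    x = np.asarray(x, dtype=float)
    estimates = []
    while len(x) >= 2:
        estimates.append(x.var(ddof=1) / len(x))
        if len(x) % 2:          # drop one point if the length is odd
            x = x[:-1]
        x = 0.5 * (x[0::2] + x[1::2])
    return estimates

# Strongly autocorrelated AR(1) data: the level-0 (naive) estimate is
# far too small; the blocked estimates rise towards the true value
rng = np.random.default_rng(2)
x = np.empty(2 ** 14)
x[0] = rng.normal()
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.normal()
est = blocking_estimates(x)
print(est[0], max(est))
```

In practice one looks for the plateau in a plot of the estimates against blocking level rather than simply taking the maximum.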
39,950
How can the confidence interval for standard deviation not include the sample standard deviation?
[Aside: We're using a chi-squared distribution to obtain the confidence interval because this interval is obtained assuming we're sampling from a normal distribution.] While the interval for $\sigma$ not including the observed sample value of $s$ might at first glance seem surprising, it occurs for the simple reason that the distribution of a chi-squared random variable doesn't have its median at its degrees of freedom (that's where its expected value is, but the distribution is skewed -- the median is below the d.f.). For example, with $\nu=3$, the expected value ($3$) is nearly at the 61st percentile. Now for $\alpha$ near 1, $\alpha/2$ and $1-\alpha/2$ will both be very close to $\frac12$ and so the corresponding percentage points of a chi-square will be very close to the median. Consequently, if $\alpha$ is large enough (i.e. if the coverage of the interval, $1-\alpha$, is small enough), both percentage points can turn out to be below the mean. For example, the $0.475$ quantile and the $0.525$ quantile each cut off half of the tail area that totals $0.95$, leaving $0.05$ between those endpoints. Both quantiles are well below $\nu=3$ (which is way up past the $0.60$ quantile -- even a 20% CI would have this issue). As a result, the chi-square percentage points divided by the degrees of freedom can both be below 1. If that happens, both ends of the interval for $\sigma$ are larger than $s$ (the sample standard deviation). In more detail -- the quantiles $\chi^2_{\alpha/2}/\nu$ and $\chi^2_{1-\alpha/2}/\nu$ are both below $1$. The interval for $\sigma^2$ is $(r_1 s^2,r_2 s^2)$ where $r_1=\frac{\nu}{\chi^2_{1-\alpha/2}}$ and $r_2=\frac{\nu}{\chi^2_{\alpha/2}}$ are the reciprocals of those quantities that are below $1$ (making the $r_i>1$). This means that the interval for $\sigma^2$ lies entirely above $s^2$ ...
and the interval for $\sigma$ is obtained by taking square roots of those limits, so the interval for $\sigma$ also doesn't include $s$. Considered more directly the interval for $\sigma$ is $(\sqrt{r_1} s,\sqrt{r_2} s)$ -- and both $\sqrt{r_i}$ values are in turn greater than $1$, so the interval for $\sigma$ also doesn't include $s$. So while the expected value of $s^2$ is $\sigma^2$, $s^2$ is typically smaller than $\sigma^2$ (because the distribution of $s^2$ is skewed right), correspondingly you'd expect a very narrow interval for $\sigma^2$ to sit above the observed $s^2$. This carries through to the interval for $\sigma$. We should tend to see similar effects occur with other intervals that result from skewed distributions.
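The arithmetic can be checked numerically. The sketch below is illustrative and not part of the answer; to stay within the Python standard library it uses the Wilson-Hilferty approximation to the chi-square quantile (in practice one would use a proper quantile function such as scipy.stats.chi2.ppf or R's qchisq):

```python
from statistics import NormalDist
from math import sqrt

def chi2_ppf(p, nu):
    """Wilson-Hilferty approximation to the chi-square quantile
    (good enough here to show the effect; not exact)."""
    z = NormalDist().inv_cdf(p)
    return nu * (1 - 2 / (9 * nu) + z * sqrt(2 / (9 * nu))) ** 3

def ci_sigma(s, nu, conf):
    """Equal-tailed CI for sigma from sample sd s with nu degrees of freedom."""
    alpha = 1 - conf
    lo = s * sqrt(nu / chi2_ppf(1 - alpha / 2, nu))
    hi = s * sqrt(nu / chi2_ppf(alpha / 2, nu))
    return lo, hi

s, nu = 2.0, 3
lo, hi = ci_sigma(s, nu, 0.05)  # a very narrow 5% interval
print(lo, hi)  # both endpoints lie above s = 2.0
```

With 5% coverage the two quantiles straddle the chi-square median, which for $\nu = 3$ is well below $3$, so both $\nu/\chi^2$ ratios exceed one and the whole interval sits above $s$.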
39,951
Multiple regression avPlots vs termplot
termplot and crPlot: Partial-residual plots

These functions display partial residuals on the y-axis and the focal variable on the x-axis together with the corresponding regression line. The slope of the regression line is identical to the coefficient of the focal variable in the full model. Such graphs are also known as component-plus-residual plots or partial-residual plots. They are commonly used to detect possible non-linearity between a specific predictor and the response. Hence, the main use of this type of graph is to determine whether a transformation of the focal predictor $x_i$ is needed. The partial-residual plot is created as follows:

Regress the response $y$ on all predictors. Store the residuals of this model, $r = y - \hat{y} = y -\hat{\beta}X$.
Now add back the estimated influence of the focal predictor $x_i$ to get the partial residuals: $r^{\star}_i=r+\hat{\beta}_ix_i = y-\sum_{j\neq i}\hat{\beta}_jx_j$.
Plot $r^{\star}_i$ vs. $x_i$, possibly adding a regression line.
Using the data from the question:

#=====================================================================
# Partial residual plot
#=====================================================================
set.seed(142857)

sex <- factor(rep(c("Male", "Female"), times = 500))
value1 <- scale(runif(1000, min = 1, max = 10))
value2 <- scale(runif(1000, min = 1, max = 100))
value3 <- scale(runif(1000, min = 1, max = 200))
response <- scale(runif(1000, min = 1, max = 100))

df <- data.frame(sex, response, value1, value2, value3)

model <- lm(response ~ value1 + value2 + value3 + sex, data = df)

# The partial residuals
part_res <- resid(model) + df$value1*coef(model)["value1"]

plot(part_res ~ value1, data = df, ylab = "Partial residuals", xlab = "value1", las = 1)
abline(lm(part_res ~ value1, data = df), col = "steelblue2", lwd = 3)

One can check easily that the plot is identical to the one created by termplot (output not shown here):

termplot(model, terms = "value1", partial.resid = TRUE, se = TRUE, ask = FALSE, las = 1, col.res = "black")

avPlot: Added-variable plots

This function creates so-called added-variable plots, sometimes also called partial-regression plots. This type of graph displays the partial relationship between the response and the focal predictor $x_i$, adjusted for all the other predictors in the model. In effect, the added-variable plot reduces the $(k+1)$-dimensional regression problem to a sequence of 2D graphs (for more focal predictors). This kind of graph is created using the following steps:

Calculate a model regressing $y$ on all predictors except the focal predictor $x_i$. Store the residuals from this model. The residuals from this model are the part of the response $y$ that is not "explained" by all the predictors except for $x_i$.
Regress the focal predictor $x_i$ on all other predictors and store the residuals. These residuals are the part of $x_i$ that is not "explained" by the other predictors (i.e. the part of $x_i$ left when we condition on the other predictors).
Plot the residuals from step 1 on the y-axis and the residuals from step 2 on the x-axis. Add a regression line if you wish.

Again using the above data:

#=====================================================================
# Added-variable plot
#=====================================================================
model2 <- lm(response ~ value2 + value3 + sex, data = df)
resid2 <- residuals(model2)

model3 <- lm(value1 ~ value2 + value3 + sex, data = df)
resid3 <- residuals(model3)

plot(resid2 ~ resid3, las = 1, xlab = "value1 | others", ylab = "response | others")
abline(lm(resid2 ~ resid3), col = "steelblue2", lwd = 3)

This plot has some very useful properties:

As in the partial-residual plot, the slope of the regression line is identical to the slope of the focal predictor $x_i$ in the full model.
In contrast to the partial-residual plot, the residuals of the regression line in the added-variable plot are identical to the residuals of the full model.
Because the values on the x-axis show values of the focal predictor $x_i$ conditional on the other predictors, points far to the left or right are cases for which the value of $x_i$ is unusual given the values of the other predictors. Hence, influential data values can be easily seen.
The plot can be useful to detect nonlinearity, heteroscedasticity and unusual patterns.

Comparison

The Wikipedia page on the partial regression plot summarizes (small changes are mine):

Partial regression plots [added-variable plots] are related to, but distinct from, partial residual plots. Partial regression plots are most commonly used to identify data points with high leverage and influential data points that might not have high leverage. Partial residual plots are most commonly used to identify the nature of the relationship between $Y$ and $X_i$ (given the effect of the other independent variables in the model).
Note that since the simple correlation between the two sets of residuals plotted is equal to the partial correlation between the response variable and $X_i$, partial regression plots will show the correct strength of the linear relationship between the response variable and $X_i$. This is not true for partial residual plots. On the other hand, for the partial regression plot, the x-axis is not $X_i$. This limits its usefulness in determining the need for a transformation (which is the primary purpose of the partial residual plot).

References
Fox J, Weisberg S (2019): An R companion to applied regression. 3rd ed. Sage Publications.
Velleman P, Welsch R (1981): Efficient computing of regression diagnostics. The American Statistician 35(4): 234-242.
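Both slope identities (the added-variable-plot slope and the partial-residual slope equal the full-model coefficient) can be verified numerically. The following is an illustrative Python translation of the R logic, mine rather than part of the answer, using plain least squares in place of lm:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients; X already contains an intercept column."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X_full = np.column_stack([np.ones(n), x1, x2])
beta = ols(X_full, y)               # full model y ~ 1 + x1 + x2

# Added-variable plot for x1: residuals of y ~ 1 + x2 against
# residuals of x1 ~ 1 + x2; the fitted slope equals beta[1]
Z = np.column_stack([np.ones(n), x2])
ry = y - Z @ ols(Z, y)
rx = x1 - Z @ ols(Z, x1)
slope_av = rx @ ry / (rx @ rx)

# Partial-residual plot for x1: full-model residuals plus beta[1]*x1,
# regressed on x1; the slope is again beta[1]
part_res = (y - X_full @ beta) + beta[1] * x1
slope_pr = ols(np.column_stack([np.ones(n), x1]), part_res)[1]

print(beta[1], slope_av, slope_pr)
```

The agreement is exact (up to floating point), by the Frisch-Waugh-Lovell theorem for the added-variable slope and by orthogonality of the OLS residuals for the partial-residual slope.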
39,952
Adding a white noise process to an AR(P) process
You have $$y_t = x_t + v_t \tag{1} $$ and $$ \phi(B)x_t = e_t. $$ Applying $\phi(B)$ to both sides of (1) yields \begin{align} \phi(B)y_t &= \phi(B)x_t + \phi(B) v_t \\ &= e_t + \phi(B) v_t. \tag{2} \end{align} Consider the right hand side of (2). This is clearly a covariance stationary process. By the Wold decomposition theorem it must have a moving average representation. Since the autocovariance function cuts off for lags $k>p$ it must be a $MA(p)$ process, say $(1-\theta_1B-\dots-\theta_p B^p) u_t$. Hence, $y_t$ must be a $ARMA(p,p)$ process. From the left hand side of (2), it is clear that its autoregressive parameters are equal to those of $x_t$. The moving average parameters $\theta_1,\theta_2,\dots,\theta_p$ and the white noise variance $\sigma_u^2$ of this $ARMA(p,p)$ process can be found by equating the autocovariance function of the right hand side of (2) with that of $\theta(B) u_t$ for lags $k=0,1,\dots,p$ and solving the $p+1$ resulting non-linear equations \begin{align} (1+\theta_1^2+\dots+\theta_p^2)\sigma_u^2 &= \sigma_e^2 + (1+\phi_1^2 +\dots +\phi_p^2)\sigma_v^2\\ (-\theta_1 + \theta_1\theta_2 +\dots+\theta_{p-1}\theta_p)\sigma_u^2 &= (-\phi_1 + \phi_1\phi_2 +\dots+\phi_{p-1}\phi_p)\sigma_v^2\\ &\vdots \tag{3} \\ (-\theta_{p-1} + \theta_1\theta_p)\sigma_u^2 &= (-\phi_{p-1} + \phi_1\phi_p)\sigma_v^2 \\ \theta_p \sigma_u^2&= \phi_p\sigma_v^2. \end{align} Here is a R-function that solves these equations and returns the parameters of the $ARMA(p,p)$-model. 
arplusnoise2arma <- function(phi, se = 1, sv) {
  p <- length(phi)  # order of process
  # autocovariance of right hand side
  gamma0 <- ltsa:::tacvfARMA(theta = phi, maxLag = p, sigma2 = sv)
  gamma0[1] <- gamma0[1] + se
  # non-linear equations to solve resulting from equating autocov functions
  f <- function(par) {
    gamma1 <- ltsa::tacvfARMA(theta = par[1:p], maxLag = p, sigma2 = exp(par[p+1]))
    gamma0 - gamma1
  }
  # solve the non-linear system
  fit <- rootSolve:::multiroot(f, c(phi, 1), maxiter = 1000, rtol = 1e-12)
  # parameters of the new ARMA, possibly non-invertible
  theta <- fit$root[1:p]
  sigma2 <- exp(fit$root[p+1])
  # reparameterize the MA-part to make it invertible by moving roots outside unit circle
  r <- 1/polyroot(c(1, -theta))
  for (i in 1:p) {
    if (Mod(r[i]) > 1) {
      sigma2 <- sigma2*r[i]^2
      r[i] <- 1/r[i]
    }
  }
  sigma2 <- Re(sigma2)
  # compute the new coefficients of the MA-polynomial
  polycoef <- 1
  for (i in 1:p) polycoef <- c(polycoef, 0) - r[i]*c(0, polycoef)
  theta <- Re(-polycoef[-1])
  # return the invertible ARMA(p,p) model
  list(model = list(phi = phi, theta = theta, sigma2 = sigma2), estim.precis = fit$estim.precis)
}

The following example checks that the autocovariance functions indeed are the same for a simple stationary AR(3) model and the computed ARMA(3,3) model:

> phi <- c(.2, -.1, .2)
> Mod(polyroot(c(1, -phi)))
[1] 1.678659 1.725853 1.725853
> result <- arplusnoise2arma(phi, 1, .5)
> result
$model
$model$phi
[1]  0.2 -0.1  0.2

$model$theta
[1]  0.07286795 -0.04104890  0.06545496

$model$sigma2
[1] 1.527768

$estim.precis
[1] 4.176867e-14

> do.call(ltsa:::tacvfARMA, c(result$model, maxLag = 10))
 [1]  1.5793650794  0.1904761905 -0.0317460317  0.1904761905  0.0793650794 -0.0095238095
 [7]  0.0282539683  0.0224761905 -0.0002349206  0.0033561905  0.0051899683
> ltsa:::tacvfARMA(phi = phi, theta = NULL, maxLag = 10)
 [1]  1.0793650794  0.1904761905 -0.0317460317  0.1904761905  0.0793650794 -0.0095238095
 [7]  0.0282539683  0.0224761905 -0.0002349206  0.0033561905  0.0051899683
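For the special case $p=1$, the system (3) collapses to two equations with a closed-form solution. The Python sketch below is mine, not part of the answer (and it assumes $\phi_1 \neq 0$); it picks the invertible root directly instead of solving numerically:

```python
from math import sqrt

def ar1_plus_noise_to_arma(phi, se2, sv2):
    """AR(1) plus observation noise with variance sv2:
    phi(B) y_t = e_t + v_t - phi v_{t-1}, an MA(1) process satisfying
      (1 + theta^2) su2 = se2 + (1 + phi^2) sv2   (lag 0)
      theta su2         = phi sv2                 (lag 1)
    Assumes phi != 0."""
    g0 = se2 + (1 + phi ** 2) * sv2
    c = phi * sv2 / g0                           # equals theta / (1 + theta^2)
    theta = (1 - sqrt(1 - 4 * c * c)) / (2 * c)  # invertible root, |theta| < 1
    su2 = phi * sv2 / theta
    return theta, su2

theta, su2 = ar1_plus_noise_to_arma(phi=0.5, se2=1.0, sv2=0.5)
print(theta, su2)
```

Substituting back shows the lag-0 and lag-1 equations hold, which is exactly the $p=1$ instance of the system the R function solves with multiroot.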
39,953
What is the meaning of || (double vertical bar) in this KL divergence equation?
My understanding is that the double bar emphasises that the order of the arguments matters. The reminder is perhaps helpful because KL is used much like a distance, but it's not symmetric, so it's not a distance. The double bars don't actually mean something special over and above, say, a comma.
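To see concretely that the order matters, here is a small illustrative Python computation (mine, not part of the answer) of the two divergences for a pair of discrete distributions:

```python
from math import log

def kl(p, q):
    """D(p || q) for discrete distributions given as probability lists."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.8, 0.1, 0.1]
q = [0.4, 0.4, 0.2]
print(kl(p, q), kl(q, p))  # two different values: the order of arguments matters
```

Both values are non-negative, but D(p || q) and D(q || p) differ, which is why KL is not a distance and why the asymmetric "||" notation is a useful reminder.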
39,954
What is implied by i.i.d.?
We often assume that random variables of our interest are independent and identically distributed (i.i.d.). You may also be interested in the broader term exchangeability (see also here). What this means is that they are independent, so, loosely speaking, knowing one value tells us nothing new about another value, and more formally, if events $A$ and $B$ are independent, then $$ \Pr(A \cap B) = \Pr(A) \, \Pr(B) $$ So the probabilities of observing each of the events alone provide us with full information about the probability of observing those events jointly. They do not influence or interact with each other. By saying that they are identically distributed, we mean that they can be characterized by the same probability distribution. So they are "of the same kind" in terms of their probabilistic behavior.

But if data in a time series are independent, aren't they just noise?

Yes, they are. For time series we are interested in the temporal dependence between the values, so we do not assume independence.

Why should the deterministic part be serially independent?

If it is deterministic, then it can't be independent.
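The product rule for independence can be checked by exact enumeration. A small illustrative Python example (mine, not from the answer) with two fair dice:

```python
from itertools import product

# Two fair dice: A = "first die is even", B = "second die is at least 5".
outcomes = list(product(range(1, 7), repeat=2))
p_a = sum(a % 2 == 0 for a, b in outcomes) / 36
p_b = sum(b >= 5 for a, b in outcomes) / 36
p_ab = sum(a % 2 == 0 and b >= 5 for a, b in outcomes) / 36
print(p_a, p_b, p_ab)  # 1/2, 1/3, and their product 1/6
```

Because the two dice are independent, the joint probability factors exactly into the product of the marginals.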
39,955
Why are all the permutations of i.i.d. samples from a continuous distribution equally likely?
Although this is intuitively obvious, there is merit in providing a rigorous proof because it helps demonstrate that our definitions really do capture what we think ought to be true. The basic idea The independence of two variables $X$ and $Y$ is defined in terms of the probabilities of rectangles: $X$ and $Y$ are independent if and only if $$\Pr(X\in\mathcal{A}\text{ and }Y\in\mathcal{B}) = \Pr(X\in\mathcal{A})\Pr(Y\in\mathcal{B})$$ for all events $\mathcal{A}$ and $\mathcal{B}$. On the other hand, even the simplest example of the question (involving just $n=2$ variables) involves events of the form $X \lt Y$. This event is an infinite triangle, which in general cannot be expressed in terms of a finite--or even a countable--number of rectangles. The crux of the matter is to decompose these triangles into rectangles. It's geometrically obvious that an infinite number of rectangles is needed, so we're going to have to be a little careful. Demonstration for $n=2$ The case $n=2$ captures all the important ideas, so let's begin there and call the two iid variables $X$ and $Y$. Since some kind of limiting argument is needed, let's partition the real numbers at the outset into a union of intervals of length $e \gt 0$, with the intention of letting $e$ grow arbitrarily small. These intervals are going to carve the plane into rectangles. The important ones are these: $$\mathcal{A}(i,e) = (-\infty, ie) \times (ie, (i+1)e],$$ $$\mathcal{A}^\prime(i,e) = (ie, (i+1)e] \times (-\infty, ie),$$ and $$\mathcal{B}(i,e) = (ie, (i+1)e] \times (ie, (i+1)e].$$ The idea to what follows is that the union of red rectangles of the form $\mathcal{A}(i,e)$ just about fills up the region $X \lt Y$, the union of the blue rectangles $\mathcal{A}^\prime(i,e)$ just about fills up the region $Y \lt X$, and the remainder--the union of the squares $\mathcal{B}(i,e)$--has vanishing probability as $e\to 0$ provided the common distribution is continuous. 
The iid assumption shows the chance of each red $\mathcal{A}(i,e)$ equals the chance of its blue counterpart $\mathcal{A}^\prime(i,e)$, showing $\Pr(X\lt Y) = \Pr(Y\lt X)$ in the limit. As usual, let $F(x) = \Pr(X \le x) = \Pr(Y \le x)$ denote the common distribution function for the variables. Let $F_0(x) = \Pr(X \lt x) = \Pr(Y \lt x)$. With these we may compute probabilities of rectangles: $$\eqalign{ \Pr((X,Y)\in \mathcal{A}(i,e)) &= \Pr(X\lt ie)\Pr(ie \lt Y \le (i+1)e) \\ &= F_0(ie)\left(F((i+1)e) - F(ie)\right) \\ &= \left(F((i+1)e) - F(ie)\right)F_0(ie) \\ &= \Pr(ie \lt X \le (i+1)e)\Pr(Y\lt ie) \\ &=\Pr((X,Y)\in \mathcal{A}^\prime(i,e)).\tag{1} }$$ There is the anticipated symmetry: the $\mathcal{A}(i,e)$ nearly fill up the upper triangular region $X \lt Y$ while the $\mathcal{A}^\prime(i,e)$ nearly fill up the lower triangular region $X \gt Y$. Furthermore, because the three kinds of rectangles partition the plane without overlap, the Law of Total Probability asserts $$1 = \sum_{i\in\mathbb{Z}} \Pr(\mathcal{A}(i,e)) + \sum_{i\in\mathbb{Z}} \Pr(\mathcal{A}^\prime(i,e)) + \sum_{i\in\mathbb{Z}} \Pr(\mathcal{B}(i,e)).\tag{2}$$ Consider these last terms, the sums of the squares $\mathcal{B}(i,e)$ straddling the diagonal. Again by independence, the probabilities of these squares are found by multiplying, yielding $$\Pr(\mathcal{B}(i,e)) = (F((i+1)e)-F(ie))^2.$$ When $F$ is continuous, the right hand side has a limit equal to $0$ as $e$ approaches $0$ (from above). This is an elementary (but somewhat technical) fact about continuous functions, so I will forgo the distraction of proving it.
Assuming this fact, taking limits in (2), and exploiting the equality in $(1)$ yields $$\eqalign{ 1 &= \lim_{e\to0^{+}}\sum_{i\in\mathbb{Z}} \Pr(\mathcal{A}(i,e)) \\ &+ \lim_{e\to0^{+}}\sum_{i\in\mathbb{Z}} \Pr(\mathcal{A}^\prime(i,e)) \\ &+ \lim_{e\to0^{+}}\sum_{i\in\mathbb{Z}} \Pr(\mathcal{B}(i,e)) \\ &= \lim_{e\to0^{+}}\left(\Pr(\cup_{i\in\mathbb{Z}} \mathcal{A}(i,e)) + \Pr(\cup_{i\in\mathbb{Z}} \mathcal{A}^\prime(i,e))\right) + 0 \\ &\le 2\Pr(X \lt Y). }$$ Since $2\Pr(X \lt Y) \le 1$, the only possibility is $\Pr(X\lt Y) = \Pr(Y \lt X) = 1/2$. Demonstration for general $n$ The rest is mopping up. When all $X_i, i=1,2,\ldots, n$ are independent for $n\gt 2$, then $X_i$ and $X_j$ are a fortiori independent. The same argument shows that switching $X_i$ and $X_j$ in any ordering of all the variables does not change the probability. Since such swaps generate the group of all permutations, we deduce that all orderings have equal probabilities and the sum of those probabilities is $1$. Since there are $n!$ distinct orderings, each one has a probability of $1/n!$.
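The conclusion that each of the $n!$ orderings has probability $1/n!$ can be checked by simulation. Here is a small sketch (the uniform distribution, $n=3$, and the trial count are arbitrary choices for illustration) tallying how often each of the $3!=6$ orderings of three i.i.d. continuous draws occurs:

```python
import itertools
import random

random.seed(1)
n_trials = 120_000
counts = {perm: 0 for perm in itertools.permutations(range(3))}
for _ in range(n_trials):
    xs = [random.random() for _ in range(3)]
    # the permutation that sorts the sample:
    # indices of the smallest, middle, and largest value
    order = tuple(sorted(range(3), key=lambda i: xs[i]))
    counts[order] += 1

freqs = [c / n_trials for c in counts.values()]
print([round(f, 3) for f in freqs])  # each entry should be near 1/6
```

Ties have probability zero for a continuous distribution, so every trial lands in exactly one of the six orderings.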
39,956
What is cov(X,Y), where X=min(U,V) and Y=max(U,V) for independent Normal(0,1) variables U and V?
$\newcommand{\E}{\mathrm{E}}$ $\newcommand{\Var}{\mathrm{Var}}$ $\newcommand{\cov}{\mathrm{Cov}}$ $\newcommand{\Expect}{{\rm I\kern-.3em E}}$ As a direct consequence of the definition of covariance, $\cov (X,Y)= \E(XY)-\E(X)\E(Y)$. Fact 1: $U, V \overset{i.i.d.}{\sim} \mathcal{N}(0,1)$ $\Rightarrow U - V \sim \mathcal{N}(0,2)$ (sum of normally distributed random variables) $ \Rightarrow |U - V|$ is a half-normal random variable with parameter $\sigma = \sqrt2$ $ \Rightarrow \E (|U - V|) = \frac{\sigma\sqrt{2}}{\sqrt{\pi}} = \frac{\sqrt{2}\sqrt{2}}{\sqrt{\pi}} = \frac{2}{\sqrt{\pi}}$ Fact 2: $\E(X)+\E(Y) = \E(X+Y)$ (linearity of the expectation). We have $\E(X+Y) = \E (\min(U,V)+\max(U,V))= \E(U+V) = \E(U)+\E (V) = 0 + 0 = 0$. As a result, $\E(Y) = -\E(X)$. Fact 3: Since $Y-X = |U - V|$: $2\E(Y) = \E(Y)-\E(X) = \E(Y-X) = \E (|U - V|)= \frac{2}{\sqrt{\pi}}$, hence $\E(Y)= \frac{2}{2\sqrt{\pi}}= \frac{1}{\sqrt{\pi}}$ Fact 4: Since $XY=UV$, we have $\E(XY)=\E(UV)=\E (U)\E (V)=0$ Using these facts: $\cov (X,Y)= \E(XY)-\E(X)\E(Y)= 0 + \E(Y)\E(Y) = \frac{1}{\sqrt{\pi}}\frac{1}{\sqrt{\pi}}=\frac{1}{\pi}$.
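The result $\cov(X,Y) = 1/\pi \approx 0.318$ is easy to confirm by Monte Carlo. A quick sketch (not part of the original answer; the sample size and seed are arbitrary choices):

```python
import math
import random

random.seed(2)
n = 400_000
sum_x = sum_y = sum_xy = 0.0
for _ in range(n):
    u, v = random.gauss(0, 1), random.gauss(0, 1)  # i.i.d. N(0, 1) draws
    x, y = min(u, v), max(u, v)
    sum_x += x
    sum_y += y
    sum_xy += x * y

# sample covariance: E[XY] - E[X] E[Y]
cov = sum_xy / n - (sum_x / n) * (sum_y / n)
print(cov, 1 / math.pi)
```

The same run also lets one check Fact 3, since the sample mean of $Y$ should be near $1/\sqrt{\pi} \approx 0.564$.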
39,957
How to interpret Bootstrap?
The intuitive idea behind the bootstrap is this: if your original dataset was a random draw from the full population, then a subsample taken from that sample (with replacement) also represents a draw from the full population. You can then estimate your model on all of those bootstrapped datasets. This gives you a large number of estimates, so you can e.g. look at the standard deviation of your estimates; it turns out that this often gives a good guess of the standard error of the estimates. Indeed, the standard error of an estimate can be thought of in exactly this way: as the standard deviation of the estimates you would obtain across many datasets drawn from the true population. Suppose for example there is one outlier in your dataset: then in many of your bootstrapped datasets that observation is not included, and so for those datasets you see the estimated coefficients change by a lot. Similarly, you can look at the F statistic for each of the bootstrap datasets. You could for example see how many times the model was rejected. But I am not sufficiently familiar with SPSS to know what it reports as the F stat: is it the average F statistic?
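The "standard deviation of bootstrap estimates as a standard error" idea can be sketched in a few lines. The following example (in Python; the data-generating model, seed, and replication count are invented for illustration) resamples rows of a simple regression dataset with replacement and looks at the spread of the refitted slopes:

```python
import random
import statistics

random.seed(3)
n = 60
x = [random.uniform(0, 10) for _ in range(n)]
y = [2.0 + 0.5 * xi + random.gauss(0, 1) for xi in x]

def slope(xs, ys):
    # ordinary least-squares slope
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

# Case resampling: draw rows with replacement, refit, record the slope
boot = []
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(slope([x[i] for i in idx], [y[i] for i in idx]))

se_boot = statistics.stdev(boot)  # bootstrap standard error of the slope
print(round(se_boot, 3))
```

The spread of `boot` stands in for the spread of slope estimates across hypothetical fresh samples from the population.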
39,958
How to interpret Bootstrap?
As @Superpronker mentioned it really depends on what SPSS is doing with the bootstrap. Including your code and the output would help a great deal. Also the bootstrap is a subject with a vast amount of literature. You could see this by simply looking at the bibliography in my 2007 edition of Bootstrap Methods published by Wiley. So I think you really also need at least a basic tutorial on the bootstrap. Sometimes going to Wikipedia helps with this sort of thing. In regression there are various ways to deal with issues like heteroskedasticity and non-normality. If the F test you refer to is from the OLS solution to linear regression where normality and homoskedasticity is ignored and by non-significance you mean that the F test can't tell you that any of the regression coefficients are different from 0, it may be that you should just ignore it and apply a different approach. The bootstrap can be one approach to deal with the problem. In regression there are two common bootstrap approaches. One is called bootstrapping residuals and the other is called bootstrapping vectors. You should want to find out which one SPSS is using. There is some literature that says bootstrapping vectors is more robust in the sense that it requires fewer assumptions. The vector is the set of observed values of $(Y, X_1, X_2, \ldots, X_k)$ where $Y$ is the dependent variable and the $X_j$ are the $k$ predictor variables in your model. From your problem description we do not know if $k$ is $1$ or $>1$. For each $j$ there is associated with $X_j$ a regression parameter $b_j$ that is estimated. The bootstrapping residuals method takes the $n$ residuals, where $n$ is your sample size, and it samples with replacement from this set of residuals. In the computer program this is done by the Monte Carlo method. The model is $Y=b_1 X_1 + b_2 X_2 + \ldots + b_k X_k +e$ where $e$ is an error term. 
You initially get n residuals by taking $y_i - \hat{b}_1 x_{1i} - \hat{b}_2 x_{2i}- \ldots -\hat{b}_k x_{ki}$ to be the $i$th residual. Here $\hat{b}_j$ denotes the estimate of the regression parameter $b_j$. We use the notation $y_i$ and $x_{ji}$ to represent the $i$th observed value of the dependent variable and the $i$th observed value of the $j$th predictor variable respectively. As this gets complicated, I suggest you look at a reference on bootstrapping residuals. The 1993 Chapman and Hall text by Efron and Tibshirani is one possibility. The end results are bootstrap distributions for each regression parameter and one of several possible bootstrap confidence intervals could be used. Efron's percentile method is the most likely possibility. If the confidence interval does not contain 0 the regression parameter is considered significant.
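As a rough sketch of the residual-bootstrap procedure just described, together with Efron's percentile interval (in Python; the one-predictor simulated data, seed, and replication count are my own illustrative choices, not from the text):

```python
import random
import statistics

random.seed(4)
n = 50
x = [random.uniform(0, 5) for _ in range(n)]
y = [1.0 + 0.8 * xi + random.gauss(0, 0.5) for xi in x]

def fit(xs, ys):
    # least-squares fit of y = a + b x, returns (intercept, slope)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) \
        / sum((u - mx) ** 2 for u in xs)
    return my - b * mx, b

a_hat, b_hat = fit(x, y)
resid = [yi - (a_hat + b_hat * xi) for xi, yi in zip(x, y)]

# Residual bootstrap: keep x fixed, resample residuals with replacement,
# rebuild y*, and refit the model
slopes = []
for _ in range(2000):
    y_star = [a_hat + b_hat * xi + random.choice(resid) for xi in x]
    slopes.append(fit(x, y_star)[1])

slopes.sort()
lo, hi = slopes[49], slopes[1949]  # Efron percentile 95% interval
print(round(lo, 2), round(hi, 2))
```

If the printed interval excludes 0, the slope would be judged significant under the percentile-method rule described above.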
39,959
How to interpret Bootstrap?
As a quick summary, the general bootstrap in SPSS Statistics is described thusly in the help. The Simple method is case resampling with replacement from the original dataset. The Stratified method is case resampling with replacement from the original dataset, within the strata defined by the cross-classification of strata variables. Some procedures have other options. The Algorithms manual, which is available online, covers details for jackknife, case, stratified, residual, and wild resampling. As for the user's original question, the question says "my data is neither normally distributed nor shows homoscedasticity", which could reflect a misconception about what the normality assumption means in regression. It is about the error term, not the variables in the equation. And a question for Michael: your books on bootstrapping are priced on Amazon for Kindle from 107 to 237 dollars! Why? I'd love to read one of these, but the cost is phenomenal. Unfortunately, I don't have a good library as an alternative to purchasing.
39,960
How to simulate random effects models?
Just write down an (algebraic) formula for the model, and simulate from that description. I will give a very simple example, a model with multiple observations of the same subjects, with an exchangeable covariance structure. Such a structure can be represented with a random intercept for each subject. There is also a subject-level covariate $x_i$: $$ y_{ij}=\mu + \alpha x_i + \epsilon_i + \epsilon_{ij} $$ for $i=1,2,\dotsc,n$ and $j=1,\dotsc,k$ within each subject. So this is a balanced model. The same principle is used for unbalanced situations, but that gives more programming. Then we must specify values for the fixed parameters and distributions for the random effects $\epsilon_i, \epsilon_{ij}$. Some simple R code is: N <- 20 # Number subjects k <- 4 # Number obs within subject set.seed(7*11*13) # My public seed id <- as.factor(1:N) x <- runif(N, 1, 5) idran <- rnorm(N, 0, 1) obsran <- rnorm(N*k, 0, 2) mu <- 10. alpha <- 1. X <- rep(x, each=k) Y <- mu + alpha*X + rep(idran, each=k) + obsran A plot of this simulated data is: For more complex situations it would help to use some preprogrammed package; there is a package simstudy on CRAN which can help. See also Model Matrices for Mixed Effects Models and https://stackoverflow.com/questions/30896540/extract-raw-model-matrix-of-random-effects-from-lmer-objects-lme4-r, https://stackoverflow.com/questions/55199251/how-to-create-a-simulation-of-a-small-data-set-in-r.
39,961
How to simulate random effects models?
Here is how I simulate random effects. I'll demonstrate for linear regression, but extending it to a different GLM should be straightforward. Let's start with a random intercept model. The model is usually written as $$ y = XB + Z\gamma $$ where $Z$ is an indicator for the group and $\gamma_i$ is normally distributed with mean 0 and some variance. Simulation of this model is as follows... groups <- 1:5 N <- 250 g <- factor(sample(groups, replace = TRUE, size = N), levels = groups) x <- rnorm(N) X <- model.matrix(~ x) Z <- model.matrix(~ g - 1) beta <- c(10, 2) gamma <- rnorm(length(groups), 0, 0.25) y = X %*% beta + Z %*% gamma + rnorm(N, 0, 0.3) d <- data.frame(y = as.vector(y), x = x, g = g) Let's fit a mixed model and see if we recover some of these estimates library(lme4) model = lmer(y ~ x + (1|g), data = d) summary(model) Linear mixed model fit by REML ['lmerMod'] Formula: y ~ x + (1 | g) Data: d REML criterion at convergence: 136.2 Scaled residuals: Min 1Q Median 3Q Max -2.85114 -0.65429 -0.00888 0.65268 2.63459 Random effects: Groups Name Variance Std.Dev. g (Intercept) 0.05771 0.2402 Residual 0.09173 0.3029 Number of obs: 250, groups: g, 5 Fixed effects: Estimate Std. Error t value (Intercept) 9.95696 0.10914 91.23 x 2.00198 0.01993 100.45 Correlation of Fixed Effects: (Intr) x -0.008 Fixed effects look good, and the group standard deviation (0.25) is estimated pretty accurately, and so is the residual standard deviation. Random slope models are similar. Under the assumption that each slope comes from a normal distribution, we can write the slope as $$ y = Bx + \beta_i x$$ Here $B$ is the population mean and $\beta_i$ is the effect of group $i$. Here is a simulation library(tidyverse) groups <- 1:5 N <- 250 g <- sample(groups, replace = TRUE, size = N) x <- rnorm(N) X <- model.matrix(~ x) B <- c(10, 2) beta <- rnorm(length(groups), 0, 2) y = X %*% B + x*beta[g] + rnorm(N, 0, 0.3) and a model ... 
library(lme4) d = tibble(y, x, g) model = lmer(y ~ x + (x|g), data = d) summary(model) Linear mixed model fit by REML ['lmerMod'] Formula: y ~ x + (x | g) Data: d REML criterion at convergence: 158.9 Scaled residuals: Min 1Q Median 3Q Max -2.95141 -0.65904 0.02218 0.61932 2.66614 Random effects: Groups Name Variance Std.Dev. Corr g (Intercept) 2.021e-05 0.004496 x 3.416e+00 1.848314 1.00 Residual 9.416e-02 0.306856 Number of obs: 250, groups: g, 5 Fixed effects: Estimate Std. Error t value (Intercept) 10.00883 0.01984 504.47 x 2.05913 0.82682 2.49 Correlation of Fixed Effects: (Intr) x 0.099 Here are the coefficients of the 5 groups coef(model) $g (Intercept) x 1 10.00135 -1.015180 2 10.01335 3.919787 3 10.00934 2.270760 4 10.01081 2.873636 5 10.00928 2.246626 and compare them to the true values B[2] + beta -0.9406479 3.9195119 2.2976457 2.8536623 2.3539863
How to simulate random effects models?
Here is how I simulate random effects. I'll demonstrate for linear regression, but extending it to a different GLM should be straight forward. Let's start with a random intercept model. The model is
How to simulate random effects models?

Here is how I simulate random effects. I'll demonstrate for linear regression, but extending it to a different GLM should be straightforward. Let's start with a random intercept model. The model is usually written as

$$ y = XB + Z\gamma $$

where $Z$ is an indicator for the group and $\gamma_i$ is normally distributed with mean 0 and some variance. Simulation of this model is as follows (note that the data frame `d` has to be built before fitting):

    library(lme4)
    library(tidyverse)

    groups <- 1:5
    N <- 250
    g <- factor(sample(groups, replace = TRUE, size = N), levels = groups)
    x <- rnorm(N)
    X <- model.matrix(~ x)
    Z <- model.matrix(~ g - 1)
    beta <- c(10, 2)
    gamma <- rnorm(length(groups), 0, 0.25)
    y <- X %*% beta + Z %*% gamma + rnorm(N, 0, 0.3)
    d <- tibble(y = as.numeric(y), x, g)

Let's fit a mixed model and see if we recover some of these estimates:

    model <- lmer(y ~ x + (1 | g), data = d)
    summary(model)

    Linear mixed model fit by REML ['lmerMod']
    Formula: y ~ x + (1 | g)
       Data: d
    REML criterion at convergence: 136.2

    Scaled residuals:
         Min       1Q   Median       3Q      Max
    -2.85114 -0.65429 -0.00888  0.65268  2.63459

    Random effects:
     Groups   Name        Variance Std.Dev.
     g        (Intercept) 0.05771  0.2402
     Residual             0.09173  0.3029
    Number of obs: 250, groups:  g, 5

    Fixed effects:
                Estimate Std. Error t value
    (Intercept)  9.95696    0.10914   91.23
    x            2.00198    0.01993  100.45

    Correlation of Fixed Effects:
      (Intr)
    x -0.008

Fixed effects look good, the group standard deviation (0.25) is estimated pretty accurately, and so is the residual standard deviation.

Random slope models are similar. Under the assumption that each slope comes from a normal distribution, we can write the slope as

$$ y = Bx + \beta_i x $$

Here $B$ is the population mean and $\beta_i$ is the effect of group $i$. Here is a simulation:

    groups <- 1:5
    N <- 250
    g <- sample(groups, replace = TRUE, size = N)
    x <- rnorm(N)
    X <- model.matrix(~ x)
    B <- c(10, 2)
    beta <- rnorm(length(groups), 0, 2)
    y <- X %*% B + x * beta[g] + rnorm(N, 0, 0.3)

and a model ...

    d <- tibble(y = as.numeric(y), x, g)
    model <- lmer(y ~ x + (x | g), data = d)
    summary(model)

    Linear mixed model fit by REML ['lmerMod']
    Formula: y ~ x + (x | g)
       Data: d
    REML criterion at convergence: 158.9

    Scaled residuals:
         Min       1Q   Median       3Q      Max
    -2.95141 -0.65904  0.02218  0.61932  2.66614

    Random effects:
     Groups   Name        Variance  Std.Dev. Corr
     g        (Intercept) 2.021e-05 0.004496
              x           3.416e+00 1.848314 1.00
     Residual             9.416e-02 0.306856
    Number of obs: 250, groups:  g, 5

    Fixed effects:
                Estimate Std. Error t value
    (Intercept) 10.00883    0.01984  504.47
    x            2.05913    0.82682    2.49

    Correlation of Fixed Effects:
      (Intr)
    x 0.099

Here are the coefficients of the 5 groups:

    coef(model)
    $g
      (Intercept)         x
    1    10.00135 -1.015180
    2    10.01335  3.919787
    3    10.00934  2.270760
    4    10.01081  2.873636
    5    10.00928  2.246626

and compare them to the true values:

    B[2] + beta
    -0.9406479  3.9195119  2.2976457  2.8536623  2.3539863
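The random-intercept setup above can also be checked without a mixed-model fitter at all, via a method-of-moments decomposition of the residual variance. Below is a Python analogue of the R simulation (an illustrative sketch, not part of the original answer; the sizes `n_groups` and `n_per` are made up): after removing the known fixed part, the between-group variance of the residual means should be close to `sd_group^2 + sd_resid^2 / n_per`, and the average within-group variance should be close to `sd_resid^2`.

```python
import random
import statistics

random.seed(0)
n_groups, n_per = 200, 50
beta0, beta1, sd_group, sd_resid = 10.0, 2.0, 0.25, 0.3

# Simulate y = beta0 + beta1*x + gamma_g + noise, one random intercept per group
data = []
for g in range(n_groups):
    gamma = random.gauss(0, sd_group)
    for _ in range(n_per):
        x = random.gauss(0, 1)
        y = beta0 + beta1 * x + gamma + random.gauss(0, sd_resid)
        data.append((g, x, y))

# Remove the (known) fixed part, then split the residual variance into
# a between-group and a within-group component.
resid = [[] for _ in range(n_groups)]
for g, x, y in data:
    resid[g].append(y - beta0 - beta1 * x)

group_means = [statistics.fmean(r) for r in resid]
between = statistics.variance(group_means)                      # ~ sd_group^2 + sd_resid^2/n_per
within = statistics.fmean([statistics.variance(r) for r in resid])  # ~ sd_resid^2
```

This is only a moment check on simulated data, but it makes clear what the lme4 variance components are estimating.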
39,962
CDF of $\frac{X_1X_2}{X_1+X_2+a}$, where $X_1$ and $X_2$ have exp. distributions
Thanks to the indications that @Did gave, I was able to derive the CDF of $Y=\frac{X_1X_2}{X_1+X_2+a}$ (note that in the question I use $X$ instead of $Y$) as follows: Based on the identity of events \begin{align} \left[ Y <y \right] = \left[ X_1 < y \right] \cup \left[ X_1 \ge y, X_2 < y (X_1+a)(X_1-y)^{-1} \right], \end{align} we get the following \begin{align} \nonumber \mathbb{P} \{ Y < y \} &= \mathbb{P}\{X_1 <y\}+ \int_y^\infty \mathbb{P}\{X_2 < y (x+a) (x-y)^{-1} \} f_{X_1}(x) \, dx \\ \nonumber &= 1-e^{-\lambda_1y} + \lambda_1 \int_y^\infty (1-e^{-\lambda_2 y(x+a)(x-y)^{-1}}) e^{-\lambda_1 x} dx \\ \nonumber &= 1-e^{-\lambda_1y}+e^{-\lambda_1y}- \lambda_1 \int_y^\infty e^{-\lambda_2 y(x+a)(x-y)^{-1}} e^{-\lambda_1 x} dx \\ \nonumber & =_{(i)} 1- \lambda_1 \int_{u=0}^\infty e^{ -\lambda_2 y(u+y+a)u^{-1}} e^{-\lambda_1 (u+y)} du \\ \nonumber &= 1- \lambda_1 e^{-(\lambda_1+\lambda_2)y} \int_{u=0}^\infty e^{-\lambda_2 y (y+a)u^{-1}} e^{-\lambda_1u} du \\ & =_{(ii)} 1- \lambda_1 e^{-(\lambda_1+\lambda_2)y} \, 2 \, \sqrt{ y(y+a) \lambda_2 \lambda_1^{-1} } \, K_1\left(2 \sqrt{ y(y+a) \lambda_2 \lambda_1 } \right), \\ & = 1- e^{-(\lambda_1+\lambda_2)y} \, 2 \, \sqrt{ y(y+a) \lambda_2 \lambda_1 } \, K_1\left(2 \sqrt{ y(y+a) \lambda_2 \lambda_1 } \right) \end{align} in which equality (i) is due to the change of variable $u=x-y$ and equality (ii) follows from [Table of Integrals, Series and Products, 7th edition - equation 3.471.9].
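As a sanity check on the final expression (not part of the original derivation), one can evaluate $K_1$ numerically from the integral representation $K_1(z)=\int_0^\infty e^{-z\cosh t}\cosh t\,\mathrm dt$ and compare the closed-form CDF against a Monte Carlo estimate. The parameter values below are arbitrary, and the quadrature settings are ad hoc:

```python
import math
import random

def K1(z, tmax=12.0, n=24000):
    # Modified Bessel function of the second kind, order 1, via the
    # integral representation K_1(z) = int_0^inf exp(-z cosh t) cosh t dt,
    # evaluated with a simple trapezoid rule (the integrand dies out fast).
    h = tmax / n
    total = 0.5 * (math.exp(-z) + math.exp(-z * math.cosh(tmax)) * math.cosh(tmax))
    for i in range(1, n):
        t = i * h
        total += math.exp(-z * math.cosh(t)) * math.cosh(t)
    return total * h

def cdf(y, lam1, lam2, a):
    # F_Y(y) = 1 - exp(-(l1+l2)y) * 2 sqrt(y(y+a) l1 l2) * K1(2 sqrt(y(y+a) l1 l2))
    c = y * (y + a) * lam1 * lam2
    return 1.0 - math.exp(-(lam1 + lam2) * y) * 2.0 * math.sqrt(c) * K1(2.0 * math.sqrt(c))

random.seed(1)
lam1, lam2, a, y0 = 1.0, 1.0, 1.0, 0.5
N = 200_000
hits = 0
for _ in range(N):
    x1 = random.expovariate(lam1)
    x2 = random.expovariate(lam2)
    if x1 * x2 / (x1 + x2 + a) < y0:
        hits += 1
mc_estimate = hits / N
```

The Monte Carlo proportion and the closed form agree to well within simulation error, which is a useful check on the sign and argument conventions in step (ii).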
39,963
CDF of $\frac{X_1X_2}{X_1+X_2+a}$, where $X_1$ and $X_2$ have exp. distributions
Let $X$ and $Y$ denote two independent exponential random variables and suppose that $V = \frac{XY}{X+Y+a}$ where $a > 0$. What is the CDF of $V$?

First, note that $V > 0$. Let $v$ denote a positive constant, and let us try to determine the complementary CDF $P\{V > v\}$ by integrating the joint density of $X$ and $Y$ over that part of the first quadrant where $V$ exceeds $v$. The set in question, call it $A$, is given by \begin{align} A &= \{(x,y)\colon x > 0, y > 0, \frac{xy}{x+y+a} > v\}\\ &= \{(x,y)\colon x > 0, y > 0, xy > v(x+y+a)\}\\ &= \{(x,y)\colon x > 0, y > 0, xy -vx -vy > av\}\\ &= \{(x,y)\colon x > 0, y > 0, (x-v)(y-v) > v^2 + av\}. \end{align}

Now, the graph of the hyperbola $xy = b$ consists of two curves confined to the first and third quadrants respectively and passing through the points $\left(\sqrt{b}, \sqrt{b}\right)$ and $\left(-\sqrt{b}, -\sqrt{b}\right)$ respectively. Therefore, the graph of $(x-v)(y-v) = b$ is just these two curves shifted to the right by $v$ and shifted upwards by $v$, and the two curves now pass through $(\sqrt{b}+v, \sqrt{b}+v)$ and $(-\sqrt{b}+v, -\sqrt{b}+v)$ respectively. Note that the asymptotes of the curves are $x=v, y=v$.

Now, when $b$ equals $v^2+av$, $\sqrt{b} > v$ and so the point $(-\sqrt{b}+v, -\sqrt{b}+v)$ is in the third quadrant. Consequently, the lower branch of the hyperbola does not lie in the first quadrant at all. (It does cross the $x$ and $y$ axes into the second and fourth quadrants but that is immaterial in this problem.) It follows that we can express $A$ as $$A = \{(x,y)\colon x > v, y > v, (x-v)(y-v) > v^2 + av\}.$$

Hence, \begin{align} 1-F_V(v) &= P((X,Y)\in A)\\ &= \iint_A f_{X,Y}(x,y)\, \mathrm dy \, \mathrm dx\\ &= \int_v^\infty f_X(x) \left[ \int_{y = \frac{v^2+av}{x-v}}^\infty f_Y(y)\,\mathrm dy \right] \, \mathrm dx. \end{align}

The inner integral is straightforward to evaluate; the outer one is trickier, needing special functions and tables of integrals to evaluate.
An alternative calculation given in an answer by the OP (with the help of many suggestions from @Did) directly evaluates the CDF $P\{V \leq v\}$ by partitioning the set under consideration into the events $\{X \leq v\}$ and $\left\{X>v, 0 < Y \leq v\frac{x+a}{x-v}\right\}$.
39,964
Why sum of squared errors for logistic regression not used and instead maximum likelihood estimation is used to fit the model? [duplicate]
Firstly, least squares (or sum of squared errors) is a possible loss function to use to fit your coefficients. There's nothing technically wrong with it. However there are a number of reasons why MLE is a more attractive option. In addition to those in the comments, here are two more:

Computational efficiency

Because the likelihood function of a logistic regression model is a member of the exponential family, we can use Fisher's scoring algorithm to efficiently solve for $\beta$. In my experience, this algorithm converges in only a few steps. To solve least squares numerically will likely take longer. Lest this gets lost, per @vbox's comment: learning parameters for any machine learning model (such as logistic regression) is much easier if the cost function is convex. And it's not too difficult to show that, for logistic regression, the cost function for the sum of squared errors is not convex, while the cost function for the log-likelihood is.

MLE has very nice properties

Solutions using MLE have nice properties such as:

- consistency: with more data, our estimate of $\beta$ gets closer to the true value.
- asymptotic normality: with more data, our estimate of $\beta$ is approximately normally distributed, with variance that decreases at rate $O(\frac{1}{n})$.
- functional invariance: a nice property to have when dealing with multiple parameters (nuisance parameters) and calculating the profile likelihood.

Among others.

However, using least squares does have some benefits

Least squares tends to be more robust to outliers, because under squared loss an outlier can be wrong by at most 1 (since $(1-0)^2 = 1$), whereas under a negative log-likelihood loss function the penalty can be arbitrarily large. For more information check this or this out.

Edit

My interpretation of the OP's question is: why do we use MLE instead of a squared loss function to determine $\beta$ in a logistic regression model of the form $$\operatorname{logit}(P(Y=1|X)) = x\beta$$ where $P(Y=1|X) = f(x;\beta) = \frac{e^{x\beta}}{1 + e^{x\beta}} = \frac{1}{1 + e^{-x\beta}}$. So the loss function looks like $$\sum_{i} (y_i - f(x_i;\beta))^2 = \sum_{i} \left(y_i - \frac{1}{1 + e^{-x_i\beta}}\right)^2$$ where the $y_i$'s take values 0/1. When I talk about computational efficiency, I mean finding the $\beta$ which minimizes the above vs. Fisher scoring on the likelihood function.
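To make the computational-efficiency point concrete, here is a minimal Fisher-scoring loop for a one-predictor logistic regression written from scratch (for the canonical logit link, Fisher scoring coincides with Newton's method). This is an illustrative sketch: the data, starting values and tolerances are all made up. It typically converges in a handful of iterations:

```python
import math
import random

random.seed(0)
b_true = (-0.5, 1.5)
data = []
for _ in range(2000):
    x = random.gauss(0, 1)
    p = 1.0 / (1.0 + math.exp(-(b_true[0] + b_true[1] * x)))
    data.append((x, 1 if random.random() < p else 0))

def fisher_scoring(data, max_iter=25, tol=1e-10):
    b0 = b1 = 0.0
    n_iter = 0
    for n_iter in range(1, max_iter + 1):
        # score (gradient of the log-likelihood) and Fisher information
        g0 = g1 = i00 = i01 = i11 = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = p * (1.0 - p)
            g0 += y - p
            g1 += (y - p) * x
            i00 += w
            i01 += w * x
            i11 += w * x * x
        # Newton/Fisher step: solve the 2x2 system I * delta = g by hand
        det = i00 * i11 - i01 * i01
        d0 = (i11 * g0 - i01 * g1) / det
        d1 = (i00 * g1 - i01 * g0) / det
        b0, b1 = b0 + d0, b1 + d1
        if abs(d0) + abs(d1) < tol:
            break
    return b0, b1, n_iter

b0_hat, b1_hat, iters = fisher_scoring(data)
```

With 2000 observations the fit recovers the true coefficients to within sampling error, and convergence takes far fewer iterations than a generic optimizer on the (non-convex) squared-error objective usually would.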
39,965
Why sum of squared errors for logistic regression not used and instead maximum likelihood estimation is used to fit the model? [duplicate]
Maybe I'm not getting the point of ilanman's answer or of some of the comments here, but as far as I can see the answer is simply that OLS = log L(Gaussian), i.e. OLS corresponds to the log-likelihood of a regression with a normal/Gaussian error distribution. You can see this by taking the log of the Gaussian density: the $\sigma$ factors out, and what remains is (up to constants) the negative sum of squares, so OLS maximizes the likelihood. So OLS estimation IS MLE for a Gaussian error. Logistic regression assumes a Bernoulli/binomial error, and that is why you don't do OLS.
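A quick numerical illustration of the OLS = Gaussian-MLE point (a made-up example, not from the answer): for a no-intercept model the Gaussian negative log-likelihood is just a scaled sum of squares, so minimizing it recovers exactly the OLS slope.

```python
import random

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(200)]
ys = [3.0 * x + random.gauss(0, 0.5) for x in xs]

# OLS slope for y = b*x: minimizes sum (y - b x)^2, closed form:
b_ols = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def neg_log_lik(b, sigma=0.5):
    # Gaussian -log L up to an additive constant: SSE / (2 sigma^2);
    # sigma only rescales the objective, so it does not move the minimizer
    return sum((y - b * x) ** 2 for x, y in zip(xs, ys)) / (2.0 * sigma ** 2)

# crude grid search for the MLE around the OLS solution
grid = [b_ols + k / 1000.0 for k in range(-50, 51)]
b_mle = min(grid, key=neg_log_lik)
```

The grid minimum lands exactly on the OLS slope, because the objective is the same parabola in $b$ up to a positive constant.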
39,966
Prove that a distribution is symmetric using moments
It is a fact that the function $$\phi_Y: n \to \mathbb{E}(|Y|^n)^{1/n},\ n \gt 0$$ (known as the $L_n$ norm of $Y$) is nondecreasing. The demonstration at https://stats.stackexchange.com/a/244221 uses Jensen's Inequality (as applied to a strictly convex function). That inequality is a strict inequality whenever $|Y|$ can take on more than one value with positive probability. Letting $Y=X - \bar X$ be the centered version of $X$, from the given values of the variance and fourth (central) moment we deduce $$\mathbb{E}(|Y|^4)^{1/4} = 4^{1/4} = \sqrt{2} = \operatorname{Var}(X)^{1/2} = \mathbb{E}(|Y|^2)^{1/2},$$ which shows $\phi_Y(4) = \phi_Y(2)$. Consequently, because $\phi$ has not decreased, $|Y|$ is almost surely constant, whence $X$ can take on at most two distinct values $\bar X \pm \sqrt{2}$ (almost surely). It is immediate that $X$ takes on each of those values with equal probability: that is, $X$ must be a shifted version of a Bernoulli$(1/2)$ variable that has been scaled by $\sqrt{8}$. The demonstration of (1) (zero third moment), (2) (symmetry about $0$), and (3) (boundedness) is now trivial. Notice that the same conclusions can be drawn whenever there are two moments $k \ne n$ for which $\phi_Y(k)=\phi_Y(n)$.
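The conclusion is easy to verify directly (a quick numerical check, not part of the argument): the centered two-point variable $Y=\pm\sqrt 2$, each value with probability $1/2$, indeed has second moment $2$, fourth moment $4$, and third moment $0$.

```python
# Y takes the values +sqrt(2) and -sqrt(2), each with probability 1/2
vals = [2 ** 0.5, -(2 ** 0.5)]

def moment(k):
    # E[Y^k] for the equal-weight two-point distribution
    return sum(v ** k for v in vals) / len(vals)

m2, m3, m4 = moment(2), moment(3), moment(4)
```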
39,967
Prove that a distribution is symmetric using moments
Here's one approach. This answers $a$, probably $b$, and hopefully $c$. Summarizing what we know: $E[X]=0$, $E[X^2]=\mbox{Var}(X)=2$ and $E[X^4]=4$. Let $m_i:=E[X^i]$. The moments of any probability distribution must satisfy a positive-semidefiniteness condition: every $n\times n$ leading submatrix of the Hankel moment matrix $$H:=\left(\begin{matrix} m_0 & m_1 & m_2 & \cdots \\ m_1 & m_2 & m_3 & \cdots \\ m_2 & m_3 & m_4 & \cdots \\ \vdots & \vdots & \vdots & \ddots \\ \end{matrix}\right)$$ must be positive semidefinite. Picking $n=3$ gives us $$H_3=\left(\begin{matrix} m_0 & m_1 & m_2 \\ m_1 & m_2 & m_3 \\ m_2 & m_3 & m_4 \\ \end{matrix}\right)=\left(\begin{matrix} 1 & 0 & 2 \\ 0 & 2 & m_3 \\ 2 & m_3 & 4 \\ \end{matrix}\right),$$ and a quick hand calculation gives $\mbox{det}(H_3)=-m_3^2$. Since $H_3$ must be positive semidefinite, its determinant is nonnegative, so $-m_3^2 \ge 0$ and hence $m_3=0$. To show that $X$ is symmetric about 0, it suffices to show that all odd moments are zero. I believe you can show this by induction on the Hankel submatrices. To show that $X$ is bounded, the idea I had is the following equivalence: $$P(|X|\leq R)=1 \Leftrightarrow E[|X|^k]\leq R^k, \quad k=1,2,\cdots$$ Maybe you can show this from the Hankel matrices?
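The determinant identity is easy to confirm numerically (an illustrative check; the helper names `det3` and `hankel` are ad hoc):

```python
def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def hankel(m3):
    # the 3x3 Hankel moment matrix with m0=1, m1=0, m2=2, m4=4
    # and a free third moment m3
    return [[1.0, 0.0, 2.0],
            [0.0, 2.0, m3],
            [2.0, m3, 4.0]]

# det = -m3^2 for every choice of m3, so semidefiniteness forces m3 = 0
checks = [(m3, det3(hankel(m3))) for m3 in (-2.0, -0.5, 0.0, 0.7, 3.0)]
```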
39,968
Asymptotically Normally Distributed
But when we say "an estimator is asymptotically normally distributed", what does it mean? Using similar language to your first sentence, when we say an estimator is asymptotically normally distributed, we mean something like as the sample size increases, the sampling distribution of a suitably standardized version of the estimator converges in distribution to some particular normal distribution. Are "central limit theorem" and "asymptotically normally distributed" synonymous? Not in general, I think. Some quantity may be asymptotically normal but not come about as a result of any of the versions of the CLT (at least not in any obvious way - it might perhaps be that all of them can ultimately relate to the CLT, but I suspect it's possible to construct cases that would not). However, very many estimators can be cast as a kind of average of some random variable and in that case a CLT-type argument may be indeed possible. In some other cases you can combine the CLT with some other result to produce an argument that some estimator should be asymptotically normal (so the CLT may be involved but doesn't stand alone as the basis for the asymptotic normality).
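Here is what "asymptotically normal" means operationally, in a simulated illustration (not from the answer; sample sizes are arbitrary): for $X_i \sim \mathrm{Exp}(1)$ the standardized sample mean $\sqrt n(\bar X - 1)$ is approximately $N(0,1)$ for large $n$, so about 95% of replications should fall inside $\pm 1.96$:

```python
import math
import random

random.seed(0)
n, reps = 400, 4000
inside = 0
for _ in range(reps):
    xbar = sum(random.expovariate(1.0) for _ in range(n)) / n
    z = math.sqrt(n) * (xbar - 1.0)   # Exp(1) has mean 1 and sd 1
    if abs(z) < 1.96:
        inside += 1
coverage = inside / reps
```

The underlying exponential is strongly skewed, yet the standardized mean already behaves almost exactly like a standard normal at $n = 400$; this is the CLT-type route to asymptotic normality described above.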
39,969
Can a neural network tell if it has seen an image before?
I suppose you are not interested in only recognizing exactly the same image [1]. Instead, you want to know if an image is extremely similar to one you have already seen, and if so, retrieve that one. Why is that challenging? Let us first take a step back and assume that we were only looking at points in Euclidean space. That would be easy, because we could then check whether the Euclidean distance $d(x, y) = \sqrt{\sum_i (x_i - y_i)^2}$ exceeds a threshold $\tau$, which you will tune to match the desired degree of similarity. However, the Euclidean distance is not meaningful for raw images: in high-dimensional space, nearly all points have roughly equal distance to each other, something commonly referred to as the curse of dimensionality. That is why you need to project the images to some space where the distance actually is meaningful. The easiest way (from a pragmatic perspective) is to get hold of a state-of-the-art ImageNet classifier and use it as a feature extractor: the activations of the last layer before the classification will serve as features. That is, $d_f(x, y) = \sqrt{\sum_i (f(x)_i - f(y)_i)^2}$, with $f$ mapping an image to the last layer's activations, will give you "meaningful" distances. "Meaningful" here refers to the fact that most humans would agree with them. There are also ways to train such feature extractors from scratch; convolutional generative adversarial networks and convolutional variational auto-encoders come to mind. If you want to know more, let me know. [1] If so, use a hash table for fast lookup and then compare.
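The thresholding idea can be sketched as follows. This is a toy Python illustration: in practice `feat` would be the network's last-layer activations, and the class name `ImageIndex`, the keys, and the threshold `tau` are all made up.

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

class ImageIndex:
    """Store feature vectors f(x) and look up near-duplicates by distance."""

    def __init__(self, tau):
        self.tau = tau          # similarity threshold, tuned by hand
        self.items = []         # list of (key, feature-vector) pairs

    def query_or_add(self, key, feat):
        # return the key of a previously seen, similar image; else store feat
        best = min(self.items, key=lambda kv: euclidean(kv[1], feat), default=None)
        if best is not None and euclidean(best[1], feat) <= self.tau:
            return best[0]
        self.items.append((key, feat))
        return None

index = ImageIndex(tau=0.5)
first = index.query_or_add("cat_001", [0.1, 0.9, 0.3])
again = index.query_or_add("cat_001_copy", [0.12, 0.88, 0.31])   # near-duplicate
other = index.query_or_add("dog_001", [0.9, 0.1, 0.7])           # far away
```

The linear scan is only for clarity; with many stored images you would switch to an approximate nearest-neighbor index.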
39,970
Can a neural network tell if it has seen an image before?
You can overfit a neural network with the training images. If you propagate a training image (a known image) through the network, you should get a predicted value very close to the expected one. For the main training task you can use convolutional autoencoders; a convolutional autoencoder is supposed to learn features specific to the training samples.
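The autoencoder idea (low reconstruction error for training-like inputs, high error for novel ones) can be illustrated with its simplest linear cousin, a one-component PCA, in plain Python. This is a stand-in for the convolutional autoencoder, not an equivalent; the 2-D data and thresholds are made up.

```python
import math
import random

random.seed(0)
# training data concentrated along the line y = 2x
train = []
for _ in range(500):
    t = random.gauss(0, 1)
    train.append((t, 2.0 * t + random.gauss(0, 0.05)))

n = len(train)
mx = sum(p[0] for p in train) / n
my = sum(p[1] for p in train) / n
sxx = sum((p[0] - mx) ** 2 for p in train) / n
syy = sum((p[1] - my) ** 2 for p in train) / n
sxy = sum((p[0] - mx) * (p[1] - my) for p in train) / n

# principal axis of the 2x2 covariance matrix (closed form)
theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
ux, uy = math.cos(theta), math.sin(theta)

def reconstruction_error(p):
    # encode: project onto the principal axis; decode: map back;
    # the error is what the one-component "autoencoder" cannot represent
    dx, dy = p[0] - mx, p[1] - my
    proj = dx * ux + dy * uy
    return math.hypot(dx - proj * ux, dy - proj * uy)
```

A point near the training manifold reconstructs almost perfectly, while a point off it does not; thresholding this error is the novelty test.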
39,971
Can a neural network tell if it has seen an image before?
My guess would be that it should be possible, but that it would require two distinct substructures - one to respond to the image and one to classify as old/new. I would do some reading on models of memory such as the ones you find on this page. Also, I'm not sure how robust such a model would be.
39,972
Can a neural network tell if it has seen an image before?
I'm not aware of a paper, but I would suggest the following scheme:

1) Take an image and distort it up to the level of noise you are willing to tolerate.
2) Feed both the image and its distortions to a siamese network and return True.
3) Feed the image paired with all other images and return False.
4) Repeat.

This gives you a prediction of whether two images are the same. You can adjust the distortions in your training data to tune the level of noise you are OK with in practice. The bad news is that you have to maintain a dictionary of ground-truth images. Suppose you don't want to do that. Another way is to compress the image. I don't think you even need a neural net for that; anything like SVD/PCA will work. Once compressed, hash it and save that hash somewhere. Most images that are similar should hash to the same thing, while a new image should hash to something else. You can combine this with other classifiers that count the number of objects in the picture, etc., to create a better hash, but the idea is the same. Let me know if either of these ideas is tenable.
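The compress-then-hash idea can be sketched roughly as follows. The feature vectors are assumed to come from an SVD/PCA compression step that is not shown, and the quantisation step size is an illustrative choice. (Plain quantisation has boundary effects: two near-identical images straddling a bucket edge can still hash differently; locality-sensitive hashing addresses this.)

```python
import hashlib

def quantise(features, step=1.0):
    """Coarsely quantise a (compressed) feature vector so that small
    perturbations of the image map to the same bucket."""
    return tuple(round(f / step) for f in features)

def image_hash(features, step=1.0):
    """Hash the quantised feature vector; `features` is assumed to be the
    output of an SVD/PCA compression of the image."""
    key = ",".join(str(q) for q in quantise(features, step))
    return hashlib.sha256(key.encode()).hexdigest()

# Two slightly different versions of the "same" image hash identically,
# while a clearly different image hashes to something else.
a = [3.02, -1.47, 0.98]
b = [2.97, -1.44, 1.03]   # mild distortion of a
c = [9.40, 4.20, -7.70]   # unrelated image
print(image_hash(a) == image_hash(b))  # True
print(image_hash(a) == image_hash(c))  # False
```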
39,973
How to prove this regularized matrix is invertible?
$X^TX$ is either PD or PSD.* If it's PD, it's already invertible. If it's only PSD, its smallest eigenvalue is $0$, and for any $\lambda>0$ you're making the smallest eigenvalue inside the parentheses positive, so the regularized matrix is invertible. *Consider $\|Xa\|_2^2$ for an arbitrary $a\neq 0$: we can write $\|Xa\|^2_2\ge0$, and since $\|Xa\|_2^2 = a^TX^TXa$, it follows that $a^TX^TXa\ge 0$, which is exactly the definition of PSD. As whuber points out, $(X^TX)_{1,1}$ must be positive for this to work. This will be true whenever the first column of $X$ is not a zero vector. (We can verify that easily: the $(1,1)$ entry is the inner product of the first column of $X$ with itself, a sum of squares of reals, so it is nonnegative and equals zero only when that column is all zeros.)
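A tiny numeric illustration of the eigenvalue-shift argument, using a deliberately rank-deficient $X$ (the matrices and $\lambda$ are made up; plain Python, no libraries):

```python
# A rank-deficient Gram matrix: X has two identical columns, so X^T X is
# singular (PSD but not PD).  Adding lambda * I makes the determinant positive.
X = [[1.0, 1.0],
     [2.0, 2.0]]

def gram(X):
    """Compute X^T X for a matrix given as a list of rows."""
    n = len(X[0])
    return [[sum(row[i] * row[j] for row in X) for j in range(n)]
            for i in range(n)]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

G = gram(X)
lam = 0.5
G_reg = [[G[i][j] + (lam if i == j else 0.0) for j in range(2)] for i in range(2)]

print(det2(G))      # 0.0 -> singular
print(det2(G_reg))  # positive -> invertible
```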
39,974
How to prove this regularized matrix is invertible?
Regularization adds positive entries to the diagonal of $X^TX$; the regularized matrix then has full rank, and a full-rank square matrix is invertible.
39,975
How to prove this regularized matrix is invertible?
Iff $\vec{u}=\langle1, 0, 0, \ldots, 0\rangle$ is a null vector for $X$ (meaning the first column of $X$ is 0), the matrix $X^TX + \lambda M$ will be singular. In the $3\times 3$ case, $$\lambda M\vec{u} = \left[ \begin{matrix} 0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{matrix} \right] \begin{bmatrix} 1\\0\\0 \end{bmatrix} = \vec{0} $$ Thus, $\left( X^TX + \lambda M \right) \vec{u} = X^TX\vec{u} + \lambda M \vec{u} = X^T\vec{0} + \vec{0} = \vec{0}$. Checking positive definiteness: we know that $\vec{v}^T \lambda M\vec{v} > 0$ only if $\vec{v}\neq k\vec{u}$ for some nonzero $k$. Since $X^TX$ is a Gram matrix, it is automatically PSD, so $\vec{v}^T\left(X^TX + \lambda M\right)\vec{v} > 0$ if $\vec{v}\neq k\vec{u}$. Assuming $\vec{v} = k\vec{u}$, \begin{align*} \vec{v}^T\left(X^TX + \lambda M\right)\vec{v} &= (k\vec{u})^T\left(X^TX + \lambda M\right)k\vec{u}\\ &= (k\vec{u})^T\left(X^TX\right)k\vec{u} + (k\vec{u})^T\left(\lambda M\right)k\vec{u}\\ &= k^2\vec{u}^T\left(X^TX\right)\vec{u} + \lambda k^2\vec{u}^T M\vec{u}\\ &= k^2\vec{u}^T\left(X^TX\right)\vec{u} + \lambda k^2\vec{u}^T \vec{0}\\ &= k^2\vec{u}^T\left(X^TX\right)\vec{u}\\ &= k^2\left(\vec{u}^T X^T\right)X\vec{u}\\ &= k^2(X\vec{u})^T X\vec{u}\\ &= k^2\|X\vec{u}\|_2^2\\ &\geq 0 \end{align*} This shows that strict positive definiteness can fail only along $\vec{u}$, and that it does fail exactly when $X\vec{u}=\vec{0}$, so $X^TX + \lambda M$ may not in general be assumed nonsingular. Working backwards, if you assume the matrix $X^TX + \lambda M$ is singular, you know there is some nonzero vector $\vec{v}$ for which $\left(X^TX + \lambda M\right)\vec{v}=\vec{0}$ and, therefore, $\vec{v}^T\left(X^TX + \lambda M\right)\vec{v} = 0$. For $\vec{v} = \langle v_1, v_2, \ldots, v_n\rangle$, $M\vec{v} = \langle 0, v_2, \ldots, v_n\rangle$ and $\vec{v}^T M\vec{v} = \sum_{k=2}^{n} v_k^2$. \begin{align*} 0 &= \vec{v}^T\left(X^TX + \lambda M\right)\vec{v}\\ &= \vec{v}^T\left(X^TX\right)\vec{v} + \vec{v}^T\lambda M\vec{v}\\ &= \|X\vec{v}\|_2^2 + \lambda\sum_{k=2}^{n} v_k^2\\ \end{align*} Clearly, this is only true if $X\vec{v}=\vec{0}$ and $v_k = 0$ for $2 \leq k \leq n$. Therefore, $\vec{v}=k\vec{u}$ for some nonzero $k$, and $X\vec{v}=\vec{0}$ then means that $X$ has a zero first column. In practice, this will not happen, since each entry in the first column of $X$ is set to 1.
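Both directions of the claim can be checked numerically with $2\times 2$ matrices and the $M$ from the answer (identity with a zero in the $(1,1)$ slot, i.e. an unpenalised intercept). The example matrices and $\lambda$ are made up:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def regularised_gram(X, lam):
    """X^T X + lam * M, where M is the identity with a zero in the (1,1)
    slot, so the intercept column is not penalised."""
    G = matmul(transpose(X), X)
    for i in range(1, len(G)):   # skip the first diagonal entry
        G[i][i] += lam
    return G

# Usual design matrix: first column all ones -> regularised matrix invertible,
# even though the second column is collinear with the first.
X_ok = [[1.0, 2.0], [1.0, 2.0]]
print(det2(regularised_gram(X_ok, 0.5)))   # positive

# Pathological case from the answer: first column all zeros -> singular.
X_bad = [[0.0, 2.0], [0.0, 3.0]]
print(det2(regularised_gram(X_bad, 0.5)))  # 0.0
```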
39,976
What are .RDX and .RDB files for R? [closed]
Actually, the link you give leads to just two files. The first is a pdf file, as I'm sure you know. The second is a zip file containing an R package. You are supposed to unzip the file, then copy the folder "crhaz" to the "library" folder of your R installation. Then you run R and type library(crhaz) at the R prompt. You don't have to worry about any of the individual files that you might see in the package folder. The .rdx and .rdb files are binary files storing builds of R code and are only for internal use within R -- you cannot open them yourself. This is what I am sure the authors of the article expect you to do. I will say however that they have done a poor job of distributing their R package. They should provide a build of the package that can be installed using install.packages(), but they have instead just zipped the image of the Windows installation of their package. Later After a little more investigation, I can see that the crhaz package provided by Ronald Geskus with the 2011 article contained only one function, called crprep, and this function has recently been made part of the CRAN package "mstate". So you can now get the function for use in an up-to-date version of R by installing mstate: install.packages("mstate") library(mstate) ?crprep
39,977
Is there a mathematical expression that shows how LASSO shrinks coefficients (including some to zero)?
Firstly, I think it's worth noting that the description of what ridge does assumes that the data matrix is orthonormal. Secondly, the answer to your question is yes under those circumstances. The details may be found in "Elements of Statistical Learning" on p. 69 bis (section 3.4.3). The short story is that the lasso estimate is obtained by soft thresholding the least-squares coefficients: $ \hat\beta \to \text{sign}(\hat\beta)\max(|\hat\beta|-\lambda,0)$. Please see the book for the complete discussion, better formatting, and details.
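The soft-thresholding rule is easy to sketch (this assumes the orthonormal-design setting above; `soft_threshold` is an illustrative helper, not a library function):

```python
def soft_threshold(beta, lam):
    """Lasso soft-thresholding of an OLS coefficient (orthonormal design):
    shrink towards zero by lam, and set exactly to zero once |beta| <= lam."""
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0

coeffs = [2.5, -1.75, 0.25, -0.1]
print([soft_threshold(b, lam=0.5) for b in coeffs])
# [2.0, -1.25, 0.0, 0.0] -- small coefficients are zeroed, large ones shrink
```

Contrast this with ridge, which rescales every coefficient but never sets one exactly to zero.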
39,978
Is there a mathematical expression that shows how LASSO shrinks coefficients (including some to zero)?
The question can be answered when one assumes an orthogonal matrix of predictors. Then the right singular vector matrix $\mathbf V$ equals the identity matrix $\mathbf I$, and the derivation below holds only for this case. Consider the lasso problem $$ \min_{\mathbf \beta} \frac12 ||\mathbf X \mathbf \beta - \mathbf y||^2_2 + \lambda ||\beta||_1 $$ Use the singular value decomposition \begin{align} \mathbf X &= \mathbf U \, \mathbf D\, \mathbf V^T\\ &= \sum_i \mathbf u_i \, d_i \mathbf v^T_i \end{align} and also expand $\beta$ and $\mathbf y$ as \begin{align} \beta &= \sum_i \beta_i \mathbf v_i\,\\ \mathbf y &= \sum_i y_i \mathbf u_i \end{align} Inserting all of this into the lasso functional yields \begin{align} \min_{\beta_i} \left(\sum_i \frac12 |\beta_i d_i - y_i|^2 + \lambda |\beta_i|\right) \end{align} In the SVD basis, the Lasso minimization problem thus decomposes into separate problems for each component. For a vanishing singular value, $d_i=0$, the solution is $\beta_i=0$, so let's consider only the case $d_i >0$. For this, one can rewrite the previous equation as $$\min_{\beta_i} \frac12 \left|\beta_i - \frac{y_i}{d_i}\right|^2 + \frac{\lambda}{d_i^2} \left|\beta_i\right|$$ As stated in the other answer, and also derived here, the solution of this problem is soft thresholding: $$ \beta^{Lasso}_i \ = \ \begin{cases} 0 &,\ |d_i y_i| \le \lambda \\ \frac{y_i}{d_i} - \operatorname{sign}(y_i)\,\frac{\lambda}{d_i^2} & ,\ |d_i y_i| > \lambda \end{cases} $$ Compare this to the corresponding solution of the ordinary least squares problem and ridge regression (with regularization parameter $\alpha$): \begin{align} \beta_i^{OLS} &= \frac{y_i}{d_i}\\ \beta_i^{ridge} &= \frac{y_i}{\tilde d_i}, \quad \text{where}\ \ \tilde d_i = d_i \cdot \frac{d_i^2+\alpha}{d_i^2} \end{align} As is well known, ridge regression can be obtained by scaling a given singular value $d_i$ with a factor that depends on $d_i$ and $\alpha$. On the other hand, the Lasso solution is more complex and also depends on the target $y_i$. With a certain amount of notational forcing, one can write the adjusted singular values as $$ \beta_i^{Lasso} = \frac{y_i}{\bar d_i}, \quad \text{where} \ \ \bar d_i = d_i \frac{1}{\left(1 - \frac{\lambda}{|d_i y_i|}\right)\theta\big(|d_i y_i|-\lambda\big)} $$ where $\theta$ is the Heaviside step function, which is zero for $|d_i y_i|<\lambda$ and otherwise one. EDIT: Please note that for a general predictor matrix, the above statement is wrong, for two reasons: first and informally, if it were correct, it would be the standard approach for an easy Lasso solution. Second, the formal reason why this doesn't work is that the L1-norm is not invariant under orthogonal transformations. So when one does the expansion of $\beta$ in two orthonormal basis sets, $\beta = \sum_i \beta_i \mathbf e_i = \sum_i b_i \mathbf v_i$, one has $\sum_i |\beta_i| \neq \sum_i |b_i|$. So, what it does is not to solve the Lasso problem, but rather the more exotic Lasso problem $$ \min_{\mathbf \beta} \frac12 ||\mathbf X \mathbf \beta - \mathbf y||^2_2 + \lambda ||\mathbf V^T \beta||_1 $$
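The closed form of the scalar subproblem $\min_{b}\ \tfrac12(bd - y)^2 + \lambda|b|$ (for $d>0$) can be sanity-checked against a brute-force grid search. This only checks the per-component problem, not the full lasso, and the test values are made up:

```python
def closed_form(y, d, lam):
    """Closed-form minimiser of 0.5*(b*d - y)**2 + lam*|b|, assuming d > 0."""
    if abs(d * y) <= lam:
        return 0.0
    sign = 1.0 if y > 0 else -1.0
    return y / d - sign * lam / d ** 2

def grid_minimiser(y, d, lam, lo=-5.0, hi=5.0, steps=40001):
    """Brute-force minimiser of the same objective on a fine grid."""
    best_b, best_val = None, float("inf")
    for k in range(steps):
        b = lo + (hi - lo) * k / (steps - 1)
        val = 0.5 * (b * d - y) ** 2 + lam * abs(b)
        if val < best_val:
            best_b, best_val = b, val
    return best_b

for y, d, lam in [(3.0, 1.0, 0.5), (-3.0, 2.0, 1.0), (0.2, 1.0, 1.0)]:
    assert abs(grid_minimiser(y, d, lam) - closed_form(y, d, lam)) < 1e-3
print("closed form matches brute force")
```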
39,979
What is the value of linear dimensionality reduction in the presence of nonlinear alternatives?
I'll expand on some of the points mentioned in the comments, and add a few more.

Computational complexity. PCA is more efficient in terms of both time and memory than more complicated nonlinear dimensionality reduction (NLDR) techniques. This is an important issue when working with large datasets, for which NLDR techniques may not be feasible. Even simple implementations of PCA can work with large data sets, and tricks are available for scaling up massively. Scaling tricks are available for some NLDR techniques (e.g. landmark isomap and online training for autoencoders), but this isn't always the case.

Sometimes linearity is appropriate. Sometimes the data really do lie near a low dimensional linear manifold. In these cases, linear techniques like PCA are most appropriate. Even when the manifold isn't perfectly linear, PCA may give a good enough approximation that more complicated techniques aren't warranted.

Ease of use. PCA is straightforward to use. Given a particular implementation, there aren't any choices to make besides the number of dimensions. NLDR techniques typically require selecting at least one hyperparameter, and in some cases many hyperparameters. Running search procedures for hyperparameter tuning increases the already large computational cost of these methods. It's also necessary to choose one out of dozens of possible NLDR techniques to use in the first place, and the choice isn't always obvious. Different NLDR methods work well in different circumstances, and you may not know a priori which one is most appropriate.

Forward mapping. PCA gives a mapping from the high dimensional to the low dimensional space. This makes it possible to apply the same transformation to out-of-sample data that wasn't part of the training set. This is necessary for cross-validation, and also useful when the same procedure must be extended to new data. Some NLDR techniques (e.g. autoencoders) also provide such a mapping natively, but most don't. Out-of-sample extension procedures have been devised for other NLDR methods, but they add to the complexity of the procedure by requiring additional runtime, learning, and/or hyperparameter tuning for the mapping itself.

Nonlinear downstream algorithms. Dimensionality reduction is often used as a pre-processing step for downstream learning algorithms (e.g. supervised learning). It may not be necessary to learn nonlinear structure during pre-processing, because this can be done by downstream algorithms. If nonlinear structure is present, it may just be necessary to use more principal components than the true/intrinsic dimensionality of the data (e.g. the surface of a hemisphere is intrinsically two dimensional, but can be perfectly preserved using three dimensions). This is not to say that NLDR pre-processing can't help; in some cases it can.

Overfitting. NLDR techniques have a greater capacity to overfit than PCA as a consequence of their increased model complexity, so care must be taken.

Interpretability. In some cases, we may want to use dimensionality reduction to help understand the process that generated the data. PCA weights make it easier to say something in terms of the original dimensions of the data, but this isn't the case for many NLDR methods.

Anthropological issues. PCA is an old, trusted, and widely known standard, which makes it a technique that people often reach for. Paper audiences, clients, and supervisors are more likely to be familiar with it. Awareness of NLDR algorithms is simply not as great, and implementations aren't as widely available.

All of that said, NLDR is an exciting field, and there are clearly cases where NLDR obliterates PCA. I only focus on the virtues of PCA because that's what the question is about. It's all a matter of context; whether PCA or NLDR is more appropriate depends on the situation.
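The "forward mapping" point can be illustrated with a tiny 2-D PCA written out by hand (closed-form eigendecomposition of the $2\times2$ covariance matrix). This is purely a sketch with made-up data, not how one would compute PCA in practice:

```python
import math

def pca_2d(points):
    """Fit a 2-D PCA: return the mean and the first principal axis
    (unit eigenvector of the 2x2 covariance matrix)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]] via the closed form.
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    lam = tr / 2 + math.sqrt(tr ** 2 / 4 - det)
    # Corresponding eigenvector (handle the axis-aligned case sxy == 0).
    if abs(sxy) > 1e-12:
        v = (lam - syy, sxy)
    else:
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v)
    return (mx, my), (v[0] / norm, v[1] / norm)

def project(point, mean, axis):
    """Forward mapping: 1-D coordinate of a (possibly unseen) point."""
    return (point[0] - mean[0]) * axis[0] + (point[1] - mean[1]) * axis[1]

train = [(0.0, 0.0), (1.0, 1.1), (2.0, 1.9), (3.0, 3.2)]
mean, axis = pca_2d(train)
# The same learned transform applies to a point never seen during fitting.
print(project((4.0, 4.0), mean, axis))  # positive: far along the first axis
```

Most NLDR methods learn only the embedding of the training points, so there is no analogue of `project` without an extra out-of-sample extension step.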
39,980
Covariance matrix for a linear combination of correlated Gaussian random variables
If $X$ and $Y$ are correlated (univariate) normal random variables and $Z = AX+BY+C$, then the linearity of expectation and the bilinearity of the covariance function give us \begin{align} E[Z] &= AE[X] + BE[Y] + C,\tag{1}\\ \operatorname{cov}(Z,X) &= \operatorname{cov}(AX+BY+C,X) = A\operatorname{var}(X) + B\operatorname{cov}(Y,X)\\ \operatorname{cov}(Z,Y) &= \operatorname{cov}(AX+BY+C,Y) = B\operatorname{var}(Y) + A\operatorname{cov}(X,Y)\\ \operatorname{var}(Z) &= \operatorname{var}(AX+BY+C) \quad = A^2\operatorname{var}(X) + B^2\operatorname{var}(Y) + 2AB \operatorname{cov}(X,Y), \tag{2}\\ \end{align} but it is not necessarily true that $Z$ is a normal (a.k.a. Gaussian) random variable. That $X$ and $Y$ are jointly normal random variables is sufficient to assert that $Z = AX+BY+C$ is a normal random variable. Note that $X$ and $Y$ are not required to be independent; they can be correlated as long as they are jointly normal. For examples of normal random variables $X$ and $Y$ that are not jointly normal and yet their sum $X+Y$ is normal, see the answers to Is joint normality a necessary condition for the sum of normal random variables to be normal?. As pointed out at the end of my own answer there, joint normality means that all linear combinations $aX+bY$ are normal, whereas in the special case being discussed there, only one linear combination $X+Y$ of non-jointly normal random variables is proven to be normal; most other linear combinations are not normal.
More generally, if $X$ and $Y$ are (column) $n$-vector random variables with $n\times n$ covariance matrices $\Sigma_{X,X}$, $\Sigma_{Y,Y}$, and $n\times n$ crosscovariance matrix $\Sigma_{X,Y}$, $A$ and $B$ are $m\times n$ nonrandom matrices, $C$ is a (column) $m$-vector, and $Z = AX+BY+C$, then it is indeed true that \begin{align} E[Z] &= AE[X] + BE[Y] + C &\quad \scriptstyle{\text{compare with } (1)}\\ \Sigma_{Z,Z} &= A\Sigma_{X,X}A^T + B\Sigma_{Y,Y}B^T + A\Sigma_{X,Y}B^T + B\Sigma_{Y,X}A^T &\quad \scriptstyle{\text{compare with } (2)}\\ \end{align} but, as in the univariate case, it is not necessarily true that $Z$ is a normal vector (in the sense that the $m$ components $Z_i$ are jointly normal random variables). Once again, joint normality of $(X_1, X_2, \ldots, X_n, Y_1, Y_2, \ldots, Y_n)$ suffices to allow the assertion that $Z$ is a normal random vector.
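As a quick sanity check on $(1)$ and $(2)$ in the univariate case, here is a small simulation (Python/NumPy; all constants are arbitrary) comparing the empirical mean and variance of $Z = AX + BY + C$ against the formulas:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = 2.0, -3.0, 5.0                       # arbitrary constants
mu = np.array([1.0, -2.0])                     # E[X], E[Y]
cov = np.array([[2.0, 0.8],
                [0.8, 1.5]])                   # var(X), cov(X,Y); cov(X,Y), var(Y)

xy = rng.multivariate_normal(mu, cov, size=200_000)
Z = A * xy[:, 0] + B * xy[:, 1] + C

mean_theory = A * mu[0] + B * mu[1] + C                                   # equation (1)
var_theory = A**2 * cov[0, 0] + B**2 * cov[1, 1] + 2 * A * B * cov[0, 1]  # equation (2)
```

With these numbers, `mean_theory` is 13 and `var_theory` is 11.9, and the sample mean and variance of `Z` agree to within simulation noise.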
Covariance matrix for a linear combination of correlated Gaussian random variables
Covariance matrix $\Sigma_{xy}$ can be written: $$ \Sigma_{xy} = \left[ \begin{array}{cc} c_{xx} & c_{xy} \\ c_{yx} & c_{yy} \end{array} \right] $$ Where $c_{ab}$ denotes the covariance between $a$ and $b$. (Note symmetric so $c_{ab} = c_{ba}$.) The covariance matrix for $X$,$Y$, and $Z$ would be: $$ \Sigma_{xyz} = \left[ \begin{array}{ccc} c_{xx} & c_{xy} & c_{xz} \\ c_{yx} & c_{yy} & c_{yz} \\ c_{zx} & c_{zy} & c_{zz} \\ \end{array} \right] $$ You need to find the additional terms: $$\begin{align*} c_{zx} &= E[(Z - E[z])(X - E[x])] = \quad ?\\ c_{zy} &= E[(Z - E[z])(Y - E[y])] = \quad ?\\ c_{zz} &= E[(Z - E[z])^2] = \quad ?\\ \end{align*}$$
Is support vector clustering a method for implementing k-means, or is it a different clustering algorithm?
The algorithms are completely different. The only thing they have in common is that both are clustering algorithms. K-means searches for K centers, and an attachment of points to them, such that:

- each point is attached to the closest center
- each center is the average (center of gravity) of all points attached to it

It is done iteratively. We start from random centers, attach each point to the closest center, move each center to the average of the points attached to it, reattach each point to the closest center, move each center to the average of the points now attached to it, and so on until the iterations converge. At the end we have K centers, and each one "owns" all points which are closer to it than to any other center. The hidden assumption is that there are K "real" clusters, each one normally distributed around its center, with all the normal distributions spherical and of the same radius. Support vector clustering has the following idea: transform the points from their original space to a higher-dimensional feature space, and find a minimal enclosing sphere in that feature space. Back in the original space, the sphere becomes a set of disjoint regions, and each region becomes a cluster. (There are also important details, like how we choose the feature space, how we do the transformation, and how we define the disjoint regions.) These disjoint regions are completely different from the regions around the K centers in the K-means algorithm. For example, in 2 dimensions the K-means regions are polygons, while the regions of SVC are amoeba-like areas.
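The K-means iteration described above can be sketched in a few lines (Python/NumPy, a toy Lloyd's-algorithm implementation, not production code; the initialisation and empty-cluster handling are deliberately simplified):

```python
import numpy as np

def kmeans(points, k, init=None, iters=100, seed=0):
    """Toy Lloyd's algorithm: alternate (1) attach each point to the
    closest center, (2) move each center to the mean of its points."""
    rng = np.random.default_rng(seed)
    if init is None:
        centers = points[rng.choice(len(points), k, replace=False)]
    else:
        centers = np.asarray(init, dtype=float)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)              # step 1: attachment
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)                      # step 2: centers of gravity
        ])
        if np.allclose(new_centers, centers):      # the iterations have converged
            break
        centers = new_centers
    return centers, labels

# Two well-separated spherical blobs, which match K-means's hidden assumption.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(10, 0.5, (50, 2))])
centers, labels = kmeans(pts, 2, init=[[1.0, 1.0], [9.0, 9.0]])
```

On data like this, K-means recovers the two blobs; on amoeba-shaped clusters of the kind SVC handles, the polygonal K-means regions would cut them apart.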
Is support vector clustering a method for implementing k-means, or is it a different clustering algorithm?
They are different. The high-level difference is that k-means doesn't use a kernel, and so has black-and-white boundaries between clusters, whereas SVC uses a Gaussian kernel. The significance of the kernel is that it allows for a smooth transition at the boundary rather than a hard change. The paper introducing Support Vector Clustering is by Ben-Hur, Horn, Siegelmann and Vapnik (2001) and is available here - http://www.jmlr.org/papers/volume2/horn01a/rev1/horn01ar1.pdf. However, to understand how the decision boundary in a support vector machine works, it might be more useful to start with something like the Wikipedia page on Support Vector Machines or this earlier CV answer: Help me understand Support Vector Machines. I see that in an answer to another question you asked, it was suggested to read the Elements of Statistical Learning by Hastie, et al. The explanation of the decision boundaries in k-means and support vector machines is especially good in this text.
Robustness of GLM to link function
If you're fitting only nominal categorical predictors (and models of full order), the link function will be of essentially no consequence --- in the sense that it doesn't alter the fit. Here's an example using log and identity links with a Poisson glm. First the data (y is the response, a count, and x1f and x2f have the levels of the factors): y 157 909 249 144 876 248 34 205 62 26 243 48 x1f 1 1 1 1 1 1 2 2 2 2 2 2 x2f 1 2 3 1 2 3 1 2 3 1 2 3 Here's the fitted values for the full model with interaction: fitted(glm(y ~ x1f+x2f+x1f:x2f, family=poisson(link="log"))) 1 2 3 4 5 6 7 8 9 10 11 12 150.5 892.5 248.5 150.5 892.5 248.5 30.0 224.0 55.0 30.0 224.0 55.0 fitted(glm(y ~ x1f+x2f+x1f:x2f, family=poisson(link="identity"))) 1 2 3 4 5 6 7 8 9 10 11 12 150.5 892.5 248.5 150.5 892.5 248.5 30.0 224.0 55.0 30.0 224.0 55.0 We see the fit didn't change even though the link function did. If you're fitting categorical models which leave some interactions out (such as a main effects only model), then the link function can matter, because under some link functions, those interactions may indeed disappear (leaving the smaller model suitable and more easily interpreted) --- but then those simpler, additive models won't be suitable for other link functions. Continuing the earlier example, omitting the interactions: fitted(glm(y ~ x1f+x2f, family=poisson(link="log"))) 1 2 3 4 5 6 7 8 145.65183 900.94330 244.90487 145.65183 900.94330 244.90487 34.84817 215.55670 9 10 11 12 58.59513 34.84817 215.55670 58.59513 fitted(glm(y ~ x1f+x2f, family=poisson(link="identity"))) 1 2 3 4 5 6 7 8 238.72879 618.67616 268.07978 238.72879 618.67616 268.07978 21.90564 401.85300 9 10 11 12 51.25663 21.90564 401.85300 51.25663 Now we see the fitted values are indeed different. In this case, the log link gives a reasonable fit, but the identity link gives quite a poor fit. If you're fitting continuous predictors, then it may matter quite a bit --- even ignoring the issue of interactions. 
One example would be with binomial GLMs --- in many cases, the fit with a probit and a cloglog link can look quite different, even though they both have $g$ taking $(0,1)$ to $\mathbb{R}$. How much it might matter really depends on the specifics of the problem and your tolerance of deviation. In many cases ease of interpretation matters more than differences in fit (at least where those differences tend to be small), but you have competition between how easy the link function is to deal with and how interpretable the linear predictor is, and you also have the issue of potential lack of fit: if your curve relating the mean of your binomial variates to the predictor(s) isn't symmetric, it might be much more interpretable to choose a more suitable link than to expand the model class.
Most suitable optimizer for the Gaussain process likelihood function
I think this is an open-ended question, because a lot will depend on the actual dataset you are optimising against, how close your first candidate solution $s_0$ is to a local optimum, and whether you are interested in / able to use derivative information or not. I have used R's standard optim function, and generally I have found that the L-BFGS-B algorithm is the fastest, or close to the fastest, of the default optimisation algorithms available, at least when I supply a derivative function. In the GPML Matlab Code the authors also provide an L-BFGS-B implementation, so I suspect they too found that the L-BFGS-B algorithm is reasonably competitive when someone provides derivative information within the context of a general application. Another option is to use derivative-free optimisation. The 2013 review paper by Rios and Sahinidis, "Derivative-free optimization: A review of algorithms and comparison of software implementations" in the Journal of Global Optimization, seems to be your best bet for something exhaustive. Within R, the minqa package provides derivative-free optimization by quadratic approximation (QA) routines. The package contains some of Powell's most famous "optimisation children": UOBYQA, NEWUOA and BOBYQA. I have found UOBYQA to be the fastest of the three for toy problems, despite Wikipedia's general advice: "For general usage, NEWUOA is recommended to replace UOBYQA." This is not very surprising: log-likelihoods are smooth functions with well-defined derivatives, so NEWUOA might not enjoy an obvious advantage. Again this shows that there is no silver bullet. On that matter, I have played around with some Particle Swarm Optimisation (PSO) and Covariance Matrix Adaptation Evolution Strategy algorithms, included in the R packages hydroPSO and cmaes respectively, but in general, while faster and far more informative than the canned Simulated Annealing (SANN) in optim, they were not remotely competitive in terms of speed with the QA routines.
Notice that estimating the hyper-parameter vector $\theta$ of a log-likelihood function is usually a smooth and (at least locally) convex problem, so stochastic optimisation generally will not offer a great advantage. To recap: I would suggest using L-BFGS-B with derivative information. If derivative information is hard to obtain (e.g. due to complicated kernel functions), use quadratic approximation routines.
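As a sketch of that recommendation (Python/SciPy rather than R, with a deliberately simple squared-exponential kernel; the data and parameterisation are illustrative only): minimise the GP's negative log marginal likelihood with L-BFGS-B. For brevity the gradient here is left to finite differences; supplying the analytic gradient, as recommended above, is faster.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 30)
y = np.sin(X) + 0.1 * rng.standard_normal(30)      # toy observations

def nll(log_theta):
    """Negative log marginal likelihood of a zero-mean GP with a
    squared-exponential kernel; log_theta = log of (signal variance,
    squared lengthscale, noise variance), keeping all three positive."""
    s2, l2, n2 = np.exp(log_theta)
    K = s2 * np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / l2) + n2 * np.eye(len(X))
    L = np.linalg.cholesky(K)                      # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha
            + np.log(np.diag(L)).sum()
            + 0.5 * len(X) * np.log(2.0 * np.pi))

# L-BFGS-B with box bounds on the log-parameters to keep K well conditioned.
res = minimize(nll, x0=np.zeros(3), method="L-BFGS-B", bounds=[(-4, 4)] * 3)
```

The optimiser should drive `res.fun` below the value of `nll` at the starting point; with analytic gradients passed via `jac=`, the same call converges in fewer function evaluations.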
If two traits have known correlation, can you predict probability they'll "align" for a random pair?
No - knowing the correlation (and even the linear regression formula) between two traits is not enough to predict the probability that the person with the higher BMI will also have the higher blood pressure. See Anscombe's quartet for a visual example of four dissimilar distributions with identical correlations and fitted linear regression lines, which shows how making probability predictions based upon the correlation alone can lead you astray. If you make simplifying assumptions (i.e., a linear relationship between BMI and blood pressure, and normal distributions), then yes, you could construct prediction intervals for new measurements using the least squares equation. However, when working with real-world data I would advise avoiding assumptions about the data distribution. A better alternative would be to use bootstrapping to estimate the cumulative distribution function.
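A sketch of the bootstrap idea (Python/NumPy, on synthetic BMI/blood-pressure data invented purely for illustration): estimate the pairwise "alignment" probability directly from the sample, with no distributional assumptions, and bootstrap it for an interval.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# Synthetic data: a rough linear trend plus noise, for illustration only.
bmi = rng.normal(26, 4, n)
bp = 90 + 1.2 * bmi + rng.normal(0, 10, n)

def p_aligned(x, y):
    """Fraction of all pairs in which the higher x goes with the higher y."""
    i, j = np.triu_indices(len(x), k=1)
    return np.mean((x[i] - x[j]) * (y[i] - y[j]) > 0)

estimate = p_aligned(bmi, bp)

# Nonparametric bootstrap: resample people with replacement, recompute,
# and take percentiles of the resampled estimates.
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot.append(p_aligned(bmi[idx], bp[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

Nothing here assumes linearity or normality; the estimate is just a count over observed pairs.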
If two traits have known correlation, can you predict probability they'll "align" for a random pair?
If two traits have known correlation, can you predict probability they'll “align” for a random pair? It depends on which population correlation you look at. For the Pearson correlation you mention ($\rho$), the answer is "no", at least not without additional assumptions. (RobertF's answer is correct) If instead you know the population Kendall correlation (Kendall's tau, here denoted $\tau_K$) in a continuous bivariate distribution then the answer is actually yes. The population Kendall correlation is the difference between the probability of a concordant pair and the probability of a discordant pair: $$\tau_K = p_C-p_D$$ (the sample Kendall correlation is similarly the difference in sample proportions of concordant and discordant pairs). Since in continuous bivariate populations $p_C+p_D=1$, if you know $\tau_K$ you can calculate $p_C$: $$\tau_K = p_C-p_D = p_C-(1-p_C) = 2p_C-1\,.$$ Hence $p_C = \frac12(\tau_K+1)$, a nice simple result. While $\tau_K$ determines the probability you ask for (at least in the continuous case), the relationship between $\rho$ and $\tau_K$ depends on the structure of the bivariate relationship between the variables (i.e. the copula). If you assume bivariate normality, then you could work out the (nonlinear) connection between $\tau_K$ and $\rho$. In fact this is a well-known result; we have: $$\tau_K = \frac{2}{\pi}\arcsin(\rho)$$ - see sec 5.3.2 of Embrechts et al. (2005) [1], which result can also be found in various places -- for example in Meyer (2009) [2]. So in that case $$p_C = \frac{\arcsin(\rho)}{\pi}+\frac12\,.$$ (However, an assumption of bivariate normality would seem dubious for BMI and blood pressure) This relationship between $\tau_K$ and $\rho$ actually holds for elliptical distributions more generally. See for example Lindskog, McNeil, & Schmock (2003)[3]. However, again, this assumption for BMI and blood pressure may be dubious -- for example, both measures in practice tend to be right-skew.
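The bivariate-normal formula $p_C = \frac{\arcsin(\rho)}{\pi}+\frac12$ is easy to check by simulation (Python/NumPy; $\rho$ chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]

# Draw many independent pairs of people from the same bivariate normal.
a = rng.multivariate_normal([0, 0], cov, size=100_000)
b = rng.multivariate_normal([0, 0], cov, size=100_000)

# A pair is concordant when the differences in both traits share a sign.
p_c_hat = np.mean((a[:, 0] - b[:, 0]) * (a[:, 1] - b[:, 1]) > 0)
p_c_theory = np.arcsin(rho) / np.pi + 0.5      # ≈ 0.705 for rho = 0.6
```

The simulated concordance proportion agrees with the closed form to within Monte Carlo error.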
[1] Embrechts, P., Frey, R., McNeil, A.J. (2005), Quantitative Risk Management: Concepts, Techniques, Tools, Princeton series in Finance, Princeton University Press [2] Meyer, C. (2009), The Bivariate Normal Copula, arXiv:0912.2816v1[math.PR] pdf (December 15) [3] Lindskog, F., McNeil, A.J., Schmock, U., (2003), "Kendall’s tau for elliptical distributions" in: Credit Risk; Measurement, Evaluation and Management, ed. G. Bol et al., Contributions to Economics, Physica-Verlag Heidelberg, pp.149–156. (or see http://www.macs.hw.ac.uk/~mcneil/ftp/KendallsTau.pdf)
If two traits have known correlation, can you predict probability they'll "align" for a random pair?
If two traits have known correlation, can you predict probability they'll “align” for a random pair? It depends on which population correlation you look at. For the Pearson correlation you mention ($
If two traits have known correlation, can you predict probability they'll "align" for a random pair? If two traits have known correlation, can you predict probability they'll “align” for a random pair? It depends on which population correlation you look at. For the Pearson correlation you mention ($\rho$), the answer is "no", at least not without additional assumptions. (RobertF's answer is correct) If instead you know the population Kendall correlation (Kendall's tau, here denoted $\tau_K$) in a continuous bivariate distribution then the answer is actually yes. The population Kendall correlation is the difference between the probability of a concordant pair and the probability of a discordant pair: $$\tau_K = p_C-p_D$$ (the sample Kendall correlation is similarly the difference in sample proportions of concordant and discordant pairs). Since in continuous bivariate populations $p_C+p_D=1$, if you know $\tau_K$ you can calculate $p_C$: $\tau_K = p_C-p_D$ $ = p_C-(1-p_C)$ $ = 2p_C-1$ Hence $p_C = \frac12(\tau_K+1)$, a nice simple result. While $\tau_K$ determines the probability you ask for (at least in the continuous case), the relationship between $\rho$ and $\tau_K$ depends on the structure of the bivariate relationship between the variables (i.e. the copula). If you assume bivariate normality, then you could work out the (nonlinear) connection between $\tau_K$ and $\rho$. In fact this is a well-known result; we have: $$\tau_K = \frac{2}{\pi}\arcsin(\rho)$$ - see sec 5.3.2 of Embrechts et al. (2005) [1], which result can also be found in various places -- for example in Meyer (2009) [2]. So in that case $$p_C = \frac{\arcsin(\rho)}{\pi}+\frac12\,.$$ (However, an assumption of bivariate normality would seem dubious for BMI and blood pressure) This relationship between $\tau_K$ and $\rho$ actually holds for elliptical distributions more generally. See for example Lindskog, McNeil, & Schmock (2003)[3]. 
However, again, this assumption for BMI and blood pressure may be dubious -- for example, both measures in practice tend to be right-skew. [1] Embrechts, P., Frey, R., McNeil, A.J. (2005), Quantitative Risk Management: Concepts, Techniques, Tools, Princeton series in Finance, Princeton University Press [2] Meyer, C. (2009), The Bivariate Normal Copula, arXiv:0912.2816v1[math.PR] pdf (December 15) [3] Lindskog, F., McNeil, A.J., Schmock, U., (2003), "Kendall’s tau for elliptical distributions" in: Credit Risk; Measurement, Evaluation and Management, ed. G. Bol et al., Contributions to Economics, Physica-Verlag Heidelberg, pp.149–156. (or see http://www.macs.hw.ac.uk/~mcneil/ftp/KendallsTau.pdf)
If two traits have known correlation, can you predict probability they'll "align" for a random pair?
I recommend increasing the number of variables you are measuring: age, gender, location, etc. Weight them in your formula to lower the probability of false negatives. Maximize your ROC curve. It would be interesting to see a model that keeps the same correlation given datasets over different decades.
Is the logarithmic transformation sufficient to tame every distribution?
The answer is NO. You can construct distributions that remain untamed by the log, following the example of the log-Cauchy distribution.
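To see why one log is not enough here: if $Y$ is (standard) log-Cauchy, then $\log Y$ is standard Cauchy, whose tail probability $P(X > k) = \tfrac12 - \arctan(k)/\pi$ decays only like $1/(\pi k)$, so it still has no finite mean. A small Python sketch of that tail (standard library only; the helper name is my own):

```python
import math

def cauchy_tail(k):
    """P(X > k) for a standard Cauchy variable X (= the log of a log-Cauchy)."""
    return 0.5 - math.atan(k) / math.pi

# After one log, a log-Cauchy becomes Cauchy, whose tail still decays
# only like 1/(pi*k) -- far too slowly for any moment to exist.
for k in (10, 100, 1000):
    print(k, cauchy_tail(k), 1.0 / (math.pi * k))
```

The printed tail probabilities track $1/(\pi k)$ closely, confirming the polynomial decay that makes the distribution "untamed" even after the transformation.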
Is the logarithmic transformation sufficient to tame every distribution?
No. Consider the following situations:

- If your distribution is discrete and only takes a few different values, taking logs generally doesn't really "tame" anything. E.g. with a 99.9% chance of 2.8 and a 0.1% chance of ten billion, take natural logs and you have a 99.9% chance of 1.03 and a 0.1% chance of 23(ish). Take logs again and you have a 99.9% chance of 0.03 and a 0.1% chance of ~3.14. Take logs again and you have a 99.9% chance of -3.53 and a 0.1% chance of ~1.14. In each case your distribution remains a scaled Bernoulli, so it has exactly the same skewness ($\gamma_1 \approx 31.5$), and exactly the same proportion of the distribution beyond 3, 10, and 30 sd's above the mean.

- If your distribution is symmetric or only mildly right skew, taking logs will often make it distinctly left skew (and if your distribution is left-skew, taking logs will generally make it more left skew).

- Take any random variable, $X$, with a distribution you regard as "just" inside the boundary of "tame" (set up so that $e^X$ is definitely not tame), however you want to measure it. Exponentiate twice ($Y=e^{(e^X)}$). Taking logs only once leaves $Y$ as "not tame" by that measure of tame.

- As Nick Cox points out in comments, you can't take logs of values that aren't positive -- consider a symmetric distribution on the real line that's "not tame" in the tail (it doesn't have to be centered at 0, but let's do that anyway). You can't even take logs of the non-positive values, so trying to take logs won't work.
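The invariance claimed in the first example (taking logs leaves the skewness of a two-point distribution unchanged, since the result is just another scaled, shifted Bernoulli) can be checked directly. A minimal Python sketch (the helper function is my own):

```python
import math

def two_point_skewness(a, b, p):
    """Population skewness of a variable equal to a with probability 1 - p
    and b with probability p (a scaled, shifted Bernoulli)."""
    mean = (1 - p) * a + p * b
    var = (1 - p) * (a - mean) ** 2 + p * (b - mean) ** 2
    third = (1 - p) * (a - mean) ** 3 + p * (b - mean) ** 3
    return third / var ** 1.5

p = 0.001
s0 = two_point_skewness(2.8, 1e10, p)                       # original values
s1 = two_point_skewness(math.log(2.8), math.log(1e10), p)   # after one log
print(s0, s1)  # both about 31.6 -- taking logs changed nothing
```

The skewness of a two-point distribution depends only on $p$ (it is invariant to location and scale, and logging a two-point distribution just moves the two points), so the two values match exactly.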
Why does the sum of residuals equal 0 from a graphical perspective?
To fit $\hat y = b_1 + b_2 x$ by OLS we minimise the residual sum of squares, $RSS = \sum_{i=1}^n e_i^2$. As the question states, we can do this analytically by setting $$\frac{\partial RSS}{\partial b_1}=0 , \qquad \frac{\partial RSS}{\partial b_2}=0 $$ and solving the resulting normal equations.

Note that from the normal equations we can deduce $\sum_{i=1}^n e_i = 0$ and $\sum_{i=1}^n x_i e_i = 0$; the latter is equivalent (think about the inner or "dot" product) to stating that the vector of observations $\mathbf{x} = (x_1, x_2, \dots, x_n)^t$ in $\mathbb{R}^n$ is orthogonal (perpendicular) to the vector of residuals $\mathbf{e} = (e_1, e_2, \dots, e_n)^t$, and analogously your requirement that the sum of residuals is zero is equivalent to the statement that $\mathbf{e}$ is orthogonal to the vector of ones, $\mathbf{1}_n=(1,1,\dots,1)^t$. Both these results can be seen geometrically from knowing the design matrix $\mathbf{X}$ includes a column of ones to represent the intercept term and another column for the $x_i$ data, and that the vector of residuals is orthogonal to each column of the design matrix because the hat matrix $\mathbf{H}$ is an orthogonal projection onto the column space of $\mathbf{X}$. For more on how to interpret the diagram, see Geometric interpretation of multiple correlation coefficient $R$ and coefficient of determination $R^2$.

But that takes place in the $n$-dimensional subject space; it would be nice to develop some intuition from the scatter plot itself. To illustrate OLS geometrically, draw a vertical line segment from each point $(x_i, y_i)$ to its fitted value on the regression line $(x_i, \hat y_i)$, then draw a square with this side. The length of this segment is the magnitude of the residual $|e_i|$ (note $e_i$ is positive for points above the line, e.g. with the blue square, negative for points below, e.g. with the red square) and the area of the square is $e_i^2$.
I have illustrated two example points, and for convenience placed the squares on whichever side avoids overlapping the regression line. The RSS is the sum of the areas of all the squares (both red and blue squares counting as positive area), and to find the OLS solution we seek $b_1$ and $b_2$ that minimise this area.

Now take any regression line (not necessarily the OLS one) and consider translating its intercept up by $\delta b_1$. I have drawn the new regression line as dotted, and overlaid the residual squares based on both new and original fitted values. For the point on the left with a positive residual, this shift in the regression line has reduced the area of the residual square — this is logical since the observed point lay above the line, and the regression line has moved up closer to it, so the fit is better. Two $\color{blue}{\text{blue}}$ rectangular strips (which I consider to extend the whole length of the larger, original $e_1$ by $e_1$ square) have been cut off the left and bottom sides. But these overlap (the grey square at bottom left) so subtracting both rectangles has double-counted the grey square, and we need to add this area back on once again. Overall the new (reduced) residual square is found from the original square, minus the two blue rectangular strips, plus the grey square.

For points below the line with a negative residual, raising the line has worsened the fit and made the residual more negative; the residual square has grown by two $\color{red}{\text{red}}$ rectangular strips (running across the sides of the smaller, original $e_2$ by $e_2$ square) plus the grey square in the upper right.
If we dissect the residual squares for all data points in this way, then total the results,

$$\color{Gold}{\text{New RSS}} = \text{Old RSS} + \sum \color{red}{\text{red rectangles}} - \sum \color{blue}{\text{blue rectangles}} + \sum \color{grey}{\text{grey squares}}$$

From the diagram it's clear $\delta e_i = - \delta b_1$ for all points — raising the intercept increases each fitted $\hat y_i$ by $\delta b_1$ so $e_i = y_i - \hat y_i$ falls by $\delta b_1$. Each $\color{grey}{\text{grey}}$ square has area $(\delta b_1)^2$. Each horizontally-aligned rectangle is as tall as the intercept was shifted, and as wide as the original residual for that point. The corresponding vertically-aligned rectangles are congruent, just rotated by a right angle. So I can line up all the $\color{blue}{\text{blue}}$ rectangles to form a single rectangle as high as the change in intercept and twice as wide as the sum of the positive residuals. The $\color{red}{\text{red}}$ rectangles form a single rectangle just as high, and twice the width of the sum of the (absolute values of the) negative residuals.

Now suppose the original $\hat y = b_1 + b_2 x$ satisfied $\sum_{i=1}^n e_i=0$, which occurs (prove it!) if the line passes through the centroid $(\bar x, \bar y)$. Since the positive and negative residuals cancel out, the blue and red rectangles must too:

$$\color{Gold}{\text{RSS after change in intercept}} = \text{RSS of line through centroid} + \color{grey}{n(\delta b_1)^2}$$

Whatever the gradient of our original line, adjusting the intercept so it avoids the centroid will make the RSS worse, by the area of the grey squares. The least-squares line must therefore pass through the centroid and have $\sum_{i=1}^n e_i=0$. This does not tell us anything about which gradient minimises the RSS, but we can adapt our approach to consider a fixed intercept $b_1$ and a change in slope of $\delta b_2$.
This time, the fitted values $\hat y_i$ rise by $x_i \delta b_2$, so fitted values rise (and residuals fall) by more the further $x_i$ lies to the right: the rectangles do not all have the same height, nor the grey squares the same area. But otherwise the dissection is much as before. We find the area of a $\color{grey}{\text{grey}}$ square is $(x_i \delta b_2)^2$ and the area of a rectangle is proportional (by $\pm \delta b_2$) to $x_i e_i$. For the same reasons as before we want the $\color{blue}{\text{blue}}$ and $\color{red}{\text{red}}$ rectangles to balance out, which would require the positive and negative values of $x_i e_i$ to cancel, i.e. $\sum_{i=1}^n x_i e_i =0$. Totalling,

$$\color{Gold}{\text{RSS after change in slope}} = \text{RSS of line for which }\sum_{i=1}^n x_i e_i \text{ is zero} + \color{grey}{(\delta b_2)^2\sum_{i=1}^n x_i^2}$$

Regardless of the intercept, if we draw a line with a slope such that $\sum_{i=1}^n x_i e_i = 0$, then any changes to the slope will result in an RSS which is worse (higher) by the area of the grey squares. The least-squares line must therefore satisfy $\sum_{i=1}^n x_i e_i = 0$.

The diagrams oversimplify things: I didn't consider cases like a point with positive residual that develops a negative residual after the line sweeps above it, or where $x_i$ was negative. But we can verify the intuition algebraically:

$$\sum_{i=1}^n (e_i+\delta e_i)^2 = \sum_{i=1}^n e_i^2 + 2 \sum_{i=1}^n e_i \delta e_i + \sum_{i=1}^n (\delta e_i)^2 \tag{1}$$

The middle term represents the rectangular areas; note that the red and blue rectangles will take opposite signs since $e_i$ was positive for the blue case and negative for the red.
Writing RSS as a function of $b_1$ and $b_2$,

$$RSS(b_1, b_2) = \sum_{i=1}^n e_i^2 = \sum_{i=1}^n (y_i - b_1 - b_2 x_i)^2 \tag{2}$$

Translating the regression line from $\hat y = b_1 + b_2 x$ to $\hat y = (b_1 + \delta b_1) + b_2 x$ reduces the residuals by the change in the intercept, $\delta e_i = -\delta b_1$, so $(1)$ yields

$$ \begin{align} RSS(b_1 + \delta b_1, b_2) &= RSS(b_1, b_2) + 2 \sum_{i=1}^n e_i (-\delta b_1) + \sum_{i=1}^n (-\delta b_1)^2 \\ RSS(b_1 + \delta b_1, b_2) &= RSS(b_1, b_2) - 2 \delta b_1 \sum_{i=1}^n e_i + n(\delta b_1)^2 \tag{3} \end{align} $$

Switching from $\hat y = b_1 + b_2 x$ to $\hat y = b_1 + (b_2 + \delta b_2) x$ gives $\delta e_i = -x_i \delta b_2$, and $(1)$ yields

$$ \begin{align} RSS(b_1, b_2 + \delta b_2) &= RSS(b_1, b_2) + 2 \sum_{i=1}^n e_i (-x_i \delta b_2) + \sum_{i=1}^n (-x_i \delta b_2)^2 \\ RSS(b_1, b_2 + \delta b_2) &= RSS(b_1, b_2) - 2 \delta b_2 \sum_{i=1}^n x_i e_i + (\delta b_2)^2 \sum_{i=1}^n x_i^2 \tag{4} \end{align} $$

The argument can then proceed as before. Note that $(2)$ shows RSS is a quadratic function of both slope and intercept, so the expansions $(3)$ and $(4)$ could alternatively be derived by using the Taylor expansion

$$f(a+h) = f(a) + f'(a) h + \frac{1}{2} f''(a) h^2$$

with the partial derivatives

$$\frac{\partial RSS}{\partial b_1}= -2 \sum_{i=1}^n e_i, \quad \frac{\partial^2 RSS}{\partial b_1^2}=2n, \quad \frac{\partial RSS}{\partial b_2}=-2\sum_{i=1}^n x_i e_i, \quad \frac{\partial^2 RSS}{\partial b_2^2}= 2\sum_{i=1}^n x_i^2 $$

The $h$ represents our change $\delta b_1$ or $\delta b_2$. Note that setting the coefficient of the linear term in the change to zero (which we did by getting the red and blue rectangles to cancel out) is equivalent to putting $f'(a)=0$, i.e. ensuring the point we are expanding about satisfies the first order conditions to be a turning point.
Checking that the coefficient of the quadratic term in the change is positive is equivalent to verifying $f''(a)>0$, which is the second order condition for this turning point to be a minimum: this is equivalent to our argument that, because the grey squares were being added on, the RSS was lower before the change.

Note that we were considering the residuals, $e_i = y_i - \hat y_i$, and not the errors, $\varepsilon_i = y_i - (\beta_1 + \beta_2 x_i)$ (where the betas are the "true" population regression parameters). Both residuals and errors are stochastic, in that if we re-sampled we would get a whole new bunch of random errors, a new OLS regression estimate, and a whole new set of residuals (we'd be fitting a different line to different points). The total of the residuals (as measured from the new OLS line) would still be zero. There's no restriction on the sum of the error terms, though. How could there be, remembering that the errors are generally assumed to be independent (or at least uncorrelated)? However, since the expected value of each error term is zero, the expected value of their sum is zero.

The arguments above should indicate that the sum of residuals would no longer be guaranteed to be zero if the intercept is removed from the model. Nor need the residuals sum to zero if the line was fitted by a method other than OLS. A vivid example of that is provided by Least Absolute Deviations: when moving an outlier up and down, you'll find (try it) that the LAD regression line remains "latched" to other points, and won't budge to take account of this change. The fact you can vary one residual while the others stay constant illustrates very dramatically that the sum of residuals is not invariant.
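The two identities the whole argument turns on, $\sum_{i=1}^n e_i = 0$ and $\sum_{i=1}^n x_i e_i = 0$, are easy to confirm numerically for an OLS fit with an intercept. A small Python sketch (standard library only; the data are made up):

```python
# Fit y = b1 + b2*x by ordinary least squares on some made-up data,
# then check that the residuals satisfy the two normal-equation identities.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.1, 9.7, 12.3]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n
b2 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b1 = ybar - b2 * xbar  # so the fitted line passes through the centroid

e = [yi - (b1 + b2 * xi) for xi, yi in zip(x, y)]
print(sum(e))                                 # ~0: residuals sum to zero
print(sum(xi * ei for xi, ei in zip(x, e)))   # ~0: residuals orthogonal to x
```

Both sums come out as zero up to floating-point rounding, exactly as the normal equations require; removing the intercept (or fitting by a different criterion) would break the first identity.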
Why does the sum of residuals equal 0 from a graphical perspective?
One geometrical answer that might or might not resonate with you is that two legs of any right triangle are perpendicular. It applies here because the "constant" or "intercept" in the regression results in projecting the response $y$ onto the vector $\hat{y}=\bar{y}(1,1,\ldots,1)$, which is one leg, and the residuals are the other leg $y-\hat{y} = (e_1, e_2, \ldots, e_n)$.

This picture shows a response $y$ regressed against $x_1 = (1,1,\ldots,1)$. The coefficient is $\alpha = \bar{y}$ and the residual is $y_{\cdot 1}$. It is geometrically obvious that $y_{\cdot 1}$ and $x_1$ are perpendicular. Their perpendicularity, expressed in terms of the dot product, says the sum of the residuals is zero, because

$$\eqalign{ 0 &= y_{\cdot 1} \cdot x_1 = (e_1, e_2, \ldots, e_n)\cdot (1, 1, \ldots, 1) = e_1(1) + e_2(1) + \cdots + e_n(1) \\ &= e_1+e_2+\cdots+e_n.}$$

Notice:

1. This is exact, not an expectation. After all, it's just geometry! We did not need to make any probabilistic assumptions about $y$.
2. The result requires that the regression include a constant.
3. The result is true for all multiple regressions that span a constant, because they can be carried out by first regressing $y$ against the constant and then performing a multiple regression of its residuals.

The image is taken from a longer document I have posted on how to control for variables. It elaborates on point (3).
Why does the sum of residuals equal 0 from a graphical perspective?
You must keep in mind the difference between the true values of the parameters and your estimates of them. Graphically, you can think of the situation as the true line and the estimated line. The usual way of obtaining the estimated line from the data just happens to correspond to the sum of residuals being zero. The residuals are deviations from the estimated line and the errors are deviations from the true line; the sum of errors doesn't necessarily equal zero, but that is its expected value. So if you knew the true line and generated observations from it, you would see that the expected value of the distribution of the sum of errors is zero, i.e. you would see the sum of errors distributed around zero if you ran multiple simulations. All of linear regression theory is predicated on the true line existing.
How to check for the distribution stability?
First, about measuring the fit. The Kolmogorov–Smirnov test is for a one-dimensional distribution. Though it has been extended to multivariate data, it wasn't designed for time series. I'm not sure how you use your time series data. If you are interested in just the probability of an event happening, you can use the test. However, note that you will lose all information like "Event B always comes after event A", and that might be where the gold is.

Going a step backwards, you said that you used your old data as the training set and the new data as your test set. You can split your data in different ways and avoid the problem in the first place. If you are interested in predicting future behavior, you can split each time series into past (train) and future (test). Note that you can choose a different split point for each series. This way you can train on data that is now considered to be your future and still get a valid estimate.

Sometimes it is required that all the series be split at the same point in time. In this case, you might consider creating a few data sets (e.g., one ending in January, one ending in February, etc.). The advantage here is that you will be able to estimate how good your model is as time changes. Note that while it is likely that the underlying distribution will change, your model is probably looking at a narrower aspect and might be more robust.

You might be coping with a problem of concept drift (or time-related domain adaptation). Reading some surveys on these topics might give you some useful ideas.
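The per-series past/future split described above can be sketched like this (illustrative names and data; assuming each series is simply ordered by time). Each series may get its own cutoff.

```python
# Split a time series into past (train) and future (test) at a chosen cutoff.
def split_series(series, cut_fraction=0.8):
    """Return (train, test) where train is the earlier part of the series."""
    cut = int(len(series) * cut_fraction)
    return series[:cut], series[cut:]

series_a = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
series_b = [2, 7, 1, 8, 2, 8]

train_a, test_a = split_series(series_a)        # cut at 8 of 10 points
train_b, test_b = split_series(series_b, 0.5)   # a different cut point is fine
print(len(train_a), len(test_a))  # 8 2
print(len(train_b), len(test_b))  # 3 3
```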
How to check for the distribution stability?
In the limit as the quality of your random sampling method approaches perfect randomness, and as the number of samples approaches infinity, the distributions of the training and testing samples will become identical. But after considering your comment to dsaxton, it appears that you are in a special case where you are dealing with time series problems.

Usually, in common learning tasks where the time of sample arrival is ignored, it is implied that all samples occur at the same time. Therefore, the PDF of the training samples is assumed to be the closest estimate of the PDF of the testing set (since they all occur at the same time). In this case, random sampling is your friend.

But since you are not making the simplistic assumption above, and instead are acknowledging the fact that testing instances necessarily appear after the training samples (which is a more realistic assumption), it is part of your time series problem that your model must deal with the fact that the PDF of the arriving samples changes as a function of time.

Therefore, when dealing with time series problems, you must not eliminate the shift/change in PDF as time passes across the training and testing sample sets. Instead, take it as a challenge to identify how well your model adapts to the fact that the PDF is shifting/changing over time. If you eliminate that challenge from the evaluation by constructing training and testing sets that maintain the same PDF (despite the time shift), then you are essentially performing an evaluation that does not show how useful your prediction model is for time series problems. Alternatively, you can consider that time series problems are a special case of domain adaptation problems, where the domain variation is caused by variation in time.

So in summary the answer is: you must not ensure identical/similar PDFs between training and testing samples, because it is a primary objective of your model to adapt to the fact that the PDF is shifting over time.

Q: In your opinion, could I use the K-S two-sample test?
A: No.

Q: Alternatively, can you suggest some other statistical measure or test better than that one to check for the distribution stability?
A: Yes. Do nothing about it. If you wish for more samples to better identify the PDF shift as a function of time, then that is another problem (where you can find out whether your model needs fewer or more training samples compared to other models).
How to check for the distribution stability?
If you selected training and test sets randomly from your data, you shouldn't have to worry about equal distributions. More importantly, you test your model, which you generated from the training set, on the test set and by doing this you verify that you can use your model to predict other values than the ones from the training set. If this works, you won't have to worry about the sets being equally distributed.
How to check for the distribution stability?
It looks like you're generally asking whether your time series is stationary. The other answers discuss great ways to handle time series while doing machine learning. Separately from the ML lens, if you just want to understand the stationarity of your data, you can use the Augmented Dickey-Fuller (ADF) test or the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test. Both of these tests are implemented in the Python statsmodels package, though it should be noted that these tests shouldn't be directly interchanged, as they test opposite hypotheses. This blog post gives a great explanation.
Oscillating validation accuracy for a convolutional neural network?
This is likely due to the ordering of your dataset. If there are many observations of the same class in a sequence, the weights of the network will move too far in the direction of classifying that class. A common cause is balancing the classes in your dataset by resampling observations and appending them to the dataset. Shuffle your dataset; that should help you avoid the fluctuations in accuracy (and perhaps obtain a higher accuracy overall).
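Shuffling features and labels with the same permutation can be done like this (a minimal numpy sketch, not tied to any particular framework):

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.arange(10).reshape(5, 2)   # 5 observations, 2 features
y = np.array([0, 0, 0, 1, 1])     # class-sorted labels: a bad ordering for SGD

perm = rng.permutation(len(y))    # one permutation applied to both arrays
X_shuffled, y_shuffled = X[perm], y[perm]

# Rows still line up with their labels after shuffling.
print(y_shuffled)
```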
Oscillating validation accuracy for a convolutional neural network?
I had the same issue in the past and found out that the learning rate usually is the cause of oscillation. Try lowering your learning rate or using learning rate decay and keep training until the curve converges.
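A common form of learning rate decay is an exponential schedule, sketched here in plain Python (exact schedules and their names vary by framework; this is just the idea):

```python
def exponential_decay(initial_lr, decay_rate, epoch):
    """lr(t) = lr0 * decay_rate^t, shrinking the step size each epoch."""
    return initial_lr * decay_rate ** epoch

lrs = [exponential_decay(0.1, 0.9, e) for e in range(10)]
print([round(lr, 4) for lr in lrs])  # 0.1, 0.09, 0.081, ...
```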
Oscillating validation accuracy for a convolutional neural network?
Probably the learning rate is too high. The system is overfitting almost immediately, as the accuracy is falling overall after the first epoch. If you want to find the sweet spot using early stopping, you surely need a lower learning rate to widen your choice. In addition, as suggested in other answers, I would use learning rate scheduling. Moreover, you may have a look at the size of your gradients: exploding gradients may cause this kind of oscillation, and then gradient clipping may be useful.
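Gradient clipping by global norm looks like this (a framework-free numpy sketch of the idea; deep learning libraries ship their own built-in versions). All gradients are rescaled together so that their joint L2 norm never exceeds a chosen maximum.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale all gradients together so their joint L2 norm <= max_norm."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

grads = [np.array([3.0, 4.0]), np.array([12.0])]  # global norm = 13
clipped = clip_by_global_norm(grads, max_norm=1.0)
print(np.sqrt(sum(np.sum(g ** 2) for g in clipped)))  # 1.0
```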