How to compare 2 non-stationary time series to determine a correlation?
This answer contained some graphics
Bootstrapping estimates of out-of-sample error
The short answer, if I understand the questions, is "no". Out-of-sample error is out of your sample, and no bootstrapping or other analytical effort with your sample can calculate it.

In answer to your comment on whether the bootstrap can be used in checking a model with data outside a training set, there are two possible interpretations.

It would be fine, and absolutely standard, to fit a model on your training set with traditional methods and then use bootstrapping on the training set to check things like the distribution of your estimators. Then use your final model from that training set to test against the test set.

It would also be possible to do a bootstrap-like procedure that involves a loop around:

1. selecting a subset of the whole sample as your training set;
2. fitting a model to that training set of the data;
3. comparing that model to the test set of the remaining data, and generating some kind of test statistic that says how well the model from the training set does against the test set;

and then considering the results of doing that many times. Certainly, it would give you some insight into the robustness of your train/test process. It would reassure you that the particular model you got was not just due to the chance of what ended up in the test set in your one split. However, and it's difficult to say exactly why, there seems to me to be a philosophical clash between the idea of a testing/training division and the bootstrap. Perhaps if I didn't think of it as a bootstrap, but just as a robustness test of the train/test process, it would be ok...
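The loop described above can be sketched in R. This is a minimal illustration with simulated data, not code from the answer; the model, split fraction, and number of repeats are arbitrary choices:

```r
# Repeated random train/test splits: fit on a random subset, score on the
# held-out remainder, and look at the distribution of the test statistic.
set.seed(1)
n <- 200
d <- data.frame(x = rnorm(n))
d$y <- 2 * d$x + rnorm(n)

one_split <- function(d, train_frac = 0.7) {
  idx <- sample(nrow(d), size = floor(train_frac * nrow(d)))
  fit <- lm(y ~ x, data = d[idx, ])          # model from this training set
  test <- d[-idx, ]
  mean((test$y - predict(fit, newdata = test))^2)  # test-set MSE
}

test_mse <- replicate(500, one_split(d))
summary(test_mse)  # the spread shows how much a single split's result is luck
```

The distribution of `test_mse` is exactly the "robustness of the train/test process" the answer is describing: a tight distribution means the single-split result was not a fluke.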
Bootstrapping estimates of out-of-sample error
This calls for the standard Efron-Gong "optimism" bootstrap. In R you can do this:

require(rms)
# Allow age to interact with sex, and age and BP to have nonlinear effects
# using restricted cubic splines (5 and 4 knots)
f <- ols(y ~ rcs(age,5)*sex + rcs(blood.pressure,4), x=TRUE, y=TRUE)
validate(f, B=300)

This will give you the bootstrap overfitting-corrected estimate of $R^2$, MSE, and other indexes. To get a bootstrap overfitting-corrected calibration curve (an estimate of the relationship between $\hat{Y}$ and $Y$), run plot(calibrate(f, B=300)). This type of bootstrap estimates the likely future performance of the final model on new subjects from the same "stream" of subjects. Some observations are duplicated, triplicated, etc., and the "training" and "test" datasets overlap during the bootstrap. The bootstrap provides a highly competitive estimate of future performance, along the lines of 100 repeats of 10-fold cross-validation.
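For intuition, here is a hand-rolled sketch of the optimism idea, in base R with simulated data. It is a simplification of what validate() does, not the rms internals:

```r
# Efron-Gong optimism bootstrap for R^2 (simplified sketch):
# optimism = average over resamples of (apparent fit on the resample
# minus fit of that resample's model evaluated on the original data).
set.seed(2)
n <- 100
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$y <- dat$x1 + 0.5 * dat$x2 + rnorm(n)

rsq <- function(fit, newdata) {
  pred <- predict(fit, newdata = newdata)
  1 - sum((newdata$y - pred)^2) / sum((newdata$y - mean(newdata$y))^2)
}

full_fit <- lm(y ~ x1 + x2, data = dat)
apparent <- rsq(full_fit, dat)             # naive, in-sample R^2

B <- 200
optimism <- replicate(B, {
  boot <- dat[sample(n, replace = TRUE), ] # resample with replacement
  bfit <- lm(y ~ x1 + x2, data = boot)
  rsq(bfit, boot) - rsq(bfit, dat)         # apparent minus "test" performance
})
corrected <- apparent - mean(optimism)     # overfitting-corrected R^2
```

The corrected value is always a shrunken version of the apparent one, which is the sense in which the bootstrap penalizes overfitting.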
Bootstrapping estimates of out-of-sample error
The bootstrap is neither an in-sample nor an out-of-sample test. Consider the bootstrap logic:

1. a statistic is computed in the original sample;
2. a resample is constructed by sampling from the sample with replacement (this resample is considered to be a possible sample from the same population);
3. the same statistic is computed in the resample;
4. steps 2 and 3 are repeated, and the distribution of the obtained statistics is then used to construct a confidence interval.

Now translate this to the notion of out-of-sample testing, where you estimate a prediction model based on the original sample and then test it out-of-sample. The out-of-sample sample should be any sample other than the original sample drawn from the same population. Resampling with replacement provides you with such a sample, or indeed many such samples should you so wish. Now you can use the estimates from your original prediction model to predict outcomes in the new resample(s). You can then compute a model-fit statistic to see whether these predictions explain a similar share of the variation in the original sample and in all of the resamples. If all results are comparably similar, then overfitting is not an issue. If the results in the resamples are (significantly) worse than the model fit in the original sample, then you have evidence of overfitting. When comparing different training models, you can select the model with the best (average) model fit in the resamples. More advanced strategies involve the variance of the model fit, but in my opinion they add little. Best wishes
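The procedure described above can be sketched in R. This is an illustrative toy example, not code from the answer; the linear model and the R² fit statistic are arbitrary choices:

```r
# Fit the model once on the original sample, then score its predictions
# on bootstrap resamples and compare resample fit to in-sample fit.
set.seed(3)
n <- 150
d <- data.frame(x = rnorm(n))
d$y <- 1 + 2 * d$x + rnorm(n)

fit <- lm(y ~ x, data = d)                 # model from the original sample
r2 <- function(obs, pred) 1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2)
in_sample <- r2(d$y, predict(fit))

resample_r2 <- replicate(300, {
  rs <- d[sample(n, replace = TRUE), ]     # pseudo "out-of-sample" sample
  r2(rs$y, predict(fit, newdata = rs))     # original coefficients, new data
})
# Comparable values suggest little overfitting; a clearly worse resample
# fit would be evidence of overfitting.
c(in_sample = in_sample, resample_mean = mean(resample_r2))
```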
Probability of an event that is not measurable
As I stated in the comments, how to deal with these types of events (non-measurable sets) is described in the book Weak Convergence and Empirical Processes by A. W. van der Vaart and J. A. Wellner. You can browse the first few pages. The solution is quite simple: approximate them with measurable sets. So suppose we have a probability space $(\Omega,\mathcal{A},P)$. For any set $B$, define the outer probability (it is on page 6 of the book): $$P^*(B)=\inf\{P(A) : B\subset A,\ A\in \mathcal{A}\}$$ It turns out that you can build a very fruitful theory with this sort of definition.
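For instance, two basic properties follow directly from this definition and are the starting point of that theory (a sketch, not from the answer). Monotonicity holds because any measurable cover of $B_2$ also covers $B_1 \subset B_2$: $$ B_1 \subset B_2 \implies P^*(B_1) \le P^*(B_2) . $$ Countable subadditivity follows by choosing, for each $n$, a measurable $A_n \supset B_n$ with $P(A_n) \le P^*(B_n) + \varepsilon 2^{-n}$, so that $\bigcup_n A_n$ is a measurable cover of $\bigcup_n B_n$: $$ P^*\Big(\bigcup_n B_n\Big) \le P\Big(\bigcup_n A_n\Big) \le \sum_n P^*(B_n) + \varepsilon , $$ and $\varepsilon > 0$ was arbitrary.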
Probability of an event that is not measurable
Edit: In light of cardinal's comment, all I say below is implicitly about the Lebesgue measure (a complete measure). Rereading your question, it seems that that is also what you are asking about. In the general Borel measure case, it might be possible to extend the measure to include your set (something which is not possible with the Lebesgue measure, because it is already as big as it can be).

The probability of such an event would not be defined. Period. Much as a real-valued function is not defined for a (non-real) complex number, a probability measure is defined on measurable sets but not on non-measurable sets.

So what statements could we make about such an event? Well, for starters, such an event would have to be defined using the axiom of choice. This means that all sets which we can describe by some rule are excluded, i.e., all the sets we are generally interested in are excluded.

But couldn't we say something about the probability of a non-measurable event? Put a bound on it, or something? The Banach-Tarski paradox shows that this will not work. If the measure of the finite number of pieces that Banach-Tarski decomposes the sphere into had an upper bound (say, the measure of the sphere), then by constructing enough spheres we would run into a contradiction. By a similar argument backwards, we see that the pieces cannot have a non-trivial lower bound. I haven't shown that all non-measurable sets are this problematic, although I believe that a cleverer person than I should be able to come up with an argument showing that we cannot in any consistent way put a non-trivial bound on the "measure" of any non-measurable set (challenge to the community).

In summary: we cannot make any statement about the probability measure of such a set, but this is not the end of the world, because all relevant sets are measurable.
Probability of an event that is not measurable
There are already good answers, but let me contribute with another point. The Lebesgue measure is often considered on the Lebesgue $\sigma$-algebra, which is complete, and, as already pointed out, we need the axiom of choice to establish Lebesgue non-measurable sets. In general probability theory, and, in particular, in relation to stochastic processes, it is far from obvious that you can make a relevant completion of the $\sigma$-algebra, and non-measurable events are less exotic. In some sense, the gap between the Borel $\sigma$-algebra and the Lebesgue $\sigma$-algebra on $\mathbb{R}$ is more interesting than the exotic sets not in the Lebesgue $\sigma$-algebra. The problem that I mostly see, that is related to the question, is that a set (or a function) may not be obviously measurable. In some cases you can prove that it actually is, but it may be difficult, and in other cases you can only prove that it is measurable when you extend the $\sigma$-algebra by the null sets of some measure. To investigate the extensions of Borel $\sigma$-algebras on topological spaces you will often encounter so-called Souslin sets or analytic sets, which need not be Borel measurable.
Randomized trace technique
NB: The stated result does not depend on any assumption of normality or even independence of the coordinates of $\newcommand{\x}{\mathbf{x}}\newcommand{\e}{\mathbb{E}}\newcommand{\tr}{\mathbf{tr}}\newcommand{\A}{\mathbf{A}}\x$. It does not depend on $\A$ being positive definite, either. Indeed, suppose only that the coordinates of $\x$ have zero mean, unit variance, and are uncorrelated (but not necessarily independent); that is, $\e \x_i = 0$, $\e \x_i^2 = 1$, and $\e \x_i \x_j = 0$ for all $i \neq j$.

Bare-hands approach

Let $\A = (a_{ij})$ be an arbitrary $n \times n$ matrix. By definition, $\tr(\A) = \sum_{i=1}^n a_{ii}$. Then, $$ \tr(\A) = \sum_{i=1}^n a_{ii} = \sum_{i=1}^n a_{ii} \e \x_i^2 = \sum_{i=1}^n a_{ii} \e \x_i^2 + \sum_{i\neq j} a_{ij} \e \x_i \x_j , $$ and so we are done. In case that's not quite obvious, note that the right-hand side, by linearity of expectation, is $$ \sum_{i=1}^n a_{ii} \e \x_i^2 + \sum_{i\neq j} a_{ij} \e \x_i \x_j = \e\Big(\sum_{i=1}^n \sum_{j=1}^n a_{ij} \x_i \x_j \Big) = \e(\x^T \A \x) . $$

Proof via trace properties

There is another way to write this that is suggestive, but relies, conceptually, on slightly more advanced tools. We need that both expectation and the trace operator are linear and that, for any two matrices $\A$ and $\newcommand{\B}{\mathbf{B}}\B$ of appropriate dimensions, $\tr(\A\B) = \tr(\B\A)$. Then, since $\x^T \A \x = \tr(\x^T \A \x)$, we have $$ \e(\x^T \A \x) = \e( \tr(\x^T \A \x) ) = \e( \tr(\A \x \x^T) ) = \tr( \e( \A \x \x^T ) ) = \tr( \A \, \e \x \x^T ), $$ and so, since the assumptions give $\e \x \x^T = \mathbf{I}$, $$ \e(\x^T \A \x) = \tr(\A \mathbf{I}) = \tr(\A) . $$

Quadratic forms, inner products and ellipsoids

If $\A$ is positive definite, then an inner product on $\mathbf{R}^n$ can be defined via $\langle \x, \mathbf{y} \rangle_{\A} = \x^T \A \mathbf{y}$, and $\mathcal{E}_{\A} = \{\x: \x^T \A \x = 1\}$ defines an ellipsoid in $\mathbf{R}^n$ centered at the origin.
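A quick Monte Carlo check of the result in R, using Rademacher coordinates (zero mean, unit variance, uncorrelated, and decidedly non-normal) and a non-symmetric, non-definite matrix. An illustrative sketch, not from the answer:

```r
# Check E[x' A x] = tr(A) without normality or positive definiteness.
set.seed(4)
n <- 5
A <- matrix(rnorm(n * n), n, n)             # arbitrary square matrix

quad <- replicate(20000, {
  x <- sample(c(-1, 1), n, replace = TRUE)  # Rademacher: E x_i = 0, E x_i^2 = 1
  as.numeric(t(x) %*% A %*% x)
})
c(mc_estimate = mean(quad), trace = sum(diag(A)))  # the two should agree
```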
Randomized trace technique
If $A$ is symmetric positive definite, then $A = U^T D U$ with $U$ orthonormal and $D$ diagonal with the eigenvalues on the diagonal. Since $x$ has identity covariance matrix and $U$ is orthonormal, $Ux$ also has identity covariance matrix. Hence, writing $y = Ux$, we have $E[x^T A x] = E[y^T D y]$. Since the expectation operator is linear, this is just $\sum_{i=1}^n \lambda_i E[y_i^2]$. Each $y_i^2$ is chi-square with 1 degree of freedom, so has expected value 1. Hence the expectation is the sum of the eigenvalues.

Geometrically, symmetric positive definite matrices $A$ are in 1-1 correspondence with ellipsoids, given by the equation $x^T A x = 1$. The lengths of the ellipsoid's axes are given by $1/\sqrt{\lambda_i}$, where the $\lambda_i$ are the eigenvalues. When $A = C^{-1}$, where $C$ is the covariance matrix, $x^T A x$ is the square of the Mahalanobis distance.
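A one-line numerical check of the key fact used above, that the trace of a symmetric matrix equals the sum of its eigenvalues (illustrative R, not from the answer):

```r
# trace(A) == sum of eigenvalues, so E[x' A x] = tr(A) = sum(lambda_i).
set.seed(5)
M <- matrix(rnorm(16), 4, 4)
A <- crossprod(M) + diag(4)        # symmetric positive definite
ev <- eigen(A, symmetric = TRUE)$values
c(trace = sum(diag(A)), eigen_sum = sum(ev))
```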
Randomized trace technique
Let me address the "what is its practical importance" part of the question. There are many situations in which we can compute matrix-vector products $Ax$ efficiently even if we don't have a stored copy of the matrix $A$, or don't have enough storage to save a copy of $A$. For example, $A$ might be of size 100,000 by 100,000 and fully dense: it would require 80 gigabytes of RAM to store such a matrix in double-precision floating-point format. Randomized algorithms like this can be used to estimate the trace of $A$ or (using a related algorithm) individual diagonal entries of $A$. Some applications of this technique to large-scale geophysical inversion problems are discussed in:

J. K. MacCarthy, B. Borchers, and R. C. Aster. Efficient stochastic estimation of the model resolution matrix diagonal and generalized cross-validation for large geophysical inverse problems. Journal of Geophysical Research, 116, B10304, 2011.
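A sketch of such a randomized (Hutchinson-style) trace estimator in R, where matvec stands in for a matrix-free operator; the matrix here is small only so the exact trace can be checked, and the names are my own, not from the paper:

```r
# Estimate tr(A) using only matrix-vector products z -> A z.
set.seed(6)
n <- 300
A <- crossprod(matrix(rnorm(n * n) / sqrt(n), n, n))
matvec <- function(v) A %*% v     # stand-in for an operator with no stored A

est_trace <- function(matvec, n, nsamples = 500) {
  mean(replicate(nsamples, {
    z <- sample(c(-1, 1), n, replace = TRUE)  # Rademacher probe vector
    as.numeric(crossprod(z, matvec(z)))       # z' A z, unbiased for tr(A)
  }))
}

est <- est_trace(matvec, n)
exact <- sum(diag(A))
c(estimate = est, exact = exact)
```

The estimator never touches individual entries of $A$, only products $Az$, which is exactly the situation described above.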
What is the power of the regression F test?
Here $\delta^{2}$ is the noncentrality parameter, $P_{r}$ is the projection onto the restricted model subspace, $\beta$ is the vector of true parameters, $X$ is the design matrix for the unrestricted (true) model, and $|| \cdot ||$ is the Euclidean norm: $$ \delta^{2} = \frac{|| X \beta - P_{r} X \beta ||^{2}}{\sigma^{2}} $$ You can read the formula like this: $E(y | X) = X \beta$ is the vector of expected values conditional on the design matrix $X$. If you treat $X \beta$ as an empirical data vector $y$, then its projection onto the restricted model subspace is $P_{r} X \beta$, which gives you the prediction $\hat{y}$ from the restricted model for that "data". Consequently, $X \beta - P_{r} X \beta$ is analogous to $y - \hat{y}$ and gives you the error of that prediction. Hence $|| X \beta - P_{r} X \beta ||^{2}$ gives the sum of squares of that error. If the restricted model is true, then $X \beta$ already lies within the subspace defined by $X_{r}$, and $P_{r} X \beta = X \beta$, so that the noncentrality parameter is $0$. You should find this in Mardia, Kent & Bibby (1980), Multivariate Analysis.
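Given $\delta^{2}$, the power of the F test follows from the noncentral F distribution; a small R sketch (the function name and the example numbers are my own, not from the answer):

```r
# Power of the regression F test: probability that the F statistic exceeds
# the central-F critical value when it actually follows a noncentral F.
power_F <- function(delta2, df1, df2, alpha = 0.05) {
  crit <- qf(1 - alpha, df1, df2)          # critical value under H0
  1 - pf(crit, df1, df2, ncp = delta2)     # P(F > crit) under the alternative
}

power_F(delta2 = 10, df1 = 3, df2 = 96)    # example: 3 tested terms, n - p = 96
```

With $\delta^{2} = 0$ (restricted model true) this reduces to $\alpha$, as it should, and power increases monotonically in $\delta^{2}$.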
What is the power of the regression F test?
I confirmed @caracal's answer with a Monte Carlo experiment. I generated random instances from a linear model (with random size), computed the F-statistic, and computed the p-value using the noncentrality parameter $$ \delta^2 = \frac{||X\beta_1 - X\beta_2||^2}{\sigma^2}. $$ Then I plotted the empirical cdf of these p-values. If the noncentrality parameter (and the code!) is correct, I should get a near-uniform cdf, which is the case. Here is the R code (pardon the style, I'm still learning):

#sum of squares
sum2 <- function(x) {
  return(sum(x * x))
}

#random integer between n and 2n
rint <- function(n) {
  return(ceiling(runif(1, min = n, max = 2 * n)))
}

#generate a random instance from a linear model plus noise:
#n observations of a p2-vector;
#regress against all variables and against a subset of p1 of them;
#compute the F-statistic for the test of the p2 - p1 marginal variables;
#compute the p-value under the putative noncentrality parameter
gend <- function(n, p1, p2, sig = 1) {
  beta2 <- matrix(rnorm(p2, sd = 0.1), nrow = p2)
  beta1 <- matrix(beta2[1:p1], nrow = p1)
  X <- matrix(rnorm(n * p2), nrow = n, ncol = p2)
  yt1 <- X[, 1:p1] %*% beta1
  yt2 <- X %*% beta2
  y <- yt2 + matrix(rnorm(n, mean = 0, sd = sig), nrow = n)
  ncp <- sum2(yt2 - yt1) / (sig ^ 2)
  bhat2 <- lm(y ~ X - 1)
  bhat1 <- lm(y ~ X[, 1:p1] - 1)
  SSE1 <- sum2(bhat1$residual)
  SSE2 <- sum2(bhat2$residual)
  df1 <- bhat1$df.residual
  df2 <- bhat2$df.residual
  Fstat <- ((SSE1 - SSE2) / (df1 - df2)) / (SSE2 / df2)
  pval <- pf(Fstat, df1 = df1 - df2, df2 = df2, ncp = ncp)
  return(pval)
}

#call the above function, but randomize the problem size (within reason)
genr <- function(n, p1, p2, sig = 1) {
  use.p1 <- rint(p1)
  use.p2 <- use.p1 + rint(p2 - p1)
  return(gend(n = rint(n), p1 = use.p1, p2 = use.p2, sig = sig + runif(1)))
}

ntrial <- 4096
ssize <- 256
z <- replicate(ntrial, genr(ssize, p1 = 4, p2 = 10))
plot(ecdf(z))
25,613
What is the median of a non-central t distribution?
You can approximate it. For example, I made the following nonlinear fits for $\nu$ (degrees of freedom) from 1 through 20 and $\delta$ (noncentrality parameter) from 0 through 5 (in steps of 1/2). Let $$a(\nu) = 0.963158 + \frac{0.051726}{\nu-0.705428} + 0.0112409\log(\nu),$$ $$b(\nu) = -0.0214885+\frac{0.406419}{0.659586 +\nu}+0.00531844 \log(\nu),$$ and $$g(\nu, \delta) = \delta + a(\nu) \exp(b(\nu) \delta) - 1.$$ Then $g$ estimates the median to within 0.15 for $\nu=1$, 0.03 for $\nu=2$, 0.015 for $\nu=3$, and 0.007 for $\nu = 4, 5, \ldots, 20$.

The estimation was done by computing the values of $a$ and $b$ for each value of $\nu$ from 1 through 20 and then separately fitting $a$ and $b$ to $\nu$. I examined plots of $a$ and $b$ to determine an appropriate functional form for these fits. You can do better by focusing on the intervals of these parameters of interest to you. In particular, if you're not interested in really small values of $\nu$ you could easily improve these estimates, likely to within 0.005 consistently.

Here are plots of the median versus $\delta$ for $\nu=1$, the hardest case, and the negative residuals (true median minus approximate value) versus $\delta$. The residuals are truly small compared to the medians.

BTW, for all but the smallest degrees of freedom the median is close to the noncentrality parameter. Here's a graph of the median, for $\delta$ from 0 to 5 and $\nu$ (treated as a real parameter) from 1 to 20. For many purposes using $\delta$ to estimate the median might be good enough. Here is a plot of the error (relative to $\delta$) made by assuming the median equals $\delta$ (for $\nu$ from 2 through 20).
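The fitted approximation above is easy to check numerically against the exact median, i.e. the 0.5 quantile of the noncentral t. A Python sketch with SciPy, assuming the coefficients above are transcribed correctly; a tolerance looser than the quoted 0.007 bound is used to be safe:

```python
import math
from scipy import stats

def a(nu):
    return 0.963158 + 0.051726 / (nu - 0.705428) + 0.0112409 * math.log(nu)

def b(nu):
    return -0.0214885 + 0.406419 / (0.659586 + nu) + 0.00531844 * math.log(nu)

def g(nu, delta):
    """Approximate median of the noncentral t with df nu and noncentrality delta."""
    return delta + a(nu) * math.exp(b(nu) * delta) - 1

# Absolute errors against scipy's exact noncentral-t median, for nu >= 4.
errs = [abs(g(nu, d) - stats.nct.ppf(0.5, nu, d))
        for nu in (4, 10, 20) for d in (0, 1, 3, 5)]
```

At $\delta = 0$ the approximation reduces to $a(\nu) - 1$, a small negative number, while the true median is exactly 0, so the error there directly shows the quality of the fit of $a$.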
25,614
What is the median of a non-central t distribution?
If you are interested in (degrees of freedom) ν > 2, the following asymptotic expression [derived from an interpolative approximation to the noncentral student-t quantile, DL Bartley, Ann. Occup. Hyg., Vol. 52, 2008] is sufficiently accurate for many purposes: Median[ t[δ,ν] ] ~ δ(1 + 1/(3ν)). With ν > 2, the maximum magnitude of the bias of the above expression relative to the noncentral student-t median is about 2% and falls off quickly with increasing ν. The contour diagram shows the bias of the asymptotic approximation relative to the noncentral student-t median:
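The asymptotic expression is straightforward to verify against the exact median (the 0.5 quantile of the noncentral t). A Python sketch with SciPy; the 3% tolerance used here is slightly looser than the quoted 2% bias bound:

```python
from scipy import stats

def bartley_median(delta, nu):
    """Asymptotic approximation to the noncentral-t median, valid for nu > 2."""
    return delta * (1 + 1 / (3 * nu))

# Relative errors against the exact median over a small grid of (nu, delta).
rel_errs = []
for nu in (3, 5, 10, 20):
    for delta in (1, 2, 4):
        exact = stats.nct.ppf(0.5, nu, delta)   # exact median via the quantile fn
        rel_errs.append(abs(bartley_median(delta, nu) - exact) / exact)
```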
25,615
Using Singular Value Decomposition to Compute Variance Covariance Matrix from linear regression model
First, recall that under assumptions of multivariate normality of the linear-regression model, we have that $$ \hat{\beta} \sim \mathcal{N}( \beta, \sigma^2 (X^T X)^{-1} ) . $$

Now, if $X = U D V^T$, where the right-hand side is the SVD of $X$, then we get that $X^T X = V D U^T U D V^T = V D^2 V^T$. Hence, $$ (X^T X)^{-1} = V D^{-2} V^T . $$

We're still missing the estimate of the variance, which is $$ \hat{\sigma}^2 = \frac{1}{n - p} (y^T y - \hat{\beta}^T X^T y) . $$

Though I haven't checked, hopefully vcov returns $\hat{\sigma}^2 V D^{-2} V^T$.

Note: You wrote $V D^2 V^T$, which is $X^T X$, but we need the inverse for the variance-covariance matrix.

Also note that in R, with d the vector of singular values returned by svd(), to do this computation you need to do

vcov.matrix <- var.est * (v %*% diag(d^(-2)) %*% t(v))

observing that for matrix multiplication we use %*% instead of just *, and that the inverse squared singular values must be placed on the diagonal of a matrix with diag(). var.est above is the estimate of the variance of the noise.

(Also, I've made the assumptions that $X$ is full-rank and $n \geq p$ throughout. If this is not the case, you'll have to make minor modifications to the above.)
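The identity $(X^T X)^{-1} = V D^{-2} V^T$ can be confirmed numerically. A minimal NumPy sketch of the same computation (Python rather than R; the design matrix, coefficients, and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))                              # full-rank design matrix
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)  # response with noise

# Thin SVD: X = U diag(d) V^T  (numpy returns V^T as `vt`).
u, d, vt = np.linalg.svd(X, full_matrices=False)
v = vt.T

beta_hat = v @ np.diag(1 / d) @ u.T @ y                  # least-squares coefficients
sigma2_hat = (y @ y - beta_hat @ X.T @ y) / (n - p)      # residual variance estimate

# Variance-covariance matrix two ways: via the SVD and via a direct inverse.
vcov_svd = sigma2_hat * v @ np.diag(d ** -2.0) @ vt
vcov_direct = sigma2_hat * np.linalg.inv(X.T @ X)
```

The two matrices agree to machine precision, and the SVD route avoids forming and inverting $X^T X$ explicitly, which is better conditioned for nearly collinear designs.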
25,616
How to detect which one is the better study when they give you conflicting results?
I think Jeromy's answer is sufficient if you are examining two experimental studies or an actual meta-analysis. But often we are faced with examining two non-experimental studies, and are tasked with assessing the validity of those two disparate findings. As Cyrus's grocery list of questions suggests, the topic itself is not amenable to a short response, and whole books are in essence aimed at addressing such a question. For anyone interested in conducting research on non-experimental data, I would highly suggest you read Experimental and Quasi-Experimental Designs for Generalized Causal Inference by William R. Shadish, Thomas D. Cook, and Donald Thomas Campbell (I have also heard that the older versions of this text are just as good).

Several items Jeromy referred to (bigger sample sizes, and greater methodological rigour), and everything that Cyrus mentions, would be considered what Campbell and Cook refer to as "internal validity". These include aspects of the research design and the statistical methods used to assess the relationship between X and Y. In particular, as critics we are concerned about aspects of either that could bias the results and diminish the reliability of the findings. As this is a forum devoted to statistical analysis, much of the answers are centered around statistical methods to ensure unbiased estimates of whatever relationship you are assessing. But there are other aspects of the research design, unrelated to statistical analysis, that diminish the validity of the findings no matter what rigorous lengths one goes to in the statistical analysis (for example, Cyrus mentions several aspects of experimental fidelity that can be addressed but not solved with statistical methods, and that, if they occur, will always diminish the validity of the study's results).

There are many other aspects of internal validity that become crucial to assess in comparing results of non-experimental studies that are not mentioned here, and aspects of research designs that can distinguish the reliability of findings. I don't think it is quite appropriate to go into too much detail here, but I would often take the results of a quasi-experimental study (such as an interrupted time series or a matched case-control design) more seriously than I would a study that is not quasi-experimental, regardless of the other aspects Jeromy or Cyrus mentioned (within reason, of course).

Campbell and Cook also refer to the "external validity" of studies. This aspect of research design is often much smaller in scope and does not deserve as much attention as internal validity. External validity essentially deals with the generalizability of the findings, and I would say laymen can often assess external validity reasonably well as long as they are familiar with the subject.

Long story short: read Shadish, Cook, and Campbell's book.
25,617
How to detect which one is the better study when they give you conflicting results?
The meta-analysis literature is relevant to your question. Using meta-analytic techniques you could generate an estimate of the effect of interest pooled across studies. Such techniques often weight studies in terms of their sample size.

Within the meta-analysis context researchers talk about fixed-effect and random-effects models (see Hunter and Schmidt, 2002). A fixed-effect model assumes that all studies are estimating the same population effect. A random-effects model assumes that studies differ in the population effect that is being estimated. A random-effects model is typically more appropriate.

As more studies accumulate looking at a particular relationship, more sophisticated approaches become possible. For example, you can code studies in terms of various properties, such as perceived quality, and then examine empirically whether the effect size varies with these study characteristics. Beyond quality there may be some theoretically relevant differences between the studies which would moderate the relationship (e.g., characteristics of the sample, dosage levels, etc.).

In general, I tend to trust studies with:

- bigger sample sizes
- greater methodological rigour
- a confirmatory orientation (e.g., not a study where they tested for correlations between 100 different nutrients and 50 health outcomes)
- absence of conflict of interest (e.g., not by a company with a commercial interest in showing a relationship; not by a researcher who has an incentive to find a significant result)

But that said, you need to keep random sampling and theoretically meaningful differences between studies as plausible explanations of conflicting study findings.
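As a concrete illustration of the fixed-effect pooling mentioned above, here is a minimal inverse-variance-weighted estimate in Python (the two effect sizes and their sampling variances are made-up numbers):

```python
# Fixed-effect meta-analysis: weight each study by the inverse of its
# sampling variance, so more precise (usually larger) studies count more.
effects = [0.2, 0.4]       # hypothetical per-study effect estimates
variances = [0.01, 0.04]   # hypothetical per-study sampling variances

weights = [1 / v for v in variances]   # here: 100 and 25
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_var = 1 / sum(weights)          # variance of the pooled estimate
```

A random-effects model would instead add a between-study variance component to each study's weight, which pulls the weights closer together.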
25,618
How to detect which one is the better study when they give you conflicting results?
I would hold off on considering meta-analysis until you've scrutinized sources of potential bias or variation in the target populations. If these are studies of treatment effects, was treatment randomly assigned? Were there deviations from the protocol? Was there noncompliance? Is there missing outcome data? Were the samples drawn from the same frame? Was there refusal to participate? Implementation errors? Were standard errors computed correctly, accounting for clustering and robust to various parametric assumptions?

Only after you have answered these questions do I think meta-analysis issues start to enter the picture. It must be rare that meta-analysis is appropriate for any two studies, unless you are willing to make some heroic assumptions.
25,619
Is this summary of MCMC correct?
My comments on these assertions:

"We can't directly evaluate the posterior as the normalising constant is too hard to calculate for interesting problems. Instead we sample from it."

No, the normalising constant $$\int_\Theta \pi(\theta)f(x|\theta)\,\text d\theta$$ being unknown is not the issue for being unable to handle inference from the posterior distribution. The complexity of the posterior density is the primary reason for running simulations. (The normalising constant is mostly useful to compute the evidence in Bayesian hypothesis testing.)

"We do this by engineering a Markov chain that has the same stationary distribution as the target distribution (the posterior in our case)."

This is correct (if one possibility). Note that MCMC is a general simulation method that is not restricted to Bayesian computation.

"When we have reached this stationary state we continue to run the Markov chain and sample from it to build up our empirical distribution of the posterior."

Not exactly, as "reaching stationarity" is most often impossible to detect/assert in practice. Some techniques exist, but they are not exact and mileage varies. Exact (or perfect) sampling is restricted to some ordered settings and very costly. However, the ergodic theorem validates the use of Monte Carlo averages in this setting without "waiting" for stationarity.

"All Markov chains are completely described by their transition probabilities."

The generic term is transition kernel, as the target distribution often is absolutely continuous. Some MCMC methods use continuous-time processes, in which case there is no transition kernel stricto sensu.

"We therefore control/engineer the Markov chain by controlling the transition probabilities. All MCMC algorithms work from this principle but the exact method for generating these transition probabilities differs between algorithms."

Markov chain Monte Carlo algorithms are indeed validated by the fact that their transition kernel ensures stationarity for the target distribution: $$\pi(\theta'|x) = \int_\Theta \pi(\theta|x)K(\theta,\theta')\,\text d\theta\tag{1}$$

"If we have a particular algorithm for generating these transition probabilities, we can verify that it converges to the stationary distribution by using the detailed balance equation on the proposed transition probabilities."

No, detailed balance is not a necessary condition for stationarity wrt the correct target. Take for instance the Gibbs samplers or the Langevin version (MALA), which are usually not reversible and hence do not satisfy detailed balance. They are nonetheless valid and satisfy global balance (1).

"Thus the remaining challenge is to come up with a method to generate these transition probabilities."

Not really, since there exist families of generic MCMC algorithms such as random-walk Metropolis-Hastings algorithms or Hamiltonian Monte Carlo. The challenge lies more in calibrating a given algorithm or choosing between algorithms.
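To make the last point concrete, here is a minimal random-walk Metropolis-Hastings sketch in Python targeting a standard normal; the target, proposal scale, and chain length are arbitrary illustrative choices. Note that only the unnormalised log-density is ever evaluated, illustrating the first point above about the normalising constant:

```python
import math
import random

def log_target(theta):
    # Unnormalised log-density of the target (here a standard normal);
    # only ratios of the target enter the accept/reject step.
    return -0.5 * theta * theta

def rw_metropolis(n_iter, step=2.4, seed=17):
    rng = random.Random(seed)
    theta, chain = 0.0, []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)   # symmetric random-walk proposal
        # Accept with probability min(1, target(prop)/target(theta)).
        if math.log(rng.random()) < log_target(prop) - log_target(theta):
            theta = prop
        chain.append(theta)                   # on rejection, repeat current state
    return chain

chain = rw_metropolis(20000)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
```

By the ergodic theorem, the chain averages recover the target's mean and variance without any attempt to detect when stationarity has been "reached".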
25,620
How to estimate the variance of correlated observations?
Let $R = (\rho_{ij})$ be the correlation matrix so that the covariance matrix is $\Sigma = \sigma^2 R.$ Consider $\mathbf x = (x_1,\ldots, x_n)^\prime$ to be a single observation of the $n$-variate Normal distribution with zero mean and $\Sigma$ covariance. Because the log likelihood of $N\ge 1$ independent such observations is, up to an additive constant (depending only on $N,$ $n,$ and $R$) given by $$\Lambda(\sigma) = \sum_{i=1}^N (-n \log \sigma) - \frac{1}{2\sigma^2} \sum_{i = 1}^N\mathbf{x}_i^\prime\, R^{-1}\,\mathbf{x}_i,$$ it has critical points where $\sigma\to 0,$ $\sigma\to\infty,$ and at any solutions of $$0 = \frac{\mathrm d}{\mathrm{d}\sigma}\Lambda(\sigma) = -\frac{nN}{\sigma} + \frac{1}{\sigma^3} \sum_{i = 1}^N\mathbf{x}_i^\prime\, R^{-1}\,\mathbf{x}_i.$$ Unless the form $ \sum_{i = 1}^N\mathbf{x}_i^\prime\, R^{-1}\,\mathbf{x}_i$ is zero, there is a unique global maximum at $$\hat\sigma^2 = \frac{1}{nN} \sum_{i = 1}^N \mathbf{x}_i^\prime\, R^{-1}\,\mathbf{x}_i.$$ This is the Maximum Likelihood estimate. It exists even when $N=1$ (which is the situation posited in the question). Intuitively, this will be superior to any estimate that ignores the correlations assumed in $R.$ To check, we could compute the Fisher Information matrix for this estimate -- but I will leave that to you, in part because the result shouldn't be convincing for small values of $N$ (where the maximum likelihood asymptotic results might not apply). To illustrate what actually happens, here are one thousand estimates of $\sigma^2$ from experiments with $n=4$ and $N=1.$ The value of $\sigma$ was set to $1$ throughout. In these experiments, the correlation was always $$R = \pmatrix{1 &0.3055569 &0.5513377 &0.5100989\\ 0.3055569 &1 &0.1240151 &0.09634469\\ 0.5513377 &0.12401511 &1 &-0.4209064\\ 0.5100989 &0.09634469 &-0.4209064 &1}$$ (as generated randomly at the outset). 
In the figure the leftmost panel is a histogram of the foregoing Maximum Likelihood estimates; the middle panel is a histogram of estimates using the usual (unbiased) variance estimator; and the right panel is a QQ plot of the two sets of estimates. The slanted line is the line of equality. You can see the usual variance estimator tends to yield more extreme values. It is also biased (due to ignoring the correlation): the mean of the MLEs is 0.986 -- surprisingly close to the true value of $\sigma^2 =1^2 =1$ -- while the mean of the usual estimates is only 0.791. (I write "surprisingly" because it is well-known the usual maximum likelihood estimator of $\sigma^2,$ where no correlation is involved, has a bias of order $1/(nN),$ which is pretty large in this case.) You may experiment with the R code that produced these figures by modifying the values of n, sigma, N, n.sim, Rho, and the random number seed 17.

f <- function(x, Rho) {
  # The MLE of sigma^2 given data `x` and correlation `Rho`
  S <- solve(Rho)
  sum(apply(x, 1, function(x) x %*% S %*% x)) / length(c(x))
}

n <- 4
sigma <- 1
N <- 1
n.sim <- 1e3
set.seed(17)
#
# Generate a random correlation matrix. Larger values of `d` yield more
# spherical matrices in general.
#
d <- 1
Rho <- cor(matrix(rnorm(n*(n+d)), ncol=n))
(ev <- eigen(Rho, only.values=TRUE)$values)
#
# Run the experiments.
#
library(MASS)
sim <- replicate(n.sim, {
  x <- matrix(mvrnorm(N, rep(0,n), sigma^2 * Rho), N)
  c(f(x, Rho), var(c(x)))
})
(rowMeans(sim))
#
# Plot the results.
#
par(mfrow=c(1,3))
hist(sim[1,], col=gray(.93), xlab="Estimate",
     main=expression(paste("Histogram of Estimates of ", sigma^2)))
abline(v = sigma^2, col="Red", lwd=2)
hist(sim[2,], col=gray(.93), xlab="Estimate",
     main=expression(paste("Histogram of Independent Estimates of ", sigma^2)))
abline(v = sigma^2, col="Red", lwd=2)
y1 <- sort(sim[1,])
y2 <- sort(sim[2,])
plot(y1, y2, asp=1, xlab="Correlation-based estimate", ylab="Independent estimate")
abline(0:1, col="Red", lwd=2)
par(mfrow=c(1,1))
25,621
How to estimate the variance of correlated observations?
You can apply the GLS equations, e.g. the top of p. 5 in these lecture notes: http://halweb.uc3m.es/esp/Personal/personas/durban/esp/web/notes/gls.pdf The equation for the MSE is $\sigma^2 = \frac{1}{n} \sum_{ij} x_i r^{-1}_{ij} x_j$ if you set $X$ to zero and let your $r$ correspond to their $V$. This equation doesn't work for extreme cases, such as a correlation of 1 between all variables; in that case your usual equation will still produce a number.
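As a quick numerical sketch of that formula (illustrative Python with made-up numbers, not taken from the notes): for two correlated zero-mean observations with known correlation $\rho$, the estimate $\hat\sigma^2 = \frac{1}{n} x' R^{-1} x$ reduces to a closed form.

```python
import random

# Illustrative sketch (made-up numbers): for two zero-mean observations
# x1, x2 with known correlation rho, sigma^2_hat = (1/n) x' R^{-1} x
# reduces to (x1^2 - 2*rho*x1*x2 + x2^2) / (2 * (1 - rho^2)).
def sigma2_hat(x1, x2, rho):
    return (x1**2 - 2*rho*x1*x2 + x2**2) / (2 * (1 - rho**2))

random.seed(0)
rho, sigma = 0.8, 2.0
# draw (x1, x2) from a bivariate normal with correlation rho
z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
x1 = sigma * z1
x2 = sigma * (rho * z1 + (1 - rho**2) ** 0.5 * z2)

print(sigma2_hat(x1, x2, rho))  # correlation-aware estimate of sigma^2
```

Note that with $\rho = 0$ this collapses to the ordinary mean of squares, as it should.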
25,622
How do you know something isn't random?
An obligatory Dilbert comic: If you have a random number generator that at random generates "4" with probability $p$, the probability of observing it $n$ times in a row is $p^n$, assuming that the draws are independent. Notice that the more times you observe "4", the smaller the probability gets, but it will never go down to zero. There will always be a slight chance of observing one more "4" in a row. So you can never be 100% certain.
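To put numbers on it: for an honest uniform generator over 1-10 (so $p = 0.1$), the chance of a run of 4's shrinks geometrically with its length, but never reaches zero. A quick sketch:

```python
# Probability of observing "4" n times in a row from independent draws,
# assuming each draw shows "4" with probability p = 0.1.
p = 0.1
for n in (1, 5, 10, 20):
    print(n, p ** n)  # rapidly shrinking, but never exactly zero
```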
25,623
How do you know something isn't random?
how would you know with 100% certainty it wasn't random? You wouldn't. This gets into why there are many different probability interpretations.
25,624
How do you know something isn't random?
All answers seem to focus on the nature of randomness, while the question is truly about the nature of knowledge. What is it to know something? You seem to implicitly allude to an unattainable and impractical notion of knowledge, some God-like insight into matters. We're human; for us, knowledge is not an absolute state of clarity and vision. We know that it's Friday today, that milk is white, that Winter is coming, etc. Unfortunately, the subject is outside the field of statistics. Hence, my terse answer: if your RNG keeps returning 4, then you will know that it's not random after a few trials. You and I know that the Sun will rise tomorrow. If someone doesn't, then they should see a therapist to deal with anxiety, maybe take some pills, etc. The point is that this is not the subject of statistics.
25,625
How do you know something isn't random?
Checking for randomness can be viewed as "discovering patterns". In your random generator example, we can calculate the probability of certain events (for example, ten consecutive 4's) and conduct experiments to verify our assumption. For example, suppose we know a certain thing is very unlikely to happen, yet it is happening all the time (say, hitting the jackpot every time). Then we suspect a problem with the random generator. Of course we cannot be sure, but we can say it is highly likely (say 99.9999999%) that the data are not from a random generator. And in the real world we do not need a black or white answer; we simply stop trusting the random generator and do not use it.
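One crude way to "discover the pattern" here (a toy sketch, not a proper statistical randomness test) is to compare how concentrated the draws are against what a fair uniform generator would produce:

```python
import random
from collections import Counter

# Toy sketch: the share of the single most common value is near 1/10 for
# a fair 1..10 generator, but exactly 1.0 for one that always says 4.
def most_common_share(draws):
    counts = Counter(draws)
    return counts.most_common(1)[0][1] / len(draws)

rng = random.Random(1)
fair = [rng.randint(1, 10) for _ in range(10_000)]
broken = [4] * 10_000  # the suspect generator

print(most_common_share(fair))    # close to 0.1
print(most_common_share(broken))  # exactly 1.0
```

A real test would attach a probability to the discrepancy (e.g. a chi-squared goodness-of-fit test), but the idea is the same: quantify how surprising the observed pattern is under the null of randomness.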
25,626
How do you know something isn't random?
In addition to the correct points that @Tim and @HaitaoDu make, there may be additional reasons to distrust a supposed random number generator that "always return[s] 4". Suppose I know that the numbers are produced by a deterministic algorithm running in a computer (not, say, incorporating inputs from a quantum mechanical device), even if I don't know that the algorithm is actually return 4. Every deterministic pseudorandom number generating algorithm has a finite period--i.e. after outputting a certain quantity of "random" numbers, the sequence of returned numbers must repeat. So if I keep drawing more and more numbers from your "random number" program, at some point the quantity of numbers that have been produced will be close to the largest period that any known random number generating algorithm would produce. The fact that I have only seen the number 4 through all of those draws, and am now close to exceeding any reasonably possible period--or even close to half of such a period--would be evidence that what is producing the numbers is not a legitimate attempt at a pseudorandom number generating algorithm. Granted, it may take a lot of time and computing power to get to this point. (The largest pseudorandom number algorithm period that I have heard of is that of a standard Mersenne Twister, $2^{19937}-1$.) Furthermore, any non-horrible random number generating algorithm is supposed to try to approximate a uniform distribution with independent draws--of 1 through 10, in this case. There have been many algorithms proposed that are not good, and even very bad, because they failed to come sufficiently close to this goal. I am thinking of algorithms that are not truly horrible.
return 4 is beyond truly horrible, and that will be apparent after many draws: if a non-horrible pseudorandom number generator has produced enough numbers that a moderately significant percentage of its period has been used up, those numbers can't all be the same, because even a bad, but not truly horrible, algorithm would superficially seem to return numbers that are independent and uniformly distributed. A very long sequence of 4's that takes up a significant percentage of the possible period of a pseudorandom number generating algorithm does not have that appearance, even superficially, so the algorithm that produces that long sequence of 4's must be truly horrible--at the very least.
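The finite-period point can be illustrated with a deliberately tiny linear congruential generator (toy parameters chosen only for illustration): its state space has just $m$ values, so it must enter a cycle within $m$ steps.

```python
# Toy LCG with modulus 16: a deterministic generator with finite state
# must revisit a state, after which its output repeats forever.
def lcg_period(seed, a=5, c=3, m=16):
    seen = {}
    x, step = seed, 0
    while x not in seen:
        seen[x] = step
        x = (a * x + c) % m
        step += 1
    return step - seen[x]  # length of the cycle eventually entered

print(lcg_period(7))  # can never exceed m = 16
```

The same logic applies to serious generators like the Mersenne Twister, just with an astronomically larger state space.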
25,627
Were SVMs developed as a method of efficiently training neural networks?
Let me start with a quote by a co-inventor of the Support Vector Machines, Isabelle Guyon:

At the time, everybody was working on multi-layer Perceptrons (the ancestors of deep learning), and my first work on optimal margin algorithms was simply some active set method to enlarge their margin, inspired by the 'minover.' Bernhard Boser, my husband, was even making dedicated hardware for MLPs! I had heated discussions with Vladimir Vapnik, who shared my office and was pushing another optimal margin algorithm that he invented in the 1960's. The invention of SVMs happened when Bernhard decided to implement Vladimir's algorithm in the three months we had left before we moved to Berkeley.

Or, somewhat more "official":

SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs., 1985) that can be seen as a new method for training polynomial, neural network, or Radial Basis Functions classifiers. [E. Osuna, R. Freund, and F. Girosit: "Training support vector machines: an application to face detection". Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 130-136, 1997]

But the whole history is, of course, much longer and more convoluted. Below I offer a very short and slim timeline, just as a brief overview:

in 1958, Rosenblatt presents the first biologically motivated artificial neural network [F. Rosenblatt: "The perceptron: a probabilistic model for information storage and organization in the brain". Psychological Review, 65(6):386-408]

in 1964, Vapnik and Chervonenkis present the maximum margin criterion for training the perceptron [V. Vapnik and A. Chervonenkis: "A note on one class of perceptrons". Automation and Remote Control, 25(1):112-120]

also in 1964, Aizerman, Braverman, and Roznoer introduce the interpretation of kernels ("potential functions") as inner products in a feature space and prove that the "perceptron can be considered to be a realization of potential function method" [M.A. Aizerman, E.M. Braverman and L.I. Roznoer: "Theoretical Foundations of the Potential Function Method in Pattern Recognition Learning". Automation and Remote Control, 25(12):821-837]

in 1969, Minsky and Papert publish their book "Perceptrons: An Introduction to Computational Geometry" and show the limitations of (single-layer) perceptrons.

in 1986, Rumelhart, Hinton, and Williams (re)invent the backpropagation algorithm and make learning in multi-layer perceptrons ("feed-forward neural networks") possible [D.E. Rumelhart, G.E. Hinton, and R.J. Williams: "Learning representations by back-propagating errors". Nature 323(6088):533-536].

in 1992, Boser, Guyon, and Vapnik (the three from the introductory quote) present what we now call the "Support Vector Machine". They state: "The technique is applicable to a wide variety of classification functions, including Perceptrons [...]" [B.E. Boser, I.M. Guyon, and V.N. Vapnik: "A Training Algorithm for Optimal Margin Classifiers". Proceedings of the 5th Annual Workshop on Computational Learning Theory (COLT'92), 144-152]
25,628
When can you apply the bootstrap to time series models?
Before getting to my answer, I think I should point out that there is a mismatch between your question title and the body of the question. Bootstrapping time series is in general a very wide topic that must grapple with the various nuances of the particular model under consideration. When applied to the specific case of cointegrated time series, there are some methods that take just such care of the specific relationships between the collection of time series. First, a quick review of relevant concepts so that we have a common starting point.

Stochastic Processes

The time series under consideration will be discrete-time stochastic processes. Recall that a stochastic process is a collection of random variables, with the discrete-time qualifier describing the cardinality of the index set. So we can write a time series as $\{X_{t}\}_{t\in \mathbb{N}}$, where each $X_{t}$ is a random variable and the index set is $\mathbb{N} = \{0, 1, 2, \dots\}$. A sample from such a time series consists of a sequence of observations $x_{0}, x_{1}, x_{2}, \dots$ such that $x_{i}$ is a realization of random variable $X_{i}$. This is a minimal, extremely general definition, so usually more structure is assumed to hold in order to bring to bear heavier machinery. The structure of interest is the joint distribution of the infinite series of random variables, and unless we are dealing with white noise, determining this joint distribution is where the work happens. Obviously, we also will in practice only have access to a finite length sample $x_{0}, x_{1}, \dots, x_{n}$, and models typically impose constraints that imply any underlying joint structure (hopefully) can be captured by such a finite sample.
As you likely are aware, there are numerous models embodying the various functional forms these structural assumptions take; familiar ones like ARIMA, GARCH, VAR, and maybe less familiar ones (assuming the selected model is correctly specified) all try to proceed by some kind of transformation or model fit to capture the regular structure, and whatever residual stochasticity is left between the fitted values and the observations can be modeled in a simple form (typically Gaussian).

Bootstrapping

The general idea of the bootstrap is to replace the theoretical distribution with the empirical distribution, and to use the observed data as if it consists of the theoretical population. Should certain conditions be met, which intuitively correspond to the data being 'representative' of the population, then resampling from the data can approximate sampling from the population.

In a basic formulation of the bootstrap, the data are assumed to be generated by an iid process - each sample is an independent draw from the same distribution. Given a data set $x_{1}, \dots, x_{n}$, we randomly resample with replacement a data set $x^*_{1}, \dots, x^*_{n}$, where each $x^*_{i}$ is an independent draw from the uniform distribution over $x_{1}, \dots, x_{n}$. In other words, each $x^*_{i}$ is an independent realization of the random variable $X^*$ which has a discrete uniform distribution over the observations, with a probability mass of $\frac{1}{n}$ on each data point $x_{i}$. Note how this mirrors the assumed sampling mechanism from the population, where each $x_{i}$ is an independent realization of the random variable $X$ which has the theoretical population distribution of interest.
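A minimal sketch of this basic iid bootstrap (illustrative Python on made-up data), resampling with replacement to approximate the sampling distribution of the mean:

```python
import random

# Basic iid bootstrap: draw n points uniformly with replacement from the
# observed data, compute the statistic of interest, and repeat.
def bootstrap_means(data, n_boot=2000, seed=0):
    rng = random.Random(seed)
    n = len(data)
    means = []
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        means.append(sum(resample) / n)
    return means

data = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9]  # made-up iid sample
means = sorted(bootstrap_means(data))
# a crude 90% percentile interval for the mean
print(means[len(means) // 20], means[-len(means) // 20])
```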
Hopefully laying everything out explicitly makes clear when the bootstrap makes sense: if your original sampling procedure consisted of iid draws from some fixed but unknown distribution, and each sample point is taken to reveal an equal amount of information about this distribution, then uniformly resampling from the data can reasonably replace sampling from the population. With these resamples you can do all the usual things, like estimating the distributions of model parameters and summary statistics, then using those distributions to perform inference.

Bootstrapping Time Series

Based on the above discussion, it should be clear that applying a basic bootstrap to time series data is in general a bad idea. The basic bootstrap above crucially depends on the initial sample consisting of iid draws from a fixed population distribution - which in general will not hold for various time series models. This issue is further exacerbated by model misspecification, which in practice should always be a consideration - hedge your bets. Again, depending on the particular model assumed to hold, there are specific modifications to the basic bootstrapping procedure that are model aware and maybe even robust to misspecification. Which method you utilize will depend on first determining the model and considering consequences of misspecification. I'll describe a couple general methods for time series, and point to some sources for specific approaches to the cointegrated case.

One widely applied bootstrapping technique for time series is the block bootstrap. The underlying idea is that since the sequential nature of the sample $x_{0}, x_{1}, \dots, x_{n}$ encodes information of interest, we want our resampling procedure to capture this very sequential information. This idea is in the spirit of the basic bootstrap, as the resampling procedure tries to reflect the original sampling procedure.
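The block idea can be sketched as follows (toy Python with overlapping blocks; in practice the block length and the handling of the joins need care):

```python
import random

# Moving-block bootstrap sketch: resample whole contiguous blocks of
# length block_len (overlapping blocks here), preserving local
# dependence while shuffling the global order.
def block_bootstrap(series, block_len, seed=0):
    rng = random.Random(seed)
    n = len(series)
    blocks = [series[i:i + block_len] for i in range(n - block_len + 1)]
    out = []
    while len(out) < n:
        out.extend(rng.choice(blocks))
    return out[:n]

series = list(range(20))  # toy "time series" 0, 1, ..., 19
resampled = block_bootstrap(series, block_len=4)
print(resampled)  # contiguous runs of length 4, in shuffled order
```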
To perform a block bootstrap, you set some block size $\ell$, and split your data into contiguous blocks $x_{i}, x_{i+1}, \dots, x_{i + \ell - 1}$. You then perform resampling with replacement of the blocks of data in order to generate a bootstrapped sample, with a uniform distribution over all blocks. Here too, there are various nuances, depending on whether you allow your initial blocks to overlap or not, how you concatenate them, etc.

One major point to observe about this class of methods is that while the blocks are contiguous, resampling effectively shuffles the order of the blocks. This implies that block bootstrapping retains local sequential dependence (within each block), but global sequential dependence is lost due to this shuffling. This is why block bootstrap methods may be a good choice when working with ARIMA, STL, or local regression models; as long as your block size $\ell$ has been chosen to capture the most important 'length' of the model (assuming it is correctly specified), then the shuffling of the blocks incurred by resampling shouldn't cause too much trouble. But you'll need to weigh the appropriateness based on your model, goal, and data, and still may need to experiment to determine the appropriate block size - assuming you have a long enough sample to accommodate the appropriate block size a large enough number of times in the first place. See [1] for some specific applications. If you are using R, the tsboot function in the boot package implements several variants of the block bootstrap.

Another type of bootstrapping applied to time series is the sieve bootstrap. The name comes from sieve estimators. Here again we try to have our resampling procedure emulate the original sampling method, but rather than resampling the data, we generate a new data set by using an AR model on the residuals, with individual residuals resampled using the empirical distribution over the observed residuals.
The underlying AR model is assumed to be infinite order, but each resampling AR model is of finite order - though the order is allowed to grow at a rate determined by the sample size. This asymptotic increase in the order is the 'sieve' part of the name, as you get closer to the target model with increasing sample size. See [2] and [3] for an overview of the sieve bootstrap.

The AR model is how we capture the sequential dependence structure in this case. Because new synthetic data are being simulated in a recursive manner, sieve bootstrap methods try to retain the global sequential dependence in the data - contrast this with the local properties of block bootstraps. This method may also be the one you want to apply for cointegrated time series, as there appear to be issues with resampling the data directly in the case of cointegrated time series [4]. See [5] for a specific application of sieve bootstrapping to cointegrated models. If you're using R, then the tseriesEntropy package has a surrogate.AR function which implements a sieve bootstrap.

There are other bootstrapping methods that can be applied to time series, and variations of the general methods mentioned - other methods to check out may be the stationary bootstrap and wild bootstrap. For a general overview of bootstrapping time series, see [6].

As mlofton mentioned, and I have hopefully illustrated, bootstrapping time series is a complex problem with various solutions designed for particular circumstances. Another reference by the authors MacKinnon and Davidson they mention which is informative can be found here [7]. Sorry I have avoided explicit mathematical formulations of techniques, but your question seemed to seek a somewhat intuitive explanation of what considerations determine appropriate methods for bootstrapping time series, and as I mentioned, the appropriateness of any particular technique depends on the specifics of your model, goals, and data.
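To make the block bootstrap concrete, here is a minimal moving-block resampling sketch on a synthetic AR(1)-style series. Everything here (the series, the block length of 12, the number of resamples) is purely illustrative; for real work prefer a tested implementation such as tsboot in R.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy AR(1)-style series standing in for real time series data.
n = 240
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal()

def moving_block_bootstrap(series, block_length, rng):
    """One bootstrap resample: draw overlapping blocks uniformly with
    replacement and concatenate them, trimming to the original length."""
    n = len(series)
    n_blocks = int(np.ceil(n / block_length))
    starts = rng.integers(0, n - block_length + 1, size=n_blocks)
    blocks = [series[s:s + block_length] for s in starts]
    return np.concatenate(blocks)[:n]

# e.g. bootstrap the distribution of the lag-1 autocorrelation
acfs = []
for _ in range(500):
    xb = moving_block_bootstrap(x, block_length=12, rng=rng)
    acfs.append(np.corrcoef(xb[:-1], xb[1:])[0, 1])
acfs = np.array(acfs)
```

Note how the dependence within each block of 12 observations is preserved exactly, while the joins between blocks are random - the local/global trade-off discussed above.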
Hopefully the references will point you in the right direction.

References

[1] Petropoulos, F., Hyndman, R.J. and Bergmeir, C., 2018. Exploring the sources of uncertainty: Why does bagging for time series forecasting work? European Journal of Operational Research, 268(2), pp.545-554.
[2] Bühlmann, P., 1997. Sieve bootstrap for time series. Bernoulli, 3(2), pp.123-148.
[3] Andrés, M.A., Peña, D. and Romo, J., 2002. Forecasting time series with sieve bootstrap. Journal of Statistical Planning and Inference, 100(1), pp.1-11.
[4] Li, H. and Maddala, G.S., 1997. Bootstrapping cointegrating regressions. Journal of Econometrics, 80(2), pp.297-318.
[5] Chang, Y., Park, J.Y. and Song, K., 2006. Bootstrapping cointegrating regressions. Journal of Econometrics, 133(2), pp.703-739.
[6] Bühlmann, P., 2002. Bootstraps for time series. Statistical Science, pp.52-72.
[7] Davidson, R. and MacKinnon, J.G., 2006. Bootstrap methods in econometrics.
When can you apply the bootstrap to time series models?
Not sure that the following will help in your specific case as I don't know the data, but I'd suggest this procedure anyway:

(1) Decompose the $Y_t$ sample according to the STR decomposition (seasonality/trend decomposition based on regression; see the work by Hyndman et al.).

(2) In simplified terms, STR produces a result: $Y_t = Season_t + Trend_t + R_t$. Note that the $R_t$ are i.i.d. residuals, which can be bootstrapped in order to re-create a new $Y_t$ dataset.

(3) Fit the bootstrapped $Y_t$ dataset using your model.

(4) Repeat (2)-(3) 1,000 times.

The above procedure yields a distribution over the $\beta$-s which you may use to assess the uncertainties of interest.
How does facebook prophet handle missing data?
Models like ARIMA are defined in terms of lagged variables, so you need the subsequent points. Prophet (Taylor and Letham, 2017) is defined in terms of a regression-like model

$$ y(t) = g(t) + s(t) + h(t) + \varepsilon_t $$

where $g(t)$ is the trend function which models non-periodic changes in the value of the time series, $s(t)$ represents periodic changes (e.g., weekly and yearly seasonality), and $h(t)$ represents the effects of holidays which occur on potentially irregular schedules over one or more days. The error term $\varepsilon_t$ represents any idiosyncratic changes which are not accommodated by the model; later we will make the parametric assumption that $\varepsilon_t$ is normally distributed.

The trend function $g(t)$ is defined in terms of piecewise regression, the seasonality $s(t)$ uses Fourier terms, and the holiday effects $h(t)$ are just dummies. None of these features requires you to have all the points: where information is missing, the model simply does not use it for estimation, and instead interpolates between the known points. Said differently, if you have points $a < b < c$, but $b$ is unknown, then you can still fit the line (or curve) to $a$ and $c$ and interpolate for $b$. What Prophet does is fit many different lines (trend), curves (seasonalities) and constants (dummies) and combine them together.
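The interpolation point can be sketched with a plain least-squares line standing in for Prophet's trend and seasonal regressors (the timestamps and values below are made up):

```python
import numpy as np

# Observed points with t = 2 missing; a regression-style model (here just a
# least-squares line, a stand-in for trend + seasonal terms) is fit only on
# the rows that exist, then evaluated at the gap.
t_obs = np.array([0.0, 1.0, 3.0, 4.0, 5.0])   # t = 2 is missing
y_obs = np.array([1.0, 3.0, 7.0, 9.0, 11.0])  # y = 2t + 1

coef = np.polyfit(t_obs, y_obs, deg=1)
y_at_gap = np.polyval(coef, 2.0)
# The fitted curve fills the gap; no lagged values are needed, which is why
# missing rows are not a problem for this kind of model.
```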
Feature Importance in Isolation Forest
I believe it was not implemented in scikit-learn because, in contrast with the Random Forest algorithm, in an Isolation Forest the feature to split on at each node is selected at random. So it is not possible to have a notion of feature importance similar to RF.

Having said that, if you are very confident about the results of the Isolation Forest classifier and you have the capacity to train another model, then you could use the output of the Isolation Forest (i.e. the -1/1 values) as the target class to train a Random Forest classifier. This will give you feature importance for detecting anomalies. Please note that I haven't tried this myself, so I can't comment on the accuracy of this proposed approach.
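A minimal sketch of this surrogate-model idea with scikit-learn - the data are synthetic and the hyperparameters (contamination level, number of trees) are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

# Toy data: a normal cluster plus a few scattered outliers.
X = np.vstack([rng.normal(0, 1, size=(200, 5)),
               rng.uniform(-6, 6, size=(10, 5))])

iso = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = iso.predict(X)            # -1 = anomaly, 1 = normal

# Surrogate model: a random forest trained to reproduce the iForest labels,
# queried only for its feature importances.
surrogate = RandomForestClassifier(n_estimators=200, random_state=0)
surrogate.fit(X, labels)
importances = surrogate.feature_importances_
```

Keep in mind the caveat from the answer: the surrogate only approximates the iForest's decisions, so treat the importances as a rough guide.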
Feature Importance in Isolation Forest
Have you tried looking at SHAP statistics as a way of measuring feature importance for your isolation forest exercise? Here's a good explanation of SHAP https://towardsdatascience.com/explain-your-model-with-the-shap-values-bc36aac4de3d and you can build an explainer object with any tree based model. From there you can also look at how your features affect individual predictions.
Feature Importance in Isolation Forest
Briefly reiterating first what others have mentioned: one could use a surrogate model like a BDT that is intentionally overtrained on the output of the isolation forest (instead of using the binary output as Khurram Majeed suggested, you could directly train it on the anomaly score). However, we cannot be sure that the surrogate model is learning the same decision paths as the iForest. On the other hand, the principle behind SHAP is clearer than any surrogate model used to explain the iForest (another good explanation of SHAP). Perhaps because the BDT is doing a poorer job, I found that the feature importances derived from a surrogate BDT and SHAP agreed only roughly, and sometimes disagreed wildly. Both local and global feature importances were considered in this comparison.

Despite all this, the iForest is interpretable if its underlying principle is taken advantage of: outliers have short decision paths on average over all the trees in the iForest. The features that were cut on in these short paths must be more important than those cut on in long decision paths. So here's what you can do to get feature importances:

1. Determine a threshold for decision path length. The authors of the iForest algorithm recommend, from empirical studies, a subsampling size of 256 [ref]. This is the number of events (sampled from all the data) that is fed into each tree. If you're using this subsampling size, the trees in the iForest can only grow up to $\log_2 (256) = 8$ nodes in depth. Thus you might choose a path length threshold of 3 or 4.

2. For each event, loop through the trees in the iForest and select paths that are shorter than the threshold path length. In these paths, count which features are being cut at each node, the depth of each node, and the numbers of events split at that node. If you're using sklearn's implementation of the iForest, this script may help you in digging through their tree structure. This plot shows what you should have at this stage.

3. Now that you have feature counts of all the trees at each cut depth, you can condense these into a final feature ranking. This can be done by assigning weights to each node and adding the counts up. For instance, you may want to assign larger weights to feature counts that are cut higher up in the tree (and are thus more responsible for creating an overall shorter decision path), and smaller weights to features cut further down. You can also incorporate the ratio of events split at each node into your weights - if there is a large disparity in the events split by a certain cut, then that cut is probably important.

Note that there is a lot of freedom in assigning weights here, and it may be that your choices are ad hoc and dataset dependent. I'm not entirely sure, but this may be the reason that feature importances were not implemented in sklearn's iForest. Anyway, here's a condensation of the feature counts shown above. I'm using a simple 'geometric weight' of 0.5 - essentially, the counts for features cut first get no modification, the counts cut second get halved, and those cut third get quartered. As you can see, there are a lot of features that aren't very important for isolating this particular event. Here is the SHAP force plot for this event - nice agreement even with a very simple weight assignment!

I've shown the feature rankings for an individual event, but you can easily get global importances by averaging the feature counts over the entire dataset. This way you can drop features that don't contribute much to the iForest classification. If you're using sklearn or other Python-based implementations, the biggest disadvantage of this technique is speed. It takes a while to root through all the trees, and if you're interested in global importances you'll have to loop through all the events as well.
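A rough sketch of steps 1-2, plus a simple geometric condensation, against scikit-learn's tree internals might look like the following. The data, the path-length threshold, and the 0.5 weighting are all illustrative, and as noted above the weighting choice is ad hoc:

```python
import numpy as np
from collections import Counter
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy data: a normal cluster plus one point that is extreme in feature 0 only.
X = np.vstack([rng.normal(0, 1, size=(256, 4)),
               [[8.0, 0.0, 0.0, 0.0]]])

iso = IsolationForest(max_samples=256, random_state=0).fit(X)
x = X[-1]              # the event to explain
max_path_len = 4       # threshold on decision-path length (trees grow to depth 8)

counts = Counter()
for est in iso.estimators_:
    tree = est.tree_
    node, path_feats = 0, []
    # Walk the event down the tree, recording the feature cut at each node.
    while tree.children_left[node] != -1:          # -1 marks a leaf
        f = tree.feature[node]
        path_feats.append(f)
        node = (tree.children_left[node] if x[f] <= tree.threshold[node]
                else tree.children_right[node])
    if len(path_feats) <= max_path_len:            # short path => isolating cuts
        for depth, f in enumerate(path_feats):
            counts[f] += 0.5 ** depth              # geometric down-weighting

# `counts` now ranks features by how often, and how early, they appear
# on the short paths that isolate this event.
```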
Feature Importance in Isolation Forest
Since a while back one can use SHAP to explain scikit-learn Isolation Forest models. Example code and output in this answer.
Feature Importance in Isolation Forest
Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance

In this paper, they proposed model-specific methods (i.e. methods based on the particular structure of the IF model) to address the mentioned issues. Specifically:

A global interpretability method, called Depth-based Isolation Forest Feature Importance (DIFFI), to provide Global Feature Importances (GFIs), which represent a condensed measure describing the macro behaviour of the IF model on training data.

A local version of the DIFFI method, called Local-DIFFI, to provide Local Feature Importances (LFIs), aimed at interpreting individual predictions made by the IF model at test time.

A simple and effective procedure to perform unsupervised feature selection for Anomaly Detection problems based on the DIFFI method.

@article{carletti2020interpretable,
  title={Interpretable anomaly detection with diffi: Depth-based feature importance for the isolation forest},
  author={Carletti, Mattia and Terzi, Matteo and Susto, Gian Antonio},
  journal={arXiv preprint arXiv:2007.11117},
  year={2020}
}

https://doi.org/10.48550/arXiv.2007.11117

Code for the paper "Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance".
25,636
Why does my LSTM take so much time to train?
However much it pains me to say this: deep learning is slow, get used to it. There are some things you could do to speed up your training though: What GPU are you using? A friend of mine was doing some research on LSTMs last year and training them on her NVIDIA GTX7?? GPU. Since this was going painfully slow, she tried training the network on a more modern CPU instead, which actually led to a speed-up by a non-trivial factor. What framework are you using? While most frameworks are somewhat comparable, I have heard rumors (https://arxiv.org/pdf/1608.07249.pdf) that some frameworks are slower than others. It might be worthwhile to switch frameworks if you're going to be doing a lot of training. Is it possible to train your network on your company/university hardware? Universities and research companies usually have some powerful hardware at their disposal. If this is not an option, maybe you can look into using some cloud-computing power. All these solutions obviously assume your model itself is as optimal as it can be (in terms of training time and accuracy), which is also something you need to consider, but is outside the scope of this answer.
25,637
Why does my LSTM take so much time to train?
If anyone else is struggling with this, I had the same issue - my LSTM layer was adding about 1 hour per epoch to the training time. I realised it was because in my IDE (PyCharm) I'd used the automatic import option when trying to use a method I hadn't yet manually imported at the top of my script. By default the automatic import statement was from tensorflow.python.keras.layers import Bidirectional, LSTM When I changed this to from tensorflow.keras.layers import Bidirectional, LSTM Training time per epoch dropped to just 4 minutes.
25,638
sklearn.metrics.accuracy_score vs. LogisticRegression().score?
I wish I could just take this back...amazing what happens when you put your confusion down in writing (and read the source code). One is testing accuracy, the other is training accuracy. To clarify: results.score(X_train, y_train) is the training accuracy, while accuracy_score(y_test, results.predict(X_test)) is the testing accuracy. The way I found out that they do the same thing is by inspecting the SK Learn source code. Turns out that the .score() method in the LogisticRegression class directly calls the sklearn.metrics.accuracy_score method... I ran a test to double check and it's confirmed: Training with LR.score: model.score(X_train, y_train) 0.72053675612602097 Testing with LR.score: model.score(X_test, y_test) 0.79582673005810878 Testing with accuracy_score: accuracy_score(y_test, model.predict(X_test)) 0.79582673005810878
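A quick self-contained check of the same point on synthetic data (so the numbers will differ from the ones above): `.score()` on the test set and `accuracy_score` on test predictions are the same quantity, while `.score()` on the training set is training accuracy.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# synthetic binary classification problem
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# .score() on the test set == accuracy_score on test predictions
test_acc = model.score(X_test, y_test)
assert test_acc == accuracy_score(y_test, model.predict(X_test))

# training accuracy is a different number computed the same way
train_acc = model.score(X_train, y_train)
print(train_acc, test_acc)
```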
25,639
Logistic regression and scaling of features
I was under the belief that scaling of features should not affect the result of logistic regression. However, in the example below, when I scale the second feature by uncommenting the commented line, the AUC changes substantially (from 0.970 to 0.520) ... I believe this has to do with regularization That is a good guess. If you look at the documentation for sklearn.linear_model.LogisticRegression, you can see the first parameter is: penalty : str, ‘l1’ or ‘l2’, default: ‘l2’ - Used to specify the norm used in the penalization. The ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers support only l2 penalties. Regularization makes the predictor dependent on the scale of the features. If so, is there a best practice to normalize the features when doing logistic regression with regularization? Yes. The authors of Elements of Statistical Learning recommend doing so. In sklearn, use sklearn.preprocessing.StandardScaler.
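For what it's worth, a minimal sketch (synthetic data, not the question's example) of both halves of the answer: with the default L2 penalty, blowing up one feature's scale changes the fitted probabilities, while standardizing inside a pipeline makes the fit invariant to the original units.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=300) > 0).astype(int)

Xs = X.copy()
Xs[:, 1] *= 1e5  # blow up the scale of the second feature

# strong L2 regularization (small C) makes the scale-dependence obvious
p_raw = LogisticRegression(C=0.01, max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
p_big = LogisticRegression(C=0.01, max_iter=1000).fit(Xs, y).predict_proba(Xs)[:, 1]

# standardizing first removes the dependence on the original units
q_raw = make_pipeline(StandardScaler(), LogisticRegression(C=0.01)).fit(X, y).predict_proba(X)[:, 1]
q_big = make_pipeline(StandardScaler(), LogisticRegression(C=0.01)).fit(Xs, y).predict_proba(Xs)[:, 1]
```

Without a penalty the two unscaled fits would give identical predictions; the difference here is entirely due to the regularizer acting on coefficients of different magnitudes.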
25,640
Is MAPE a good error measurement statistic? And what alternatives are there?
No, actually MAPE is a very poor error measure as discussed by Stephan Kolassa in Best way to optimize MAPE and Prediction Accuracy - Another Measurement than MAPE and Minimizing symmetric mean absolute percentage error (SMAPE) and on those slides. You can also check the following paper: Tofallis, C. (2015). A better measure of relative prediction accuracy for model selection and model estimation. Journal of the Operational Research Society, 66(8), 1352-1362. It is also discussed by Goodwin and Lawton (1999) in the On the asymmetry of the symmetric MAPE paper Despite its widespread use, the MAPE has several disadvantages (Armstrong & Collopy, 1992; Makridakis, 1993). In particular, Makridakis has argued that the MAPE is asymmetric in that ‘equal errors above the actual value result in a greater APE than those below the actual value’. Similarly, Armstrong and Collopy argued that ‘the MAPE ... puts a heavier penalty on forecasts that exceed the actual than those that are less than the actual. For example, the MAPE is bounded on the low side by an error of 100%, but there is no bound on the high side’. The quoted (Makridakis, 1993) paper gives a nice example of the asymmetry: when the actual value is $150$ and the forecast is $100$, MAPE is $|\tfrac{150-100}{150}| = 33.33\%$, while when the actual value is $100$ and the forecast is $150$ MAPE is $|\tfrac{100-150}{100}| = 50\%$ despite the fact that both forecasts are wrong by $50$ units! What the above references, and a number of other sources, show is that if you use MAPE as a criterion for selecting your forecasts, this leads to biased and underestimated results. Moreover, you run into problems when the actual value is equal to zero. In the How to interpret error measures in Weka output? thread you can find a brief review of other error measures.
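Makridakis' example is easy to verify by hand. The two forecasts below are both wrong by 50 units, yet the over-forecast gets a 50% MAPE against 33.33% for the under-forecast:

```python
def mape(actual, forecast):
    """Mean absolute percentage error (in %) for paired values."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual) * 100

# same 50-unit error, different penalty depending on direction
under = mape([150], [100])  # forecast below the actual value
over = mape([100], [150])   # forecast above the actual value
print(under, over)          # ~33.33 vs 50.0
```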
25,641
Relationship between MLE and least squares in case of linear regression
The linear regression model $Y = X\beta + \epsilon$, where $\epsilon \sim N(0,I\sigma^2)$ $Y \in \mathbb{R}^{n}$, $X \in \mathbb{R}^{n \times p}$ and $\beta \in \mathbb{R}^{p}$ Note that our model error (residual) is ${\bf \epsilon = Y - X\beta}$. Our goal is to find a vector of $\beta$s that minimizes the squared $L_2$ norm of this error. Least Squares Given data $(x_1,y_1),...,(x_n,y_n)$ where each $x_{i}$ is $p$ dimensional, we seek to find: $$\widehat{\beta}_{LS} = {\underset \beta {\text{argmin}}} ||{\bf \epsilon}||^2 = {\underset \beta {\text{argmin}}} ||{\bf Y - X\beta}||^2 = {\underset \beta {\text{argmin}}} \sum_{i=1}^{n} ( y_i - x_{i}\beta)^2 $$ Maximum Likelihood Using the model above, we can set up the likelihood of the data given the parameters $\beta$ as: $$L(Y|X,\beta) = \prod_{i=1}^{n} f(y_i|x_i,\beta) $$ where $f(y_i|x_i,\beta)$ is the pdf of a normal distribution with mean $x_i\beta$ and variance $\sigma^2$. Plugging it in: $$L(Y|X,\beta) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(y_i - x_i\beta)^2}{2\sigma^2}}$$ Now generally when dealing with likelihoods it's mathematically easier to take the log before continuing (products become sums, exponentials go away), so let's do that. $$\log L(Y|X,\beta) = \sum_{i=1}^{n} \log(\frac{1}{\sqrt{2\pi\sigma^2}}) -\frac{(y_i - x_i\beta)^2}{2\sigma^2}$$ Since we want the maximum likelihood estimate, we want to find the maximum of the equation above with respect to $\beta$. The first term doesn't impact our estimate of $\beta$, so we can ignore it: $$ \widehat{\beta}_{MLE} = {\underset \beta {\text{argmax}}} \sum_{i=1}^{n} -\frac{(y_i - x_i\beta)^2}{2\sigma^2}$$ Note that the denominator is a constant with respect to $\beta$. Finally, notice the negative sign in front of the sum: maximizing the negative of a quantity is the same as minimizing the quantity itself.
In other words: $$ \widehat{\beta}_{MLE} = {\underset \beta {\text{argmin}}} \sum_{i=1}^{n} (y_i - x_i\beta)^2 = \widehat{\beta}_{LS}$$ Recall that for this to work, we had to make certain model assumptions (normality of error terms, 0 mean, constant variance). This makes least squares equivalent to MLE under certain conditions. See here and here for more discussion. For completeness, note that the solution can be written as: $${\bf \beta = (X^TX)^{-1}X^Ty} $$
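The equivalence can be checked numerically on synthetic data (the values below are made up for illustration): the closed-form least-squares solution and a direct numerical maximization of the Gaussian log-likelihood land on the same $\beta$.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, p = 100, 2
# design matrix with an intercept column
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# least squares: closed form (X^T X)^{-1} X^T y
beta_ls = np.linalg.solve(X.T @ X, X.T @ y)

# MLE: numerically maximize the Gaussian log-likelihood in beta
# (sigma is treated as known; it does not affect the argmax over beta)
def neg_loglik(beta, sigma=0.5):
    return -norm.logpdf(y - X @ beta, scale=sigma).sum()

beta_mle = minimize(neg_loglik, x0=np.zeros(p)).x
```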
25,642
Friedman test and post-hoc test for Python
I am currently looking into this issue myself; according to this paper there are a number of possibilities to perform post-hoc tests (Update: an extension regarding the use of non-parametric tests can be found here): Perform the Nemenyi test for all pairwise combinations; this is similar to the Tukey test for ANOVA. Perform the Bonferroni-Dunn test; in this setting one compares all values to a list of control values. Alternatively, one can perform step-up and step-down procedures that sequentially test hypotheses ordered by their significance: Holm's step-down procedure, Hochberg's step-up procedure, or Hommel's procedure. The STAC Python library seems to include all these tests, except for Hommel's procedure.
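Before any of those post-hoc procedures, the omnibus Friedman test itself is already in SciPy. A small sketch with made-up scores (rows = datasets/subjects, columns = algorithms/treatments); only if this rejects would one move on to the pairwise post-hoc tests above:

```python
import numpy as np
from scipy.stats import friedmanchisquare

# hypothetical scores: 6 datasets (rows) x 3 algorithms (columns)
scores = np.array([
    [0.82, 0.79, 0.85],
    [0.76, 0.74, 0.80],
    [0.91, 0.88, 0.93],
    [0.67, 0.66, 0.71],
    [0.85, 0.80, 0.88],
    [0.73, 0.70, 0.77],
])

# friedmanchisquare takes one 1-D sample per treatment
stat, p = friedmanchisquare(*scores.T)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")
```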
25,643
Friedman test and post-hoc test for Python
Complementing the other answer, since you asked about implementation of the post-hoc tests in Python: the Orange library implements the post-hoc tests (Nemenyi and Bonferroni-Dunn), including a function to draw a Critical Difference diagram [1] http://docs.orange.biolab.si/3/data-mining-library/reference/evaluation.cd.html (see the section "CD diagram") [1] Janez Demsar, Statistical Comparisons of Classifiers over Multiple Data Sets, 7(Jan):1–30, 2006.
25,644
Friedman test and post-hoc test for Python
You can perform any of the following tests with scikit-posthocs package: Conover, Nemenyi, Siegel, and Miller post-hoc tests.
25,645
How do you fit a Poisson distribution to table data?
By "fitting a distribution to the data" we mean that some distribution (i.e. a mathematical function) is used as a model that approximates the empirical distribution of the data you have. To fit a distribution to the data, you need to infer the distribution's parameters from the data. You can use software that does this for you automatically (e.g. fitdistrplus in R), or calculate it by hand from your data, e.g. using maximum likelihood (see the relevant entry in Wikipedia about the Poisson distribution). On the plot below you can see your data plotted with a fitted Poisson distribution. As you can see, the line doesn't fit perfectly, as it is only an approximation. Among other methods, one of the approaches to this problem is to use maximum likelihood. Recall that likelihood is a function of parameters for the fixed data and by maximizing this function we can find the "most likely" parameters given the data we have, i.e. $$ L(\lambda|x_1,\dots,x_n) = \prod_i f(x_i|\lambda) $$ where in your case $f$ is the Poisson probability mass function. The direct, numerical way to find an appropriate $\lambda$ would be to use an optimization algorithm. First you define the likelihood function, then ask the algorithm to find the point where it reaches its maximum: # negative log-likelihood (since this algorithm looks for minimum) llik <- function(lambda) -sum(dpois(x, lambda, log = TRUE)*y) opt.fit <- optimize(llik, c(0, 10))$minimum You can notice something odd about this code: I multiply dpois() by y. The data you have is provided in the form of a table, where for each value of $x_i$ we have an accompanying count $y_i$, while the likelihood function is defined in terms of raw data, rather than such tables. You could re-create the raw data from these values by repeating each of the $x_i$'s exactly $y_i$ times (i.e. rep(x, y) in R) and using this as input to your statistical software, but you can take a more clever approach. 
Likelihood is a product of $f(x_i|\lambda)$. Multiplying $f(x_i|\lambda)$ for identical $x_i$'s exactly $y_i$ times is the same as taking the $y_i$-th power of it: $f(x_i|\lambda)^{y_i}$. Here we are maximizing the log-likelihood (see here why we take the log), so $\prod_i f(x_i|\lambda)^{y_i}$ becomes: $\sum_i \log f(x_i|\lambda) \times y_i$. That is how we obtain the likelihood function for tabular data. However, there is a simpler way. We know that the empirical mean of the $x$'s is the maximum likelihood estimator of $\lambda$ (i.e. it gives the value of $\lambda$ that maximizes the likelihood), so rather than using optimization software, we can simply calculate the mean. Since you have data in the form of a table with counts, the most direct way to go is to use a weighted mean of the $x_i$'s, where the $y_i$'s are used as weights. mx <- sum(x*(y/sum(y))) This gives identical results to calculating the arithmetic mean from the raw data. Both maximizing the likelihood with an optimization algorithm and taking the mean lead to almost exactly the same results: > mx [1] 0.3995092 > opt.fit [1] 0.3995127 So the $y$'s are not mentioned anywhere in your notes as they are created artificially as a way of storing this data in aggregated form (as a table), rather than listing all the $4075$ raw $x$'s. As shown above, you can take advantage of having data in this format. The above procedures let you find the "best fitting" $\lambda$ and this is how you fit a distribution to the data -- by finding parameters of the distribution that make it fit the empirical data. You commented that it is still unclear to you why the $y_i$'s are considered as weights. Arithmetic mean can be considered as a special case of weighted mean where all the weights are the same and equal to $1/N$: $$ \frac{x_1 + \dots + x_n}{N} = \frac{1}{N} \left( x_1 + \dots + x_n \right) = \frac{1}{N}x_1 + \dots + \frac{1}{N}x_n $$ Now think of how your data is stored. 
$x_6 = 5$ and $y_6 = 4$ means that you have four fives $ x_6 = \{5,5,5,5\} $, $x_7 = 6$ and $y_7 = 2$ means $x_7 = \{6,6\}$, etc. When you calculate the mean, you first need to sum them, so: $5+5+5+5 = 5 \times 4 = x_6 \times y_6$. This leads to using the counts as weights in a weighted mean, which gives exactly the same result as the arithmetic mean of the raw data $$ \frac{x_1 y_1 + \dots + x_n y_n}{y_1 + \dots + y_n} = \\ \frac{x_1 y_1}{N} + \dots + \frac{x_n y_n}{N} = \\ \overbrace{ \frac{x_1}{N} + \dots + \frac{x_1}{N} }^{y_1 ~ \text{times}} + \dots + \overbrace{ \frac{x_n}{N} + \dots + \frac{x_n}{N} }^{y_n ~ \text{times}} $$ where $N = \sum_i y_i$. The same idea was applied to the likelihood function, which was weighted by the counts. What could be misleading here is that in some cases we use $x_i$ to denote the $i$-th observed value of $X$, while in your case $x_i$ is a specific value of $X$ that was observed $y_i$ times. As said before, this is just an alternative way of storing the same data.
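The same two approaches translate directly to Python. The counts below are made up for illustration (the original table isn't reproduced here), but as in the R code, the count-weighted mean and the count-weighted negative log-likelihood minimization agree:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# hypothetical tabulated data: values x with observed counts y
x = np.array([0, 1, 2, 3, 4, 5, 6])
y = np.array([2800, 900, 250, 80, 30, 10, 5])

# weighted mean of x with counts y as weights = ML estimate of lambda
lam_mean = np.sum(x * y) / y.sum()

# negative log-likelihood for tabular data: each term weighted by its count
def nll(lam):
    return -np.sum(poisson.logpmf(x, lam) * y)

lam_opt = minimize_scalar(nll, bounds=(1e-6, 10), method="bounded").x
print(lam_mean, lam_opt)  # the two estimates coincide
```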
How do you fit a Poisson distribution to table data?
By "fitting distribution to the data" we mean that some distribution (i.e. mathematical function) is used as a model, that can be used to approximate the empirical distribution of the data you have. I
How do you fit a Poisson distribution to table data? By "fitting a distribution to the data" we mean that some distribution (i.e. mathematical function) is used as a model that can approximate the empirical distribution of the data you have. If you are fitting a distribution to the data, you need to infer the distribution parameters from the data. You can do this by using software that will do it for you automatically (e.g. fitdistrplus in R), or by calculating it by hand from your data, e.g. using maximum likelihood (see the relevant entry in Wikipedia about the Poisson distribution). On the plot below you can see your data plotted with the fitted Poisson distribution. As you can see, the line doesn't fit perfectly, as it is only an approximation. Among other methods, one of the approaches to this problem is to use maximum likelihood. Recall that likelihood is a function of parameters for the fixed data, and by maximizing this function we can find the "most likely" parameters given the data we have, i.e. $$ L(\lambda|x_1,\dots,x_n) = \prod_i f(x_i|\lambda) $$ where in your case $f$ is the Poisson probability mass function. The direct, numerical way to find an appropriate $\lambda$ would be to use an optimization algorithm. For this you first define the likelihood function and then ask the algorithm to find the point where the function reaches its maximum: # negative log-likelihood (since this algorithm looks for a minimum) llik <- function(lambda) -sum(dpois(x, lambda, log = TRUE)*y) opt.fit <- optimize(llik, c(0, 10))$minimum You can notice something odd about this code: I multiply dpois() by y. The data you have is provided in the form of a table, where for each value $x_i$ we have an accompanying count $y_i$, while the likelihood function is defined in terms of raw data rather than such tables. You could re-create the raw data from these values by repeating each of the $x_i$'s exactly $y_i$ times (i.e. rep(x, y) in R) and using this as input to your statistical software, but you could take a more clever approach. The likelihood is a product of $f(x_i|\lambda)$. Multiplying $f(x_i|\lambda)$ for identical $x_i$'s exactly $y_i$ times is the same as taking the $y_i$-th power of it: $f(x_i|\lambda)^{y_i}$. Here we are maximizing the log-likelihood (see here why we take the log), so $\prod_i f(x_i|\lambda)^{y_i}$ becomes $\sum_i \log f(x_i|\lambda) \times y_i$. That is how we obtained the likelihood function for tabular data. However, there is a simpler way to go. We know that the empirical mean of the $x$'s is the maximum likelihood estimator of $\lambda$ (i.e. it lets us estimate the value of $\lambda$ that maximizes the likelihood), so rather than using optimization software, we can simply calculate the mean. Since you have data in the form of a table with counts, the most direct way to go is to use the weighted mean of the $x_i$'s, where the $y_i$'s are used as weights. mx <- sum(x*(y/sum(y))) This leads to identical results as if you had calculated the arithmetic mean from the raw data. Both maximizing the likelihood with an optimization algorithm and taking the mean lead to almost exactly the same results: > mx [1] 0.3995092 > opt.fit [1] 0.3995127 So the $y$'s are not mentioned anywhere in your notes because they are created artificially as a way of storing this data in aggregated form (as a table), rather than listing all $4075$ raw $x$'s. As shown above, you can take advantage of having data in this format. The above procedures let you find the "best fitting" $\lambda$, and this is how you fit a distribution to the data: by finding such parameters of the distribution that make it fit the empirical data. You commented that it is still unclear to you why the $y_i$'s are considered as weights. The arithmetic mean can be considered as a special case of the weighted mean where all the weights are the same and equal to $1/N$: $$ \frac{x_1 + \dots + x_n}{N} = \frac{1}{N} \left( x_1 + \dots + x_n \right) = \frac{1}{N}x_1 + \dots + \frac{1}{N}x_n $$ Now think of how your data is stored. $x_6 = 5$ and $y_6 = 4$ means that you have four fives $ x_6 = \{5,5,5,5\} $, $x_7 = 6$ and $y_7 = 2$ means $x_7 = \{6,6\}$, etc. When you calculate the mean, you first need to sum them, so: $5+5+5+5 = 5 \times 4 = x_6 \times y_6$. This leads to using counts as weights for a weighted mean, giving exactly the same result as the arithmetic mean of the raw data $$ \frac{x_1 y_1 + \dots + x_n y_n}{y_1 + \dots + y_n} = \\ \frac{x_1 y_1}{N} + \dots + \frac{x_n y_n}{N} = \\ \overbrace{ \frac{x_1}{N} + \dots + \frac{x_1}{N} }^{y_1 ~ \text{times}} + \dots + \overbrace{ \frac{x_n}{N} + \dots + \frac{x_n}{N} }^{y_n ~ \text{times}} $$ where $N = \sum_i y_i$. The same idea was applied to the likelihood function, which was weighted by the counts. What could be misleading here is that in some cases we use $x_i$ to denote the $i$-th observed value of $X$, while in your case $x_i$ is a specific value of $X$ that was observed $y_i$ times. As was said before, this is just an alternative way of storing the same data.
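The weighted-mean shortcut can be cross-checked with a short sketch in Python rather than R (the frequency table below is hypothetical, not the question's actual data): a crude grid search over the negative log-likelihood lands on the same $\lambda$ as the weighted mean.

```python
import math

# Hypothetical frequency table: value x_i was observed y_i times
x = [0, 1, 2, 3, 4]
y = [2500, 1000, 400, 100, 75]

N = sum(y)
# MLE of a Poisson rate is the (weighted) sample mean
lam_hat = sum(xi * yi for xi, yi in zip(x, y)) / N

def neg_log_lik(lam):
    # -sum_i y_i * log f(x_i | lam), with f the Poisson pmf
    return -sum(yi * (xi * math.log(lam) - lam - math.log(math.factorial(xi)))
                for xi, yi in zip(x, y))

# crude grid search over lambda confirms the weighted mean maximizes the likelihood
lam_opt = min((i / 1000 for i in range(1, 3000)), key=neg_log_lik)
print(lam_hat, lam_opt)
```

The two values agree up to the grid resolution, which is exactly the point of the answer: for the Poisson, the weighted mean *is* the maximum-likelihood estimate.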
25,646
How do you fit a Poisson distribution to table data?
I guess the answer is to find the mean of the data, which will be the lambda of the Poisson process. Given that the data comes in a frequency table, find the expected value / weighted average, which, as explained above, is the same as the arithmetic average of the raw data.
25,647
Measure the uniformity of a distribution over weekdays
The earth mover distance, also known as the Wasserstein metric, measures the distance between two histograms. Essentially, it considers one histogram as a number of piles of dirt and then assesses how much dirt one needs to move and how far (!) to turn this histogram into the other. You would measure the distance between your distribution and a uniform one over the days of the week. This of course accounts for the nearness of days - it's easier to move "dirt" from Monday to Tuesday than from Monday to Thursday, so (1/2,0,0,1/2,0,0,0) would have a lower earth mover distance from the uniform distribution than a histogram that is concentrated on Monday and Tuesday. What this does not do is consider the "circularity" of the week, i.e., that Saturday and Sunday are as close together as are Sunday and Monday. For that, you would need to look for an earth mover distance defined on circular probability mass distributions. This should be doable using a suitable optimization approach. EDIT: In R, the emd package calculates earth mover distances between histograms. You can address the "circularity" issue in a fairly simple (though ad-hoc) way. Calculate an earth mover distance $d_1$ between your distribution and a uniform distribution on Monday through Sunday. Calculate a distance $d_2$ against a uniform distribution on Tuesday through Monday. Calculate a distance $d_3$ against a uniform distribution on Wednesday through Tuesday. ... Finally, as the final distance, use the mean of $d_1, \dots, d_7$. This takes care of the circularity at the expense of a couple of additional calculations. 2nd EDIT: this is not the circular earth mover distance as such. For that, you'd need to look through some of the literature a search will turn up. If the best way to move dirt between days involves moving it two days from Saturday to Monday, this will show up in five out of the seven $d_i$, but not in the remaining two (where the dirt will need to be moved five days). 
However, I'd still consider this a potentially useful way to at least consider the circularity in some manner - certainly better than just using a single histogram and defining the week as going from Sunday to Saturday or in some other arbitrary manner. Plus, while some links above turn up implementations for the circular earth mover distance, I'm not aware of one for R, which is probably the most-used language here.
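For equal-mass 1-D histograms, the earth mover distance reduces to summing absolute running cumulative differences, and the rotation-averaging trick described above is a one-liner on top of that. A Python sketch (the weekday counts are invented for illustration):

```python
# Hypothetical weekday counts (Monday..Sunday), invented for illustration
counts = [30, 5, 5, 5, 5, 5, 30]
total = sum(counts)
p = [c / total for c in counts]
u = [1 / 7] * 7  # uniform distribution over the week

def emd_1d(a, b):
    # For 1-D histograms of equal total mass, the earth mover distance
    # equals the sum of absolute running cumulative differences.
    d, cum = 0.0, 0.0
    for ai, bi in zip(a, b):
        cum += ai - bi
        d += abs(cum)
    return d

# Ad-hoc circular version from the answer: since the uniform target is
# rotation invariant, computing the distance against "uniform on Tuesday
# through Monday" etc. amounts to re-cutting the circle, i.e. rotating
# the data histogram; average the linear EMD over all 7 starting days.
d_lin = emd_1d(p, u)
d_circ = sum(emd_1d(p[r:] + p[:r], u) for r in range(7)) / 7
print(d_lin, d_circ)
```

Note this rotation average is the ad-hoc scheme from the answer, not the true circular Wasserstein distance.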
25,648
Why proximal gradient descent instead of plain subgradient methods for Lasso?
An approximate solution can indeed be found for lasso using subgradient methods. For example, say we want to minimize the following loss function: $$f(w; \lambda) = \| y - Xw \|_2^2 + \lambda \|w\|_1$$ The gradient of the penalty term is $-\lambda$ for $w_i < 0$ and $\lambda$ for $w_i > 0$, but the penalty term is nondifferentiable at $0$. Instead, we can use the subgradient $\lambda \text{sgn}(w)$, which is the same but has a value of $0$ for $w_i = 0$. The corresponding subgradient for the loss function is: $$g(w; \lambda) = -2X^T (y - X w) + \lambda \text{sgn}(w)$$ We can minimize the loss function using an approach similar to gradient descent, but using the subgradient (which is equal to the gradient everywhere except $0$, where the gradient is undefined). The solution can be very close to the true lasso solution, but may not contain exact zeros--where weights should have been zero, they may take extremely small values instead. This lack of true sparsity is one reason not to use subgradient methods for lasso. Dedicated solvers take advantage of the problem structure to produce truly sparse solutions in a computationally efficient way. This post says that, besides producing sparse solutions, dedicated methods (including proximal gradient methods) have faster convergence rates than subgradient methods. He gives some references.
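To make the "no exact zeros" point concrete, here is a deliberately tiny, hypothetical 1-D example in Python: the data are chosen so that the exact lasso solution is $w = 0$ (which a single soft-thresholding, i.e. proximal, step recovers), while constant-step subgradient descent keeps oscillating at small nonzero values.

```python
# Tiny hypothetical 1-D lasso where the exact solution is w = 0
x = [1.0, 2.0, 3.0, 4.0]
y = [0.5, -0.2, 0.1, -0.4]
lam = 10.0

def sign(w):
    return (w > 0) - (w < 0)

xy = sum(xi * yi for xi, yi in zip(x, y))  # x'y = -1.2
xx = sum(xi * xi for xi in x)              # x'x = 30

# constant-step subgradient descent on f(w) = ||y - x w||^2 + lam*|w|,
# using g(w) = -2 x'(y - x w) + lam * sgn(w) as in the answer
w_sub = 1.0
for _ in range(2000):
    g = -2 * (xy - xx * w_sub) + lam * sign(w_sub)
    w_sub -= 0.01 * g

# closed-form 1-D lasso solution via soft thresholding (a proximal step):
# w* = S(x'y, lam/2) / x'x, where S shrinks toward zero
w_prox = sign(xy) * max(abs(xy) - lam / 2, 0) / xx

# w_sub ends up small but nonzero (it oscillates around 0); w_prox is exactly 0
print(w_sub, w_prox)
```

Since $|x'y| = 1.2 < \lambda/2 = 5$, the soft-threshold zeroes the weight exactly, while the subgradient iterate never settles at $0$ with a constant step.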
25,649
L2-norms of gradients increasing during training of deep neural network
I think I figured out what the problem with the gradient norm is. The negative gradient points in the direction of a local minimum, but it doesn't say how far away it is. For this reason you can configure the step size. When your weight combination is close to the minimum, a constant step can be bigger than necessary, so the update sometimes goes in the wrong direction, and in the next epoch the network tries to fix it. The Momentum algorithm uses a modified approach: after each iteration it increases the weight update if the sign of the gradient stays the same (via an additional term added to the $\Delta w$ value). In terms of vectors, this addition can increase the magnitude of the vector and change its direction as well, so you can miss the ideal step by even more. To fix this, the network sometimes needs a bigger vector, because the minimum is a little farther away than in the previous epoch. To test this theory I built a small experiment. First I reproduce the same behaviour, but with a simpler network architecture and fewer iterations. import numpy as np from numpy.linalg import norm import matplotlib.pyplot as plt from sklearn.datasets import make_regression from sklearn import preprocessing from sklearn.pipeline import Pipeline from neupy import algorithms plt.style.use('ggplot') grad_norm = [] def train_epoch_end_signal(network): global grad_norm # Get gradient for the last layer grad_norm.append(norm(network.gradients[-1])) data, target = make_regression(n_samples=10000, n_features=50, n_targets=1) target_scaler = preprocessing.MinMaxScaler() target = target_scaler.fit_transform(target) mnet = Pipeline([ ('scaler', preprocessing.MinMaxScaler()), ('momentum', algorithms.Momentum( (50, 30, 1), step=1e-10, show_epoch=1, shuffle_data=True, verbose=False, train_epoch_end_signal=train_epoch_end_signal, )), ]) mnet.fit(data, target, momentum__epochs=100) After training I plotted all the gradient norms. Below you can see behaviour similar to yours. plt.figure(figsize=(12, 8)) plt.plot(grad_norm) plt.title("Momentum algorithm final layer gradient 2-Norm") plt.ylabel("Gradient 2-Norm") plt.xlabel("Epoch") plt.show() Also, if you look closer at the training results after each epoch, you will find that the errors vary as well. plt.figure(figsize=(12, 8)) network = mnet.steps[-1][1] network.plot_errors() plt.show() Next, using almost the same settings, I create another network, but this time I select the golden-section search algorithm for step selection at each epoch. grad_norm = [] def train_epoch_end_signal(network): global grad_norm # Get gradient for the last layer grad_norm.append(norm(network.gradients[-1])) if network.epoch % 20 == 0: print("Epoch #{}: step = {}".format(network.epoch, network.step)) mnet = Pipeline([ ('scaler', preprocessing.MinMaxScaler()), ('momentum', algorithms.Momentum( (50, 30, 1), step=1e-10, show_epoch=1, shuffle_data=True, verbose=False, train_epoch_end_signal=train_epoch_end_signal, optimizations=[algorithms.LinearSearch] )), ]) mnet.fit(data, target, momentum__epochs=100) The output below shows the step at every 20th epoch. Epoch #0: step = 0.5278640466583575 Epoch #20: step = 1.103484809236065e-13 Epoch #40: step = 0.01315561773591515 Epoch #60: step = 0.018180616551587894 Epoch #80: step = 0.00547810271094794 If you then look closer at the results after that training, you will find that the variation in the 2-norm is much smaller plt.figure(figsize=(12, 8)) plt.plot(grad_norm) plt.title("Momentum algorithm final layer gradient 2-Norm") plt.ylabel("Gradient 2-Norm") plt.xlabel("Epoch") plt.show() This optimization also reduces the variation of the errors plt.figure(figsize=(12, 8)) network = mnet.steps[-1][1] network.plot_errors() plt.show() As you can see, the main problem with the gradient is the step length. It's important to note that even with high variation, your network can still improve its prediction accuracy after each iteration.
25,650
L2-norms of gradients increasing during training of deep neural network
This is a normal situation in the training of convolutional NNs. Your loss function decreases, but the loss gradients go up. This is true both for vanilla SGD and for SGD with momentum. The reason for this is that the initial learning rate is too high for some layers. There is a very good tutorial by Ian Goodfellow on optimization of NNs, where he explains why this happens: http://videolectures.net/deeplearning2015_goodfellow_network_optimization/ , minute 29
25,651
Estimate variance of a population if population mean is known
Yes, it is true. In the language of statistics, we would say that if you have no knowledge of the population mean, then the quantity $$\frac{1}{n-1} \sum_{i=1}^n \left(x_i-\bar{x} \right)^2$$ is unbiased, which simply means that it estimates the population variance correctly on average. But if you do know the population mean, there is no need to use an estimate for it (this is what $\bar{x}$ serves for), nor the finite-sample correction that comes with that estimate. In fact, it can be shown that the quantity $$\frac{1}{n} \sum_{i=1}^n \left(x_i-\mu \right)^2$$ is not only unbiased but also has lower variance than the quantity above. This is quite intuitive, as part of the uncertainty has now been removed. So we use this one in this situation. It is worth noting that the estimators will differ very little in large sample sizes and hence they are asymptotically equivalent.
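A quick Monte Carlo sketch in Python (simulated normal data, purely illustrative) shows both claims: each estimator averages to the true variance, and the known-mean version has the smaller spread.

```python
import random
import statistics

random.seed(1)
mu, n, reps = 5.0, 10, 20000  # true variance is 4 (sd = 2)

est_known, est_unknown = [], []
for _ in range(reps):
    x = [random.gauss(mu, 2.0) for _ in range(n)]
    xbar = sum(x) / n
    # known population mean: divide by n
    est_known.append(sum((xi - mu) ** 2 for xi in x) / n)
    # estimated mean: divide by n - 1 (Bessel's correction)
    est_unknown.append(sum((xi - xbar) ** 2 for xi in x) / (n - 1))

# both estimators average to about 4 (unbiased);
# the known-mean estimator has the smaller sampling variance
print(statistics.mean(est_known), statistics.mean(est_unknown))
print(statistics.variance(est_known), statistics.variance(est_unknown))
```

For normal data the theoretical sampling variances are $2\sigma^4/n$ and $2\sigma^4/(n-1)$ respectively, which is what the simulation approximates.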
25,652
Is the median a "metric" or a "topological" property?
The flaw in your reasoning is the assumption that something that depends on a metric cannot be a topological property. Take compactness of metric spaces. This can be defined in terms of the metric: compactness means that the space is complete (depends on the metric) and totally bounded (depends on the metric). It turns out, though, that this property is an invariant under homeomorphism, and indeed can be defined in terms of only the topology (finite subcovers of any cover, the usual way). Another example is the various homology theories. Only singular homology is truly topological in its definition. All the others, simplicial, cellular, De Rham (cohomology, but grant me a little looseness), etc., depend on extra structure, but turn out to be equivalent (and quite a bit easier to work with). This comes up a lot in math: sometimes the easiest way to go about defining something is in terms of some ancillary structure, and then it is demonstrated that the resulting entity does not, in fact, depend on the choice of ancillary structure at all.
25,653
Equivalent to Welch's t-test in GLS framework
It's an interesting question. One thing to note is that allowing unequal variances will only change the $t$-statistic if the groups are of unequal size. If the two groups are of equal size (i.e., $n_1=n_2=n$), then Welch's $t$-test (denoted $t_w$) and Student's $t$-test (denoted $t_s$) give the same test statistic, since $$ t_w= \frac{\bar{y}_1-\bar{y}_2}{\sqrt{\frac{s^2_1}{n_1}+\frac{s^2_2}{n_2}}}= \frac{\bar{y}_1-\bar{y}_2}{\sqrt{\frac{s^2_1+s^2_2}{n}}}= \frac{\bar{y}_1-\bar{y}_2}{\sqrt{(\frac{s^2_1+s^2_2}{2})(\frac{2}{n})}}= t_s $$ I point this out because the sleep study example you give in your post involves equal group sizes, which is why running your example returns the same $t$-statistic in all cases. Anyway, to answer your question, this can be done in nlme::gls() by using the weights parameter combined with nlme::varIdent(). Below I generate some data with unequal group sizes and unequal variances, then show how to fit the models assuming or not assuming equal variance, using both t.test and a regression function (lm or gls): # generate data with unequal group sizes and unequal variances set.seed(497203) dat <- data.frame(group=rep.int(c("A","B"), c(10,20)), y = rnorm(30, mean=rep.int(c(0,1), c(10,20)), sd=rep.int(c(1,2),c(10,20)))) # the t-statistic assuming equal variances t.test(y ~ group, data = dat, var.equal = TRUE) summary(lm(y ~ group, data = dat)) # the t-statistic not assuming equal variances t.test(y ~ group, data = dat, var.equal = FALSE) library(nlme) summary(gls(y ~ group, data = dat, weights=varIdent(form = ~ 1 | group))) # a hack to achieve the same thing in lmer # (lmerControl options are needed to prevent lmer from complaining # about too many levels of the grouping variable) dat <- transform(dat, obs=factor(1:nrow(dat)), dummy=as.numeric(group=="B")) library('lme4') summary(lmer(y ~ group + (dummy-1|obs), data=dat, control=lmerControl(check.nobs.vs.nlev = "ignore", check.nobs.vs.nRE = "ignore"))) You also asked about getting the same 
degrees of freedom. The degrees of freedom are based on the Satterthwaite approximation, and t.test applies the approximation by default as that is part of the solution described by Welch. But gls does not do so. Theoretically this could be done, and I believe PROC MIXED in SAS will do so, so you should be able to reproduce the results exactly in PROC MIXED. Maybe (probably) there is some R package that will make it easy to get the Satterthwaite DFs for general regression models (with continuous predictors), but I don't know what it is. Update by @amoeba: The Satterthwaite approximation is implemented as the default one in the lmerTest package, so to get $p$-value exactly matching Welch's t-test one can run: library('lmerTest') summary(lmer(y ~ group + (dummy-1|obs), data=dat, control=lmerControl(check.nobs.vs.nlev = "ignore", check.nobs.vs.nRE = "ignore")))
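The equal-$n$ identity at the start of this answer is easy to verify numerically. A Python sketch with simulated data (any two equal-sized samples would do; the formulas below are the textbook Student and Welch statistics, not any package's internals):

```python
import random
import statistics

def t_student(a, b):
    # pooled-variance two-sample t statistic
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (sp2 * (1/na + 1/nb)) ** 0.5

def t_welch(a, b):
    # Welch's unpooled two-sample t statistic
    na, nb = len(a), len(b)
    se = (statistics.variance(a) / na + statistics.variance(b) / nb) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

random.seed(2)
a = [random.gauss(0, 1) for _ in range(12)]
b = [random.gauss(1, 2) for _ in range(12)]
c = b[:6]

# equal group sizes: the two statistics coincide; unequal sizes: they differ
print(t_student(a, b), t_welch(a, b))
print(t_student(a, c), t_welch(a, c))
```

With equal sizes the two values agree to floating-point precision; truncating one group breaks the identity, which is why the original post's equal-sized sleep data could not distinguish the procedures.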
Equivalent to Welch's t-test in GLS framework
It's an interesting question. One thing to note is that allowing unequal variances will only change the $t$-statistic if the groups are of unequal size. If the two groups are of equal size (i.e., $n_1
Equivalent to Welch's t-test in GLS framework

It's an interesting question. One thing to note is that allowing unequal variances will only change the $t$-statistic if the groups are of unequal size. If the two groups are of equal size (i.e., $n_1=n_2=n$), then Welch's $t$-test (denoted $t_w$) and Student's $t$-test (denoted $t_s$) give the same test statistic, since
$$
t_w= \frac{\bar{y}_1-\bar{y}_2}{\sqrt{\frac{s^2_1}{n_1}+\frac{s^2_2}{n_2}}}= \frac{\bar{y}_1-\bar{y}_2}{\sqrt{\frac{s^2_1+s^2_2}{n}}}= \frac{\bar{y}_1-\bar{y}_2}{\sqrt{\left(\frac{s^2_1+s^2_2}{2}\right)\left(\frac{2}{n}\right)}}= t_s
$$
I point this out because the sleep study example you give in your post involves equal group sizes, which is why running your example returns the same $t$-statistic in all cases.

Anyway, to answer your question, this can be done in nlme::gls() by using the weights parameter combined with nlme::varIdent(). Below I generate some data with unequal group sizes and unequal variances, then show how to fit the models assuming or not assuming equal variance, using both t.test and a regression function (lm or gls):

    # generate data with unequal group sizes and unequal variances
    set.seed(497203)
    dat <- data.frame(group = rep.int(c("A", "B"), c(10, 20)),
                      y = rnorm(30, mean = rep.int(c(0, 1), c(10, 20)),
                                sd = rep.int(c(1, 2), c(10, 20))))

    # the t-statistic assuming equal variances
    t.test(y ~ group, data = dat, var.equal = TRUE)
    summary(lm(y ~ group, data = dat))

    # the t-statistic not assuming equal variances
    t.test(y ~ group, data = dat, var.equal = FALSE)
    library(nlme)
    summary(gls(y ~ group, data = dat, weights = varIdent(form = ~ 1 | group)))

    # a hack to achieve the same thing in lmer
    # (lmerControl options are needed to prevent lmer from complaining
    # about too many levels of the grouping variable)
    dat <- transform(dat, obs = factor(1:nrow(dat)), dummy = as.numeric(group == "B"))
    library('lme4')
    summary(lmer(y ~ group + (dummy - 1 | obs), data = dat,
                 control = lmerControl(check.nobs.vs.nlev = "ignore",
                                       check.nobs.vs.nRE = "ignore")))

You also asked about getting the same degrees of freedom. The degrees of freedom are based on the Satterthwaite approximation, and t.test applies the approximation by default as that is part of the solution described by Welch. But gls does not do so. Theoretically this could be done, and I believe PROC MIXED in SAS will do so, so you should be able to reproduce the results exactly in PROC MIXED. Maybe (probably) there is some R package that will make it easy to get the Satterthwaite DFs for general regression models (with continuous predictors), but I don't know what it is.

Update by @amoeba: The Satterthwaite approximation is implemented as the default one in the lmerTest package, so to get a $p$-value exactly matching Welch's t-test one can run:

    library('lmerTest')
    summary(lmer(y ~ group + (dummy - 1 | obs), data = dat,
                 control = lmerControl(check.nobs.vs.nlev = "ignore",
                                       check.nobs.vs.nRE = "ignore")))
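The equal-$n$ identity at the top of this answer is easy to check numerically. Here is a small sketch in Python (using scipy for convenience; the thread's own code is R, and the group sizes and means below are arbitrary illustration values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# two groups of EQUAL size but unequal spread
a = rng.normal(0.0, 1.0, size=15)
b = rng.normal(1.0, 2.0, size=15)

# Student's t (pooled variance) vs Welch's t (unpooled)
t_student, _ = stats.ttest_ind(a, b, equal_var=True)
t_welch, _ = stats.ttest_ind(a, b, equal_var=False)

# with n1 == n2 the two statistics coincide (only the df differ)
print(t_student, t_welch)
```

With unequal group sizes the two statistics diverge, which is exactly the situation the generated data above is designed to show.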
Equivalent to Welch's t-test in GLS framework It's an interesting question. One thing to note is that allowing unequal variances will only change the $t$-statistic if the groups are of unequal size. If the two groups are of equal size (i.e., $n_1
25,654
Equivalent to Welch's t-test in GLS framework
If m2 is the gls object from Jake Westfall's answer, then the Satterthwaite df and associated p-value are computed using contrast(emmeans(m2)) from the emmeans package. In Jake's example, the unadjusted and adjusted df are very similar, so the p-value and any interpretation is effectively the same. Here is an example where it matters (I use a smaller n and reverse the group with the larger variance).

    library(nlme)
    library(emmeans)
    set.seed(497203)
    n1 <- 8
    n2 <- 4
    dat <- data.frame(group = rep.int(c("A", "B"), c(n1, n2)),
                      y = rnorm(n1 + n2, mean = rep.int(c(0, 1), c(n1, n2)),
                                sd = rep.int(c(1, 2), c(n1, n2))))

    # the t-statistic assuming equal variances
    t.student <- t.test(y ~ group, data = dat, var.equal = TRUE)
    m1 <- lm(y ~ group, data = dat)

    # the t-statistic not assuming equal variances
    t.welch <- t.test(y ~ group, data = dat, var.equal = FALSE)
    m2 <- gls(y ~ group, data = dat, weights = varIdent(form = ~ 1 | group))
    m2.contrast <- contrast(emmeans(m2, specs = "group"))

Gathering the df, t, and p from each into a single table gives:

    method       df                 t                  p
    Student t    10                 -2.3012090076821   0.0441633525165716
    lm           10                  2.3012090076821   0.0441633525165716
    Welch t      3.91598345952776   -1.87308830595972  0.135881711655436
    gls          10                  1.87308831515667  0.0905471567272453
    emmeans      3.9158589862009    -1.87308831515667  0.13588402534431
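The Satterthwaite df that emmeans reports comes from the Welch–Satterthwaite formula, which is simple enough to compute by hand. A sketch in Python (the sample variances and group sizes below are arbitrary illustration values, not the thread's data):

```python
def welch_satterthwaite_df(v1, n1, v2, n2):
    """Welch-Satterthwaite approximate df for the difference of two means,
    given sample variances v1, v2 and group sizes n1, n2."""
    a, b = v1 / n1, v2 / n2
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))

# small unequal groups with unequal variances, as in the example above
df = welch_satterthwaite_df(v1=1.0, n1=8, v2=4.0, n2=4)
print(df)  # lies between min(n1, n2) - 1 and n1 + n2 - 2
```

In the equal-variance, equal-$n$ case the formula collapses to the usual $n_1 + n_2 - 2$, which is why the adjustment only matters when the groups are unbalanced in size or spread.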
25,655
How should I model a continuous dependent variable in the $[0, \infty]$ range?
Censored vs. inflated vs. hurdle

Censored, hurdle, and inflated models work by adding a point mass on top of an existing probability density. The difference lies in where the mass is added, and how. For now, just consider adding a point mass at 0, but the concept generalizes easily to other cases. All of them imply a two-step data generating process for some variable $Y$:

1. Draw to determine whether $Y = 0$ or $Y > 0$.
2. If $Y > 0$, draw to determine the value of $Y$.

Inflated and hurdle models

Both inflated (usually zero-inflated) and hurdle models work by explicitly and separately specifying $\operatorname{Pr}(Y = 0) = \pi$, so that the DGP becomes:

1. Draw once from $Z \sim Bernoulli(1 - \pi)$ to obtain realization $z$.
2. If $z = 0$, set $y = z = 0$.
3. If $z = 1$, draw once from $Y^* \sim D^*(\theta^*)$ and set $y = y^*$.

In an inflated model, $\operatorname{Pr}(Y^* = 0) > 0$. In a hurdle model, $\operatorname{Pr}(Y^* = 0) = 0$. That's the only difference. Both of these models lead to a density with the following form:
$$ f_D(y) = \mathbb{I}(y = 0) \cdot \operatorname{Pr}(Y = 0) + \mathbb{I}(y > 0) \cdot \operatorname{Pr}(Y > 0) \cdot f_{D^*}(y) $$
where $\mathbb{I}$ is an indicator function. That is, a point mass is simply added at zero, and in this case that mass is simply $\operatorname{Pr}(Z = 0) = \pi$. You are free to estimate $\pi$ directly, or to set $g(\pi) = X\beta$ for some invertible $g$ like the logit function. $D^*$ can also depend on $X\beta$. In that case, the model works by "layering" a logistic regression for $Z$ under another regression model for $Y^*$.

Censored models

Censored models also add mass at a boundary. They accomplish this by "cutting off" a probability distribution, and then "bunching up" the excess at that boundary. The easiest way to conceptualize these models is in terms of a latent variable $Y^* \sim D^*$ with CDF $F_{D^*}$. Then $\operatorname{Pr}(Y^* \leq y^*) = F_{D^*}(y^*)$. This is a very general model; regression is the special case in which $F_{D^*}$ depends on $X\beta$. The observed $Y$ is then assumed to be related to $Y^*$ by:
$$ Y = \begin{cases} 0 & Y^* \leq 0 \\ Y^* & Y^* > 0 \end{cases} $$
This implies a density of the form
$$ f_D(y) = \mathbb{I}(y = 0) \cdot F_{D^*}(0) + \mathbb{I}(y > 0) \cdot \left(1 - F_{D^*}(0)\right) \cdot f_{D^*}(y) $$
and can be easily extended.

Putting it together

Look at the densities:
$$\begin{align} f_D(y) &= \mathbb{I}(y = 0) \cdot \pi &+ \ &\mathbb{I}(y > 0) \cdot \left(1 - \pi\right) &\cdot \ &f_{D^*}(y) \\ f_D(y) &= \mathbb{I}(y = 0) \cdot F_{D^*}(0) &+ \ &\mathbb{I}(y > 0) \cdot \left(1 - F_{D^*}(0)\right) &\cdot \ &f_{D^*}(y) \end{align}$$
and notice that they both have the same form:
$$ \mathbb{I}(y = 0) \cdot \delta + \mathbb{I}(y > 0) \cdot \left(1 - \delta\right) \cdot f_{D^*}(y) $$
because they accomplish the same goal: building the density for $Y$ by adding a point mass $\delta$ to the density for some $Y^*$. The inflated/hurdle model sets $\delta$ by way of an external Bernoulli process. The censored model determines $\delta$ by "cutting off" $Y^*$ at a boundary, and then "clumping" the left-over mass at that boundary.

In fact, you can always postulate a hurdle model that "looks like" a censored model. Consider a hurdle model where $D^*$ is parameterized by $\mu = X\beta$ and $Z$ is parameterized by $g(\pi) = X\beta$. Then you can just set $g = F_{D^*}^{-1}$. An inverse CDF is always a valid link function in logistic regression, and indeed one reason logistic regression is called "logistic" is that the standard logit link is actually the inverse CDF of the standard logistic distribution. You can come full circle on this idea, as well: Bernoulli regression models with any inverse CDF link (like the logit or probit) can be conceptualized as latent variable models with a threshold for observing 1 or 0. Censored regression is a special case of hurdle regression where the implied latent variable $Z^*$ is the same as $Y^*$.

Which one should you use?

If you have a compelling "censoring story," use a censored model. One classic usage of the Tobit model -- the econometric name for censored Gaussian linear regression -- is for modeling survey responses that are "top-coded." Wages are often reported this way, where all wages above some cutoff, say 100,000, are just coded as 100,000. This is not the same thing as truncation, where individuals with wages above 100,000 are not observed at all. That might occur in a survey that is only administered to individuals with wages under 100,000. Another use for censoring, as described by whuber in the comments, is when you are taking measurements with an instrument that has limited precision. Suppose your distance-measuring device could not tell the difference between 0 and $\epsilon$. Then you could censor your distribution at $\epsilon$.

Otherwise, a hurdle or inflated model is a safe choice. It usually isn't wrong to hypothesize a general two-step data generating process, and it can offer some insight into your data that you might not have had otherwise. On the other hand, you can use a censored model without a censoring story to create the same effect as a hurdle model without having to specify a separate "on/off" process. This is the approach of Sigrist and Stahel (2010), who censor a shifted gamma distribution just as a way to model data in $[0, 1]$. That paper is particularly interesting because it demonstrates how modular these models are: you can actually zero-inflate a censored model (section 3.3), or you can extend the "latent variable story" to several overlapping latent variables (section 3.1).

Truncation

Edit: removed, because this solution was incorrect
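The censored density above can be illustrated by simulating the latent-variable story and checking that the point mass at the boundary matches $F_{D^*}(0)$. A sketch in Python with a Gaussian latent variable (the parameter values are arbitrary; this is not part of the original answer):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
mu, sigma, n = 0.5, 1.0, 200_000

# latent variable Y* ~ N(mu, sigma); observed Y = max(Y*, 0)
y_star = rng.normal(mu, sigma, size=n)
y = np.maximum(y_star, 0.0)

# the censored model predicts Pr(Y = 0) = F_{D*}(0)
empirical_mass = np.mean(y == 0.0)
predicted_mass = stats.norm.cdf(0.0, loc=mu, scale=sigma)
print(empirical_mass, predicted_mass)
```

The rest of the distribution is just the latent density above 0, "clumped" with the excess mass at the boundary, exactly as the formula says.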
25,656
How should I model a continuous dependent variable in the $[0, \infty]$ range?
Let me start by saying that applying OLS is entirely possible; many real-life applications do this. It sometimes causes the problem that you can end up with fitted values less than 0 -- I assume this is what you are worried about? But if only very few fitted values are below 0, then I would not worry about it.

The tobit model can (as you say) be used in the case of censored or truncated models. But it also applies directly to your case; in fact, the tobit model was invented for your case: Y "piles up" at 0, and is otherwise roughly continuous. The thing to remember is that the tobit model is difficult to interpret; you would need to rely on APE and PEA. See the comments below.

You could also apply the Poisson regression model, which has an almost OLS-like interpretation -- but it's normally used with count data.

Wooldridge 2012, chap. 17, contains a very neat discussion of the subject.
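The "fitted values below 0" issue is easy to reproduce with made-up data; a tiny sketch in Python with plain least squares (the data are invented purely for illustration):

```python
import numpy as np

# non-negative outcome that piles up at zero
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.0, 0.0, 3.0])

# ordinary least squares fit of y = a + b*x
b, a = np.polyfit(x, y, 1)  # polyfit returns [slope, intercept]
fitted = a + b * x

# the fitted value at x = 0 is negative even though y >= 0 everywhere
print(fitted)  # [-0.6, 0.3, 1.2, 2.1]
```

As the answer says, this is only a practical problem if many fitted values (or the ones you care about) end up below zero.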
25,657
Expectation, Variance and Correlation of a bivariate Lognormal distribution
$\text{Cov}(X_1,X_2)=E(X_1X_2)-E(X_1)E(X_2)$, and $E(X_1X_2)=E(e^{Y_1+Y_2})$.

Now the distribution of $Y_1+Y_2$ is normal (and straightforward), so $E(e^{Y_1+Y_2})$ is just the expectation of a univariate lognormal. The $E(X_1)E(X_2)$ term you can already do. As a result, it's straightforward to write $\text{Cov}(X_1,X_2)$ in terms of $\mu,\sigma$ and $\rho$ and thereby to solve for $\rho$.

$Y_1+Y_2\sim N(\mu_1+\mu_2,\ \sigma_1^2+\sigma_2^2+2\rho\sigma_1\sigma_2)$, so $e^{Y_1+Y_2}\sim \log N(\mu_1+\mu_2,\ \sigma_1^2+\sigma_2^2+2\rho\sigma_1\sigma_2)$, which has expectation $\exp[\mu_1+\mu_2+\frac{1}{2}(\sigma_1^2+\sigma_2^2+2\rho\sigma_1\sigma_2)]$. Also $E(X_i)=\exp(\mu_i+\frac{1}{2}\sigma_i^2)$.

So $\text{Cov}(X_1,X_2)=E(X_1)E(X_2)[\exp(\rho\sigma_1\sigma_2)-1]$, and hence

$$\exp(\rho\sigma_1\sigma_2)-1=\frac{\text{Cov}(X_1,X_2)}{E(X_1)E(X_2)}$$

$$\rho=\log\!\left(\frac{\text{Cov}(X_1,X_2)}{E(X_1)E(X_2)}+1\right)\cdot\frac{1}{\sigma_1\sigma_2}$$

You can extend this approach to calculating $\rho_{ij}$ from $\text{Cov}(X_i,X_j)$ and the other quantities. However, if you're trying to do this to estimate parameters from a sample, using sample moments of a lognormal to do parameter estimation (i.e. method-of-moments) doesn't always perform all that well. (You might consider MLE if you can.)
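The covariance formula and the inversion for $\rho$ are an exact algebraic round trip, which is easy to sanity-check numerically. A sketch in Python (the parameter values are arbitrary illustration values):

```python
import math

# arbitrary bivariate-lognormal parameters (on the log scale)
mu1, mu2 = 0.3, -0.2
s1, s2 = 0.7, 1.1
rho = 0.4

# lognormal marginal means: E(X_i) = exp(mu_i + s_i^2 / 2)
e1 = math.exp(mu1 + 0.5 * s1 ** 2)
e2 = math.exp(mu2 + 0.5 * s2 ** 2)

# covariance formula from the answer
cov = e1 * e2 * (math.exp(rho * s1 * s2) - 1.0)

# invert it: recover rho from Cov(X_1, X_2), E(X_1), E(X_2)
rho_back = math.log(cov / (e1 * e2) + 1.0) / (s1 * s2)
print(rho_back)  # recovers 0.4 up to floating point
```

The caveat in the answer still applies: with sample moments in place of the exact ones, this method-of-moments inversion can behave poorly.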
25,658
Binomial glmm with a categorical variable with full successes
Your intuition is exactly right. This phenomenon is called complete separation. You can find quite a lot (now that you know its name) Googling around ... It is fairly thoroughly discussed here in a general context, and here in the context of GLMMs.

The standard solution to this problem is to add a small term that pushes the parameters back toward zero -- in frequentist contexts this is called a penalized or bias-corrected method. The standard algorithm is due to Firth (1993, "Bias reduction of maximum likelihood estimates", Biometrika 80, 27-38), and is implemented in the logistf package on CRAN. In Bayesian contexts this is framed as adding a weak prior to the fixed-effect parameters.

To my knowledge Firth's algorithm hasn't been extended to GLMMs, but you can use the Bayesian trick by using the blme package, which puts a thin Bayesian layer over the top of the lme4 package. Here's an example from the above-linked GLMM discussion:

    cmod_blme_L2 <- bglmer(predation ~ ttt + (1 | block), data = newdat,
                           family = binomial,
                           fixef.prior = normal(cov = diag(9, 4)))

The first two lines in this example are exactly the same as we would use in the standard glmer model; the last specifies that the prior for the fixed effects is a multivariate normal distribution with a diagonal variance-covariance matrix. The matrix is 4x4 (because we have 4 fixed-effect parameters in this example), and the prior variance of each parameter is 9 (corresponding to a standard deviation of 3, which is pretty weak -- that means +/- 2 SD is (-6, 6), which is a very large range on the logit scale).

The very large standard errors of the parameters in your example are an instance of a phenomenon closely related to complete separation (it occurs whenever we get extreme parameter values in a logistic model) called the Hauck-Donner effect.

Two more potentially useful references (I haven't dug into them yet myself):

- Gelman A, Jakulin A, Pittau MG and Su YS (2008) "A weakly informative default prior distribution for logistic and other regression models." Annals of Applied Statistics, 2, 1360-1383.
- José Cortiñas Abrahantes and Marc Aerts (2012) "A solution to separation for clustered binary data." Statistical Modelling 12(1):3-27. doi: 10.1177/1471082X1001200102

A more recent Google Scholar search for "bglmer 'complete separation'" finds:

- Quiñones, A. E., and W. T. Wcislo. "Cryptic Extended Brood Care in the Facultatively Eusocial Sweat Bee Megalopta genalis." Insectes Sociaux 62.3 (2015): 307-313.
25,659
glm in R - which pvalue represents the goodness of fit of entire model?
You can either do an asymptotic chi-square test of (59.598 - 50.611) vs a chi-square with (58 - 56) df, or use anova() on your glm object (that doesn't do the test directly, but at least calculates (59.598 - 50.611) and (58 - 56) for you). This is effectively analysis of deviance. Here's the sort of calculation you could do (on a different data set, which comes with R):

    spray1 = glm(count ~ spray, family = poisson, data = InsectSprays)  # full model
    spray0 = glm(count ~ 1, family = poisson, data = InsectSprays)      # null model
    with(anova(spray0, spray1), pchisq(Deviance, Df, lower.tail = FALSE)[2])

which gives the p-value for an asymptotic chi-square statistic based on the deviance. Or you can use the deviance and df.residual functions to do this:

    pchisq(deviance(spray0) - deviance(spray1),
           df.residual(spray0) - df.residual(spray1),
           lower.tail = FALSE)

Many people would use the comparison between full and null-model AIC (or in some cases, perhaps a comparison between a model of interest and the saturated model) to work out whether the model was better than the null in that sense.

"Am I right to wonder that the Pr(>|z|) for (Intercept) represents the significance of the model?"

It doesn't. Indeed, the intercept p-value is usually not of direct interest.

If you're considering a model with a dispersion parameter, I have seen some people argue for doing an F-test instead of an asymptotic chi-square; it corresponds to people using a t-test instead of a z on the individual coefficients. It's not likely to be a reasonable approximation in small samples. I haven't seen a derivation or simulation that would suggest the F is necessarily a suitable approximation (i.e. better than the asymptotic result) in the case of GLMs in general. One might well exist, but I haven't seen it.
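For the numbers in the question (a deviance drop of 59.598 - 50.611 on 58 - 56 df), the tail probability that pchisq computes can be reproduced outside R; a sketch in Python using scipy for illustration:

```python
from scipy import stats

# analysis of deviance: null vs fitted model, numbers from the question
dev_null, dev_fit = 59.598, 50.611
df_null, df_fit = 58, 56

lr_stat = dev_null - dev_fit   # deviances are already on the -2*logLik scale
df = df_null - df_fit          # 2

# equivalent of R's pchisq(lr_stat, df, lower.tail = FALSE)
p = stats.chi2.sf(lr_stat, df)
print(p)  # about 0.011, so the model beats the null at the 5% level
```

(For 2 df the survival function is just $e^{-x/2}$, which makes this particular case easy to check by hand.)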
25,660
glm in R - which pvalue represents the goodness of fit of entire model?
Assuming that your model is in the object 'fit', you could use this code to perform a likelihood-ratio test on your binomial model. As you have noted, an F-test is not appropriate, but this test will tell you whether your model predicts better than the null model.
LLR = fit$null.deviance - fit$deviance
This is the log-likelihood-ratio statistic; note that the deviances already carry the factor of -2, so it should not be applied again.
pchisq(LLR, 2, lower.tail = FALSE)
And this will give you the p-value. Although I'm not 100% confident that 2 is the correct df, it should be the difference in the number of fitted parameters: you have 3 in your full model (an intercept plus two predictors) and 1 in the null model, ergo df = 3 - 1 = 2. But that might be something to follow up on.
25,661
glm in R - which pvalue represents the goodness of fit of entire model?
As @SamPassmore mentioned, you can use Analysis of Deviance (see for example car::Anova() for something similar-ish) to get something roughly equivalent to the $F$-test, but with a $\chi^2$ distribution. Related to this is the likelihood ratio test (comparison of your model to the null model), but these tests only perform well asymptotically. Alternatively, you can look at AIC, or related measures like BIC. Please note though: For this type of model, it's hard to get something like p-value for the same reasons it's hard to define a meaningful $R^2$-value, see for example this "sermon" by Doug Bates.
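The AIC comparison can be sketched without refitting anything: AIC differences between models fitted to the same data depend only on deviances and parameter counts, because the saturated-model log-likelihood term cancels. The deviances below are the ones quoted in this thread; the parameter counts are assumptions about the model in the question:

```python
# AIC differences between two GLMs fitted to the same data depend only on
# deviances and parameter counts, because the saturated-model term cancels:
#   AIC_full - AIC_null = (D_full - D_null) + 2 * (k_full - k_null)
null_deviance, full_deviance = 59.598, 50.611  # figures from the thread
k_null, k_full = 1, 3  # assumed counts: intercept only vs intercept + 2 slopes

delta_aic = (full_deviance - null_deviance) + 2 * (k_full - k_null)
better = "full" if delta_aic < 0 else "null"
```

Here the full model wins by about 5 AIC units, consistent with the analysis-of-deviance result.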
25,662
How to extract information from a scatterplot matrix when you have large N, discrete data, & many variables?
I'm not sure if this is of any help for you, but for preliminary EDA I really like the tabplot package. It gives you a good sense of what possible correlations there may be within your data.
install.packages("tabplot")
library(tabplot)
tableplot(breast)                   # gives you the unsorted image below
tableplot(breast, sortCol="class")  # gives you a sorted image according to class
25,663
How to extract information from a scatterplot matrix when you have large N, discrete data, & many variables?
There are a number of issues that make it difficult or impossible to extract any usable information from your scatterplot matrix.
You have too many variables displayed together. When you have lots of variables in a scatterplot matrix, each plot becomes too small to be useful. The thing to notice is that many plots are duplicated, which wastes space. Also, although you do want to see every combination, you don't have to plot them all together. Notice that you can break a scatterplot matrix into smaller blocks of four or five (a number that is usefully visualizable). You just need to make multiple plots, one for each block.
Since you have a lot of data at discrete points in the space, they end up stacking on top of each other. Thus, you cannot see how many points are at each location. There are several tricks to help you deal with this. The first is to jitter. Jittering means adding a small amount of noise to the values in your dataset. The noise is taken from a uniform distribution centered on your value, plus or minus some small amount. There are algorithms for determining an optimal amount, but since your data come in whole units from one to ten, $.5$ seems like a good choice.
With so much data, the patterns may still be hard to discern even after jittering. You can use colors that are highly saturated, but largely transparent, to account for this. Where there is a lot of data stacked on top of each other, the color will become darker, and where there is little density, the color will be lighter. For the transparency to work, you will need solid symbols to display your data, whereas R uses hollow circles by default.
Using these strategies, here is some example R code and the plots made:
# the alpha argument in rgb() lets you set the transparency
cols2 = c(rgb(red=255, green=0, blue=0, alpha=50, maxColorValue=255),
          rgb(red=0, green=0, blue=255, alpha=50, maxColorValue=255))
cols2 = ifelse(breast$class==2, cols2[1], cols2[2])

# here we jitter the data
set.seed(6141)  # this makes the example exactly reproducible
jbreast = apply(breast[,1:9], 2, FUN=function(x){ jitter(x, amount=.5) })
jbreast = cbind(jbreast, class=breast[,10])  # the class variable is not jittered

windows()  # the 1st 5 variables, using pch=16
pairs(jbreast[,1:5], col=cols2, pch=16)

windows()  # the 2nd 5 variables
pairs(jbreast[,6:10], col=cols2, pch=16)

windows()  # to match up the 1st & 2nd sets requires more coding
layout(matrix(1:25, nrow=5, byrow=T))
par(mar=c(.5,.5,.5,.5), oma=c(2,2,2,2))
for(i in 1:5){
  for(j in 6:10){
    plot(jbreast[,j], jbreast[,i], col=cols2, pch=16,
         axes=F, main="", xlab="", ylab="")
    box()
    if(j==6 ){ mtext(colnames(jbreast)[i], side=2, cex=.7, line=1) }
    if(i==5 ){ mtext(colnames(jbreast)[j], side=1, cex=.7, line=1) }
    if(j==10){ axis(side=4, seq(2,10,2), cex.axis=.8) }
    if(i==1 ){ axis(side=3, seq(2,10,2), cex.axis=.8) }
  }
}
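The jittering idea is language-independent; a minimal Python sketch of what R's jitter(x, amount=.5) does (the scores vector below is made up for illustration):

```python
import random

random.seed(6141)  # reproducible, like set.seed() in the R code above

def jitter(values, amount=0.5):
    """Add uniform noise in [-amount, +amount] to each value."""
    return [v + random.uniform(-amount, amount) for v in values]

scores = [1, 1, 2, 5, 10, 10, 10]  # hypothetical 1-10 rating data
jittered = jitter(scores)

# every jittered point stays within half a unit of its original value,
# so stacked points separate without changing which cell they belong to
assert all(abs(j - s) <= 0.5 for j, s in zip(jittered, scores))
```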
25,664
How to extract information from a scatterplot matrix when you have large N, discrete data, & many variables?
It is difficult to visualize more than 3-4 dimensions in a single plot. One option would be to use principal components analysis (PCA) to compress the data and then visualize it in the main dimensions. There are several different packages in R (as well as the base prcomp function) that make this syntactically easy (see CRAN); interpreting the plots and loadings is another story, but I think easier than a 10-variable ordinal scatterplot matrix.
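For just two variables the PCA computation is small enough to do by hand, which makes the idea concrete; a toy Python sketch (made-up data; real work would use prcomp or a proper linear-algebra library):

```python
import math

# Hand-rolled PCA for two variables -- a toy sketch only.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]  # made-up data, positively correlated with x

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

sxx, syy, sxy = cov(x, x), cov(y, y), cov(x, y)

# eigenvalues of the 2x2 covariance matrix via the quadratic formula;
# they are the variances along the two principal components
tr, det = sxx + syy, sxx * syy - sxy ** 2
disc = math.sqrt(tr ** 2 - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

explained = lam1 / (lam1 + lam2)  # share of total variance kept by PC1
```

If the first component explains most of the variance (here close to 90%), a plot of the data in that dimension loses little information.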
25,665
p-value subtlety: greater-equal vs. greater
"As or more extreme" is correct. Formally, then, if the distribution is such that the probability of getting the test statistic itself is positive, that probability (and anything equally extreme, such as the corresponding value in the other tail) should be included in the p-value. Of course, with a continuous statistic, that probability of exact equality is 0. It makes no difference if we say $>$ or $\geq$.
25,666
p-value subtlety: greater-equal vs. greater
The first point in favour of $\geq$ is that the hypothesis space is then topologically closed within the whole parameter space. Without considering randomness, this can be a useful convention if you have some assertion about a converging sequence of parameters belonging to the hypothesis, because then you know that the limit does not suddenly belong to the alternative. Now considering the probability distributions, they are (usually) right-continuous. That means that the mapping of the closed hypothesis space to the $[0,1]$ interval is closed again. That's why confidence intervals are also closed by convention. This makes the mathematics cleaner. Imagine you want to construct a confidence interval for the location parameter of an asymmetric probability distribution. There, you have to trade the length to the upper tail for the length to the lower tail. The probability in both tails should sum up to $\alpha$. To have the CI as informative as possible, you have to shorten the CI's length such that its coverage probability is still $\geq 1-\alpha$. This is a closed set. You can find an optimal solution there by some iterative algorithm, e.g. one based on Banach's fixed point theorem. If it were an open set, you could not do this.
25,667
Calculate P value for the correlation coefficient
The following is an excerpt from Miles and Banyard's (2007) "Understanding and Using Statistics in Psychology --- A Practical Introduction" on "Calculating the exact significance of a Pearson correlation in MS Excel":
Inconveniently, this is not completely straightforward - Excel will not give us the exact p-value for any value of r. However, it will give the exact $p$-value for any value of $t$, and it's not too hard to convert $r$ to $t$. The formula you need is this one:
$$t = \frac{r\sqrt{N-2}}{\sqrt{1-r^2}}$$
And then you use the tdist() function in Excel. So, we have a value of $r = 0.44$, and $N = 19$. We can use Excel to turn the $r$ into $t$, so in the Excel sheet (at Cell A1, let's say) we type:
=(0.44 * sqrt(19 - 2))/(sqrt(1 - 0.44^2))
This gives a value of $t = 2.02$. We then use the tdist() function to find the associated $p$. We need to tell Excel 3 things. First, the value of $t$; second, the degrees of freedom, which are equal to $N - 2 = 17$; and third, the number of tails - either 1 or 2, and we always use 2 tails. If the value from the first calculation is stored in cell A1, we can write:
=tdist(A1, 17, 2)
Which gives a result of $p = 0.059$. Should you ever want to calculate a critical value for a Pearson correlation, the process is reversed. You first calculate the critical value for $t$, and then you convert this into $r$. Let's say we wanted to know the critical value for a correlation for $p = 0.05$. We first find the value of $t$ that gives a $p$ of $0.05$. We use the Excel function tinv(). We need to tell Excel two things: the probability that we are interested in, and the degrees of freedom. Into cell A1 we type:
=tinv(0.05, 17)
Excel tells us that the answer is $2.11$. We then need to turn that into a value of $r$. The formula is the reverse of the one above, which takes a bit of algebra, so we'll tell you what it is:
$$r = \frac{t}{\sqrt{t^2 + N - 2}}$$
We type the formula into Excel
=A1/(SQRT(A1 * A1 + 19 - 2))
And we get the answer that the critical value is 0.456.
References:
- Miles, J. & Banyard, P. (2007). "Understanding and Using Statistics in Psychology: A Practical Introduction" (Google Books)
- "How to Calculate the P-Value & Its Correlation in Excel" (eHow)
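The two conversions in the excerpt are easy to check outside Excel; a Python sketch of the same arithmetic (the p-value itself would additionally need a t-distribution CDF, e.g. from scipy, which is deliberately omitted here):

```python
import math

# r -> t and t -> r conversions, as in the Excel formulas above.
def r_to_t(r, n):
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

def t_to_r(t, n):
    return t / math.sqrt(t ** 2 + n - 2)

t = r_to_t(0.44, 19)       # roughly 2.02, as in the excerpt
r_crit = t_to_r(2.11, 19)  # roughly 0.456
```

The two functions are exact inverses of each other for fixed n, which is a handy check on the algebra.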
25,668
Assessing the Contribution of each Predictor in Linear Regression
Some quick answers...
It means basically nothing to compare the values of the regression coefficients unless the predictors are standardized and the model is specified correctly, especially when the predictors are inter-correlated (which is definitely the case - look at the warning at the bottom of the output). Just see what happens to the coefficients if you drop one of the predictors from the model. Chances are, one or more of them changes radically, possibly even changing sign. Generally speaking, the coefficients tell you about the additional contribution of each variable, given the others in the model.
You can assess the strength of each variable's contribution by the absolute value of each $t$ statistic. The one with the greatest $\lvert t\rvert$ makes the greatest contribution. It can be shown that the $t$ statistic, squared, is equal to the $F$ statistic based on the model-reduction principle, whereby you remove one predictor from the model and measure how much the $SSE$ increases. If it increases a lot, then that predictor must be pretty important because including it accounts for a lot of unexplained variation. The $F$ statistics are all proportional to those $SSE$ changes, so the one with the biggest $|t|=\sqrt{t^2} = \sqrt{F}$ is the one that makes the most difference.
You haven't dropped anything; you have just chosen a parameterization. You will obtain exactly the same predictions regardless of which indicator is dropped. The interpretation of each regression coefficient is that it is the amount by which the prediction changes from the prediction obtained for the category whose indicator was dropped. To get a better idea of relative weights, I suggest using, instead of $k-1$ indicators, the variables $x_1=I_1-I_k, x_2=I_2-I_k,\ldots,x_{k-1}=I_{k-1}-I_k$ where the $I_i$ are the indicators.
The coefficient $b_i$ of $x_i$ is then an estimate of the effect of the $i$th category minus the average of all $k$ of them; and you can obtain the analogous effect for the $k$th level by the fact that $b_1+b_2+\cdots+b_k=0$, thus $b_k=-(b_1+b_2+\cdots+b_{k-1})$. The variables $x_i$ are called sum-to-zero contrasts (in R, you get them using "contr.sum", but it doesn't look like that's what you're using).
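A tiny sketch of the sum-to-zero coding with $k=3$ levels and made-up coefficients, showing how the effect of the dropped level is recovered from the others:

```python
# Sum-to-zero ("effects") coding for a 3-level factor: row i holds the
# values of x_1 = I_1 - I_3 and x_2 = I_2 - I_3 for category i.
contrasts = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}

b1, b2 = 0.8, -0.3  # hypothetical fitted coefficients
b3 = -(b1 + b2)     # effect of the dropped level, recovered from the others

# each category's predicted deviation from the overall mean
effects = {lvl: b1 * x1 + b2 * x2 for lvl, (x1, x2) in contrasts.items()}
```

By construction the three effects sum to zero, and the third category's effect computed from its contrast row agrees with $b_3$.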
25,669
Bias of the maximum likelihood estimator of an exponential distribution
I cannot speak as to the use of these symbols, but let me instead show you the traditional way of seeing why the mle is biased. Recall that the exponential distribution is a special case of the general Gamma distribution with two parameters, shape $a$ and scale $b$. The pdf of a Gamma random variable is: $$f_Y (y)= \frac{1}{\Gamma(a) b^a} y^{a-1} e^{-y/b}, \ 0<y<\infty$$ where $\Gamma (\cdot)$ is the gamma function. Alternative parameterisations exist, see for example the wikipedia page. If you put $a=1$ and $b=1/\lambda$ you arrive at the pdf of the exponential distribution: $$f_Y(y)=\lambda e^{-\lambda y},\quad 0<y<\infty$$ One of the most important properties of a gamma RV is the additivity property: simply put, if the $X_i$ are independent $\Gamma(a_i,b)$ RVs, then $\sum_{i=1}^n X_i$ is also a Gamma RV, with $a^{*}=\sum a_i$ and $b^{*}=b$ as before. Define $Y=\sum X_i$; as noted above, $Y$ is also a Gamma RV, with shape parameter equal to $\sum_{i=1}^n 1 = n$ and scale parameter $1/\lambda$, as for $X$ above. Now take the expectation $E[Y^{-1}]$: $$ E\left [ Y^{-1} \right]=\int_0^{\infty}\frac{y^{-1}y^{n-1}\lambda^n}{\Gamma(n)}\, e^{-\lambda y}\,dy=\int_0^{\infty}\frac{y^{n-2}\lambda^n}{\Gamma(n)}\, e^{-\lambda y}\,dy$$ Comparing the latter integral with the integral of a Gamma density with shape parameter $n-1$ and the same scale $1/\lambda$, and using the fact that $\Gamma(n)=(n-1)\,\Gamma(n-1)$, we see that it equals $\frac{\lambda}{n-1}$. Thus $$E\left[ \hat{\theta} \right]=E\left[ \frac{n}{Y} \right]=n \, E\left[Y^{-1}\right]=\frac{n}{n-1} \lambda$$ which clearly shows that the mle is biased. Note, however, that the mle is consistent. We also know that under some regularity conditions, the mle is asymptotically efficient and normally distributed, with mean the true parameter $\theta$ and variance $\{nI(\theta) \}^{-1}$. It is therefore an optimal estimator. Does that help?
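The algebra is easy to confirm by simulation; a Python sketch using only the standard library (the rate, sample size, and seed below are arbitrary choices for illustration):

```python
import random

# Monte Carlo check of E[theta_hat] = n * lambda / (n - 1).
random.seed(1)
lam, n, reps = 2.0, 5, 20000

estimates = []
for _ in range(reps):
    sample = [random.expovariate(lam) for _ in range(n)]
    estimates.append(n / sum(sample))  # the MLE n / sum(x_i)

mean_est = sum(estimates) / reps  # should sit near n/(n-1)*lam = 2.5, not 2.0
```

With $n=5$ and $\lambda=2$ the average estimate lands near $2.5$ rather than $2$, matching the bias factor $n/(n-1)=1.25$ derived above.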
Bias of the maximum likelihood estimator of an exponential distribution
I cannot speak as to the use of these symbols but let me show you instead the traditional way, why the mle is biased. Recall that the exponential distribution is a special case of the General Gamma di
Bias of the maximum likelihood estimator of an exponential distribution I cannot speak as to the use of these symbols but let me show you instead the traditional way, why the mle is biased. Recall that the exponential distribution is a special case of the General Gamma distribution with two parameters, shape $a$ and rate $b$. The pdf of a Gamma Random Variable is: $$f_Y (y)= \frac{1}{\Gamma(a) b^a} y^{a-1} e^{-y/b}, \ 0<y<\infty$$ where $\Gamma (.)$ is the gamma function. Alternative parameterisations exist, see for example the wikipedia page. If you put $a=1$ and $b=1/\lambda$ you arrive at the pdf of the exponential distribution: $$f_Y(y)=\lambda e^{-\lambda y},0<y<\infty$$ One of the most important properties of a gamma RV is the additivity property, simply put that means that if $X$ is a $\Gamma(a,b)$ RV, $\sum_{i=1}^n X_i$ is also a Gamma RV with $a^{*}=\sum a_i$ and $b^{*}=b$ as before. Define $Y=\sum X_i$ and as noted above $Y$ is also a Gamma RV with shape parameter equal to $n$, $\sum_{i=1}^n 1 $, that is and rate parameter $1/\lambda$ as $X$ above. Now take the expectation $E[Y^{-1}]$ $$ E\left [ Y^{-1} \right]=\int_0^{\infty}\frac{y^{-1}y^{n-1}\lambda^n}{\Gamma(n)}\times e^{-\lambda y}dy=\int_0^{\infty}\frac{y^{n-2}\lambda^n}{\Gamma(n)}\times e^{-\lambda y}dy$$ Comparing the latter integral with an integral of a Gamma distribution with shape parameter $n-1$ and rate one $1/\lambda$ and using the fact that $\Gamma(n)=(n-1) \times \Gamma(n-1)$ we see that it equals $\frac{\lambda}{n-1}$. Thus $$E\left[ \hat{\theta} \right]=E\left[ \frac{n}{Y} \right]=n \times E\left[Y^{-1}\right]=\frac{n}{n-1} \lambda$$ which clearly shows that the mle is biased. Note, however, that the mle is consistent. We also know that under some regularity conditions, the mle is asymptotically efficient and normally distributed, with mean the true parameter $\theta$ and variance $\{nI(\theta) \}^{-1} $. It is therefore an optimal estimator. Does that help?
25,670
How to calculate 95% confidence interval for non-linear equation?
This QA on this site explains the math to create confidence bands around curves generated by nonlinear regression: Shape of confidence and prediction intervals for nonlinear regression If you read further, it will help to distinguish confidence intervals for the parameters from confidence bands for the curve. Looking at your graph, it sure looks like you have data from four animals, measuring each on many days. If so, fitting all the data at once violates one of the assumptions of regression -- that each data point be independent (or that each residual has independent "error"). You might consider fitting each animal's tracing individually, or use a mixed model to fit them all at once.
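The flaring shape of those bands is easiest to see in the linear special case, where the half-width of the confidence band at $x_0$ has the closed form $t \cdot s\sqrt{1/n + (x_0-\bar x)^2/S_{xx}}$. Below is a plain-Python sketch with made-up data (delta-method bands around a nonlinear curve behave analogously, widening away from where the data are concentrated):

```python
import math, random

random.seed(2)
n = 40
xs = [i * 0.25 for i in range(n)]                    # made-up design points
ys = [2 + 0.5 * x + random.gauss(0, 1) for x in xs]  # made-up responses

# Ordinary least squares by the textbook formulas
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
b0 = ybar - b1 * xbar
s2 = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)

def band_halfwidth(x0, t=2.02):  # ~97.5% t quantile for 38 df
    return t * math.sqrt(s2 * (1 / n + (x0 - xbar) ** 2 / sxx))

w_center = band_halfwidth(xbar)   # narrowest at the mean of x
w_edge = band_halfwidth(xs[0])    # widens toward the ends
print(w_center, w_edge)
```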
25,671
What's the best way to visualize the effects of categories & their prevalence in logistic regression?
I agree with @PeterFlom that the example is odd, but setting that aside, I notice that the explanatory variable is categorical. If that is consistently true, it simplifies this greatly. I would use mosaic plots to present these effects. A mosaic plot displays conditional proportions vertically, but the width of each category is scaled relative to its marginal (i.e., unconditional) proportion in the sample. Here is an example with the data from the Titanic disaster, created using R:

data(Titanic)
sex.table   = margin.table(Titanic, margin=c(2,4))
class.table = margin.table(Titanic, margin=c(1,4))

round(prop.table(t(sex.table), margin=2), digits=3)
#         Sex
# Survived  Male Female
#      No  0.788  0.268
#      Yes 0.212  0.732

round(prop.table(t(class.table), margin=2), digits=3)
#         Class
# Survived   1st   2nd   3rd  Crew
#      No  0.375 0.586 0.748 0.760
#      Yes 0.625 0.414 0.252 0.240

windows(height=3, width=6)
par(mai=c(.5,.4,.1,0), mfrow=c(1,2))
mosaicplot(sex.table,   main="")
mosaicplot(class.table, main="")

On the left, we see that women were much more likely to survive, but men accounted for perhaps about 80% of the people on board. So increasing the percentage of male survivors would have meant many more lives saved than even a larger increase in the percentage of female survivors. This is somewhat analogous to your example. There is another example on the right where the crew and steerage made up the largest proportion of people, but had the lowest probability of surviving. (For what it's worth, this isn't a full analysis of these data, because class and sex were also non-independent on the Titanic, but it is enough to illustrate the ideas for this question.)
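For reference, the two ingredients of each mosaic column (marginal share sets the column width, conditional proportion sets the vertical split) can be computed directly from the survivor counts. A quick Python sketch, with the counts hand-copied from R's Titanic table collapsed over class and age:

```python
# Survivor counts copied from R's Titanic data, collapsed over class and age
counts = {"Male":   {"No": 1364, "Yes": 367},
          "Female": {"No": 126,  "Yes": 344}}

total = sum(sum(d.values()) for d in counts.values())
mosaic = {}
for sex, d in counts.items():
    n = sum(d.values())
    mosaic[sex] = {"width": n / total,       # marginal share -> column width
                   "p_surv": d["Yes"] / n}   # conditional proportion -> split
print(mosaic)
```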
25,672
What's the best way to visualize the effects of categories & their prevalence in logistic regression?
I'm a little curious as to what society had only 10% men... but... One thing you could do is plot the odds ratios and label each with the size of the sample. If you want both variables to be represented graphically, you could make a bubble chart, with the position of each bubble on the y axis matching the size of the odds ratio and the area of the bubble proportional to the sample size.
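To sketch the quantities involved (Python, with made-up group sizes and proportions): the odds go on the y axis, and it is the bubble area, not the radius, that should be proportional to the sample size, so the radius scales with the square root of n:

```python
import math

# Hypothetical groups: proportion with the outcome, and group size
groups = {"men":   {"n": 100, "p": 0.6},
          "women": {"n": 900, "p": 0.4}}

bubbles = {}
for name, g in groups.items():
    odds = g["p"] / (1 - g["p"])
    # area proportional to n  =>  radius proportional to sqrt(n)
    bubbles[name] = {"odds": odds, "radius": math.sqrt(g["n"] / math.pi)}
print(bubbles)
```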
25,673
Normality assumption in linear regression
Expanding on Hong Ooi's comment with an image. Here is an image of a dataset where none of the marginals are normally distributed but the residuals still are; thus the assumptions of linear regression are still valid:

The image was generated by the following R code:

library(psych)
x <- rbinom(100, 1, 0.3)
y <- rnorm(length(x), 5 + x * 5, 1)
scatter.hist(x, y, correl=F, density=F, ellipse=F, xlab="x", ylab="y")
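The same point can be made numerically. A Python sketch mirroring the R snippet (binary x, normal errors): the marginal distribution of y is a two-bump mixture with a much larger spread, while the residuals around the fitted line are approximately N(0, 1):

```python
import random, statistics as st

random.seed(3)
# Binary predictor, normal errors: y = 5 + 5x + N(0,1), as in the R snippet
x = [1 if random.random() < 0.3 else 0 for _ in range(2000)]
y = [5 + 5 * xi + random.gauss(0, 1) for xi in x]

# With a single binary predictor, the least-squares fit is just the group means
m0 = st.mean(yi for xi, yi in zip(x, y) if xi == 0)
m1 = st.mean(yi for xi, yi in zip(x, y) if xi == 1)
resid = [yi - (m1 if xi else m0) for xi, yi in zip(x, y)]

# Marginal y is a bimodal mixture (sd ~ 2.5); residuals are ~ N(0, 1)
print(round(st.stdev(y), 2), round(st.stdev(resid), 2))
```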
25,674
Can the scaling values in a linear discriminant analysis (LDA) be used to plot explanatory variables on the linear discriminants?
Principal components analysis and linear discriminant analysis outputs; iris data.

I will not be drawing biplots because biplots can be drawn with various normalizations and therefore may look different. Since I'm not an R user I have difficulty tracking down how you produced your plots, to repeat them. Instead, I will do PCA and LDA and show the results, in a manner similar to this (you might want to read). Both analyses done in SPSS.

Principal components of iris data:

The analysis will be based on covariances (not correlations) between the 4 variables.

Eigenvalues (component variances) and the proportion of overall variance explained

PC1   4.228241706   .924618723
PC2    .242670748   .053066483
PC3    .078209500   .017102610
PC4    .023835093   .005212184

# @Etienne's comment:
# Eigenvalues are obtained in R by
# (princomp(iris[,-5])$sdev)^2 or (prcomp(iris[,-5])$sdev)^2.
# Proportion of variance explained is obtained in R by
# summary(princomp(iris[,-5])) or summary(prcomp(iris[,-5]))

Eigenvectors (cosines of rotation of variables into components)

              PC1           PC2           PC3           PC4
SLength   .3613865918   .6565887713  -.5820298513   .3154871929
SWidth   -.0845225141   .7301614348   .5979108301  -.3197231037
PLength   .8566706060  -.1733726628   .0762360758  -.4798389870
PWidth    .3582891972  -.0754810199   .5458314320   .7536574253

# @Etienne's comment:
# This is obtained in R by
# prcomp(iris[,-5])$rotation or princomp(iris[,-5])$loadings

Loadings (eigenvectors normalized to respective eigenvalues; loadings are the covariances between variables and standardized components)

              PC1           PC2           PC3           PC4
SLength    .743108002    .323446284   -.162770244    .048706863
SWidth    -.173801015    .359689372    .167211512   -.049360829
PLength   1.761545107   -.085406187    .021320152   -.074080509
PWidth     .736738926   -.037183175    .152647008    .116354292

# @Etienne's comment:
# Loadings can be obtained in R with
# t(t(princomp(iris[,-5])$loadings) * princomp(iris[,-5])$sdev) or
# t(t(prcomp(iris[,-5])$rotation) * prcomp(iris[,-5])$sdev)

Standardized (rescaled) loadings (loadings divided by st. deviations of the respective variables)

              PC1           PC2           PC3           PC4
SLength   .897401762    .390604412   -.196566721    .058820016
SWidth   -.398748472    .825228709    .383630296   -.113247642
PLength   .997873942   -.048380599    .012077365   -.041964868
PWidth    .966547516   -.048781602    .200261695    .152648309

Raw component scores (Centered 4-variable data multiplied by eigenvectors)

      PC1           PC2           PC3           PC4
-2.684125626    .319397247   -.027914828    .002262437
-2.714141687   -.177001225   -.210464272    .099026550
-2.888990569   -.144949426    .017900256    .019968390
-2.745342856   -.318298979    .031559374   -.075575817
-2.728716537    .326754513    .090079241   -.061258593
-2.280859633    .741330449    .168677658   -.024200858
-2.820537751   -.089461385    .257892158   -.048143106
-2.626144973    .163384960   -.021879318   -.045297871
-2.886382732   -.578311754    .020759570   -.026744736
-2.672755798   -.113774246   -.197632725   -.056295401
... etc.

# @Etienne's comment:
# This is obtained in R with
# prcomp(iris[,-5])$x or princomp(iris[,-5])$scores.
# Can also be eigenvector normalized for plotting

Standardized (to unit variances) component scores, when multiplied by loadings, return the original centered variables.

It is important to stress that it is loadings, not eigenvectors, by which we typically interpret principal components (or factors in factor analysis) - if we need to interpret. Loadings are the regressional coefficients of modeling variables by standardized components. At the same time, because components don't intercorrelate, they are the covariances between such components and the variables. Standardized (rescaled) loadings, like correlations, cannot exceed 1, and are more handy to interpret because the effect of unequal variances of variables is taken off. It is loadings, not eigenvectors, that are typically displayed on a biplot side-by-side with component scores; the latter are often displayed column-normalized.

Linear discriminants of iris data:

There are 3 classes and 4 variables: min(3-1, 4) = 2 discriminants can be extracted. Only the extraction (no classification of data points) will be done.

The Within scatter matrix

38.95620000  13.63000000  24.62460000   5.64500000
13.63000000  16.96200000   8.12080000   4.80840000
24.62460000   8.12080000  27.22260000   6.27180000
 5.64500000   4.80840000   6.27180000   6.15660000

The Between scatter matrix

 63.2121333  -19.9526667  165.2484000   71.2793333
-19.9526667   11.3449333  -57.2396000  -22.9326667
165.2484000  -57.2396000  437.1028000  186.7740000
 71.2793333  -22.9326667  186.7740000   80.4133333

Eigenvalues and canonical correlations (Canonical correlation squared is SSbetween/SStotal of ANOVA by that discriminant)

Dis1  32.19192920   .98482089
Dis2    .28539104   .47119702

# @Etienne's comment:
# In R eigenvalues are expected from
# lda(as.factor(Species)~.,data=iris)$svd, but this produces
#      Dis1     Dis2
# 48.642644 4.579983
# @ttnphns' comment:
# The difference might be due to different computational approach
# (e.g. me used eigendecomposition and R used svd?) and is of no importance.
# Canonical correlations though should be the same.

Eigenvectors

              Dis1          Dis2
SLength  -.0684059150   .0019879117
SWidth   -.1265612055   .1785267025
PLength   .1815528774  -.0768635659
PWidth    .2318028594   .2341722673

Eigenvectors (as before, but column-normalized to SS=1: cosines of rotation of variables into discriminants)

              Dis1          Dis2
SLength  -.2087418215   .0065319640
SWidth   -.3862036868   .5866105531
PLength   .5540117156  -.2525615400
PWidth    .7073503964   .7694530921

Unstandardized discriminant coefficients (proportionally related to eigenvectors)

              Dis1          Dis2
SLength   -.829377642    .024102149
SWidth   -1.534473068   2.164521235
PLength   2.201211656   -.931921210
PWidth    2.810460309   2.839187853

# @Etienne's comment:
# This is obtained in R with
# lda(as.factor(Species)~.,data=iris)$scaling
# which is described as being standardized discriminant coefficients in the function definition.

Standardized discriminant coefficients

              Dis1          Dis2
SLength  -.4269548486   .0124075316
SWidth   -.5212416758   .7352613085
PLength   .9472572487  -.4010378190
PWidth    .5751607719   .5810398645

Pooled within-groups correlations between variables and discriminants

              Dis1          Dis2
SLength   .2225959415   .3108117231
SWidth   -.1190115149   .8636809224
PLength   .7060653811   .1677013843
PWidth    .6331779262   .7372420588

Discriminant scores (Centered 4-variable data multiplied by unstandardized coefficients)

      Dis1          Dis2
-8.061799783    .300420621
-7.128687721   -.786660426
-7.489827971   -.265384488
-6.813200569   -.670631068
-8.132309326    .514462530
-7.701946744   1.461720967
-7.212617624    .355836209
-7.605293546   -.011633838
-6.560551593  -1.015163624
-7.343059893   -.947319209
... etc.

# @Etienne's comment:
# This is obtained in R with
# predict(lda(as.factor(Species)~.,data=iris), iris[,-5])$x

About computations at extraction of discriminants in LDA, please look here. We interpret discriminants usually by discriminant coefficients or standardized discriminant coefficients (the latter are more handy because differential variance in variables is taken off). This is like in PCA. But note: the coefficients here are the regressional coefficients of modeling discriminants by variables, not vice versa, as it was in PCA. Because the variables are not uncorrelated, the coefficients cannot be seen as covariances between variables and discriminants. Yet we have another matrix instead which may serve as an alternative source of interpretation of discriminants - pooled within-group correlations between the discriminants and the variables. Because discriminants are uncorrelated, like PCs, this matrix is in a sense analogous to the standardized loadings of PCA.

In all, while in PCA we have only one matrix - loadings - to help interpret the latents, in LDA we have two alternative matrices for that. If you need to plot (a biplot or whatever), you have to decide whether to plot coefficients or correlations.

And, of course, needless to remind that in PCA of iris data the components don't "know" that there are 3 classes; they can't be expected to discriminate classes. Discriminants do "know" there are classes, and it is their natural job to discriminate.
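One relation in the LDA output is worth making explicit: if $\lambda$ is a discriminant's eigenvalue (SSbetween/SSwithin), its canonical correlation is $\sqrt{\lambda/(1+\lambda)}$, since SStotal = SSbetween + SSwithin. A two-line check (in Python, just as a calculator) against the printed iris values:

```python
import math

# Eigenvalues printed in the output for the two iris discriminants
eigvals = {"Dis1": 32.19192920, "Dis2": 0.28539104}

# canonical correlation: rho = sqrt(lambda / (1 + lambda))
can_rho = {k: math.sqrt(lam / (1 + lam)) for k, lam in eigvals.items()}
print(can_rho)
```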
25,675
Can the scaling values in a linear discriminant analysis (LDA) be used to plot explanatory variables on the linear discriminants?
My understanding is that biplots of linear discriminant analyses can be done; in fact it is implemented in the R packages ggbiplot and ggord, and another function to do it is posted in this StackOverflow thread. Also the book "Biplots in Practice" by M. Greenacre has one chapter (chapter 11, see pdf) on it, and in Figure 11.5 it shows a biplot of a linear discriminant analysis of the iris dataset:
25,676
Can the scaling values in a linear discriminant analysis (LDA) be used to plot explanatory variables on the linear discriminants?
I know this was asked over a year ago, and ttnphns gave an excellent and in-depth answer, but I thought I'd add a couple of comments for those (like me) that are interested in PCA and LDA for their usefulness in ecological sciences, but have limited statistical background (not statisticians). PCs in PCA are linear combinations of original variables that sequentially maximally explain total variance in the multidimensional dataset. You will have as many PCs as you do original variables. The percent of the variance the PCs explain is given by the eigenvalues of the similarity matrix used, and the coefficient for each original variable on each new PC is given by the eigenvectors. PCA has no assumptions about groups. PCA is very good for seeing how multiple variables change in value across your data (in a biplot, for example). Interpreting a PCA relies heavily on the biplot. LDA is different for a very important reason - it creates new variables (LDs) by maximizing variance between groups. These are still linear combinations of original variables, but rather than explain as much variance as possible with each sequential LD, instead they are drawn to maximize the DIFFERENCE between groups along that new variable. Rather than a similarity matrix, LDA (and MANOVA) use a comparison matrix of between and within groups sum of squares and cross-products. The eigenvectors of this matrix - the coefficients that the OP was originally concerned with - describe how much the original variables contribute to the formation of the new LDs. For these reasons, the eigenvectors from the PCA will give you a better idea how a variable changes in value across your data cloud, and how important it is to total variance in your dataset, than the LDA. 
However, the LDA, particularly in combination with a MANOVA, will give you a statistical test of difference in multivariate centroids of your groups, and an estimate of error in allocation of points to their respective groups (in a sense, multivariate effect size). In an LDA, even if a variable changes linearly (and significantly) across groups, its coefficient on an LD may not indicate the "scale" of that effect, and depends entirely on the other variables included in the analysis. I hope that was clear. Thanks for your time. See a picture below...
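The contrast between the two methods can be seen numerically with a toy example (Python, made-up data): two groups separated along x but with much larger shared variance along y. The direction of maximal total variance (what PCA chases, class-blind) is the y axis, yet the discriminant direction, sketched here as $W^{-1}(\bar m_2 - \bar m_1)$ with the within-covariance simplified to its diagonal, points along x:

```python
import random, statistics as st

random.seed(4)
# Hypothetical data: two groups separated along x, big shared variance along y
g1 = [(random.gauss(0, 0.5), random.gauss(0, 3)) for _ in range(500)]
g2 = [(random.gauss(3, 0.5), random.gauss(0, 3)) for _ in range(500)]
both = g1 + g2

def col(points, i):
    return [p[i] for p in points]

# PCA is class-blind: it chases total variance, which here lives on the y axis
var_x = st.pvariance(col(both, 0))
var_y = st.pvariance(col(both, 1))

# LDA direction ~ W^{-1}(m2 - m1); within-covariance simplified to its diagonal
wx = (st.pvariance(col(g1, 0)) + st.pvariance(col(g2, 0))) / 2
wy = (st.pvariance(col(g1, 1)) + st.pvariance(col(g2, 1))) / 2
w_dir = ((st.mean(col(g2, 0)) - st.mean(col(g1, 0))) / wx,
         (st.mean(col(g2, 1)) - st.mean(col(g1, 1))) / wy)

print(var_x, var_y)   # total variance favors y
print(w_dir)          # discriminant weight favors x
```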
Can the scaling values in a linear discriminant analysis (LDA) be used to plot explanatory variables
I know this was asked over a year ago, and ttnphns gave an excellent and in-depth answer, but I thought I'd add a couple of comments for those (like me) that are interested in PCA and LDA for their us
Can the scaling values in a linear discriminant analysis (LDA) be used to plot explanatory variables on the linear discriminants?

I know this was asked over a year ago, and ttnphns gave an excellent and in-depth answer, but I thought I'd add a couple of comments for those (like me) that are interested in PCA and LDA for their usefulness in ecological sciences, but have limited statistical background (not statisticians).

PCs in PCA are linear combinations of the original variables that sequentially explain as much of the total variance in the multidimensional dataset as possible. You will have as many PCs as you have original variables. The percent of the variance the PCs explain is given by the eigenvalues of the similarity matrix used, and the coefficient for each original variable on each new PC is given by the eigenvectors. PCA has no assumptions about groups. PCA is very good for seeing how multiple variables change in value across your data (in a biplot, for example). Interpreting a PCA relies heavily on the biplot.

LDA is different for a very important reason - it creates new variables (LDs) by maximizing variance between groups. These are still linear combinations of the original variables, but rather than explaining as much variance as possible with each sequential LD, they are instead drawn to maximize the DIFFERENCE between groups along that new variable. Rather than a similarity matrix, LDA (and MANOVA) use a comparison matrix of between- and within-group sums of squares and cross-products. The eigenvectors of this matrix - the coefficients that the OP was originally concerned with - describe how much the original variables contribute to the formation of the new LDs.

For these reasons, the eigenvectors from the PCA will give you a better idea of how a variable changes in value across your data cloud, and how important it is to the total variance in your dataset, than the LDA. However, the LDA, particularly in combination with a MANOVA, will give you a statistical test of difference in the multivariate centroids of your groups, and an estimate of error in the allocation of points to their respective groups (in a sense, a multivariate effect size). In an LDA, even if a variable changes linearly (and significantly) across groups, its coefficient on an LD may not indicate the "scale" of that effect, and depends entirely on the other variables included in the analysis. I hope that was clear. Thanks for your time. See a picture below...
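For readers who like to see the two eigenproblems side by side, here is a minimal sketch in Python/numpy on synthetic two-group data invented purely for illustration (not the OP's data, and independent of any particular R package): PCA takes eigenvectors of the overall covariance matrix with no group information, while LDA takes the leading eigenvector of W⁻¹B built from the within- and between-group scatter matrices described above.

```python
import numpy as np

rng = np.random.default_rng(0)
# two hypothetical groups with different centroids, similar within-group spread
X1 = rng.normal(loc=[0.0, 0.0], scale=[1.0, 0.3], size=(100, 2))
X2 = rng.normal(loc=[2.0, 1.0], scale=[1.0, 0.3], size=(100, 2))
X = np.vstack([X1, X2])

# PCA: eigenvectors of the covariance matrix (groups are ignored)
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
evals, evecs = np.linalg.eigh(cov)
pc1 = evecs[:, np.argmax(evals)]        # direction of maximal total variance

# LDA: leading eigenvector of W^{-1} B (between- vs within-group scatter)
m1, m2, m = X1.mean(axis=0), X2.mean(axis=0), X.mean(axis=0)
W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
B = len(X1) * np.outer(m1 - m, m1 - m) + len(X2) * np.outer(m2 - m, m2 - m)
lvals, lvecs = np.linalg.eig(np.linalg.solve(W, B))
ld1 = np.real(lvecs[:, np.argmax(np.real(lvals))])  # maximises group separation

print("PC1 coefficients:", pc1)   # how each variable loads on PC1
print("LD1 coefficients:", ld1)   # how each variable contributes to LD1
```

The two coefficient vectors generally point in different directions: pc1 reflects total variance only, while ld1 depends on the within-group scatter W, which is exactly why an LD coefficient need not track a variable's marginal importance.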
25,677
Ridge regression results different in using lm.ridge and glmnet
glmnet standardizes the y variable and uses the mean squared error instead of the sum of squared errors, so you need to make the appropriate adjustments to match their outputs.

library(ElemStatLearn)
library(glmnet)
library(MASS)

# convert between effective degrees of freedom and the ridge penalty,
# using the singular values d of the design matrix
dof2lambda <- function(d, dof) {
    obj <- function(lam, dof) (dof - sum(d ^ 2 / (d ^ 2 + lam))) ^ 2
    sapply(dof, function(x) optimize(obj, c(0, 1e4), x)$minimum)
}

lambda2dof <- function(d, lam) {
    obj <- function(dof, lam) (dof - sum(d ^ 2 / (d ^ 2 + lam))) ^ 2
    sapply(lam, function(x) optimize(obj, c(0, length(d)), x)$minimum)
}

dat   <- prostate
train <- subset(dat, train, select = -train)
test  <- subset(dat, !train, select = -train)

train.x <- as.matrix(scale(subset(train, select = -lpsa)))
train.y <- as.matrix(scale(train$lpsa))

d   <- svd(train.x)$d
dof <- seq(1, 8, 0.1)
lam <- dof2lambda(d, dof)

ridge1 <- lm.ridge(train.y ~ train.x, lambda = lam)
# divide lambda by n to account for glmnet's mean-squared-error objective
ridge2 <- glmnet(train.x, train.y, alpha = 0, lambda = lam / nrow(train.x))

matplot(dof, t(ridge1$coef), type = 'l')
matplot(lambda2dof(d, ridge2$lambda * nrow(train.x)), t(ridge2$beta), type = 'l')
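The lambda / nrow(train.x) rescaling follows from the two objective functions: lm.ridge penalises the residual sum of squares, while glmnet (with alpha = 0) penalises the mean squared error. A quick numerical check of that equivalence, sketched in Python/numpy with made-up data rather than the prostate set:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
lam = 2.0

# lm.ridge-style objective: ||y - Xb||^2 + lam * ||b||^2
b_sse = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# glmnet-style objective: (1/(2n)) ||y - Xb||^2 + (lam2 / 2) ||b||^2,
# whose normal equations are (X'X + n * lam2 * I) b = X'y,
# so lam2 = lam / n reproduces the sum-of-squares solution
lam2 = lam / n
b_mse = np.linalg.solve(X.T @ X + n * lam2 * np.eye(p), X.T @ y)

print(np.allclose(b_sse, b_mse))  # True: the penalties differ only by the factor n
```

This is why the R snippet passes lam / nrow(train.x) to glmnet: the two parameterisations then trace out the same coefficient paths.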
25,678
Using principal components analysis vs correspondence analysis
PCA works on the raw values, whereas CA works on the relative values. Both are fine for relative abundance data of the sort you mention (with one major caveat, see later). With % data you already have a relative measure, but there will still be differences. Ask yourself: do you want to emphasise the pattern in the abundant species/taxa (i.e. the ones with large %cover), or do you want to focus on the patterns of relative composition? If the former, use PCA. If the latter, use CA.

What I mean by the two questions is: would you want A = {50, 20, 10} and B = {5, 2, 1} to be considered different or the same? A and B are two samples, and the values are the %cover of the three taxa shown. (This example turned out poorly, assume there is bare ground! ;-) PCA would consider these very different because of the Euclidean distance used, but CA would consider these two samples as being very similar because they have the same relative profile.

The big caveat here is the closed compositional nature of the data. If you have a few groups (Sand, Silt, Clay, for example) that sum to 1 (100%), then neither approach is correct and you could move to a more appropriate analysis via Aitchison's log-ratio PCA, which was designed for closed compositional data. (IIRC to do this you need to centre by rows and columns, and log-transform the data.) There are other approaches too. If you use R, then one book that would be useful is Analyzing Compositional Data with R.
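The A/B example can be made concrete in a couple of lines of Python/numpy. This is a simplified sketch: CA proper uses chi-squared distances with column-mass weighting, but identical row profiles have zero distance under any such weighting, so plain profile distance makes the point.

```python
import numpy as np

A = np.array([50.0, 20.0, 10.0])
B = np.array([5.0, 2.0, 1.0])

# Euclidean distance on the raw values -- what PCA responds to
d_euclid = np.linalg.norm(A - B)

# row profiles (relative composition) -- the raw material of CA
pA, pB = A / A.sum(), B / B.sum()
d_profile = np.linalg.norm(pA - pB)

print(d_euclid)   # large: PCA sees A and B as very different
print(d_profile)  # zero: same profile, so CA sees them as alike
```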
25,679
Using principal components analysis vs correspondence analysis
Sorry to revive an oldie, but a student asked me about this. I might be looking at the wrong end of the stick, but the answer by @GavinSimpson seems to me a bit on a tangent, not really aiming at a key distinction between the two methods. A PCA can be run either on a correlation matrix or a covariance matrix. If you run it on the correlation matrix (instead of the covariance matrix), then the absolute values are not being considered: any multiplication of a variable (column) by a constant would not change the results. So the main difference between PCA and CA must be something else. Under the hood, PCA considers Euclidean distances between points, while CA considers chi-squared distances between points. In practice CA seems to work better with non-continuous data (nominal, categorical, a considerable amount of 0's, etc.). If someone can explain the difference between the methods, regarding the linear algebra under the hood, in a way that is digestible to humans, then hats off to them and please post here as I'd love to read it.
25,680
If you run OLS regression on cross sectional data, should you test for autocorrelation in residuals?
The true distinction between data is whether there exists, or not, a natural ordering of them that corresponds to real-world structures and is relevant to the issue at hand. Of course, the clearest (and indisputable) "natural ordering" is that of time, and hence the usual dichotomy "cross-sectional / time series". But as pointed out in the comments, we may have non-time-series data that nevertheless possess a natural spatial ordering. In such a case all the concepts and tools developed in the context of time-series analysis apply here equally well, since you are supposed to realize that a meaningful spatial ordering exists, and not only preserve it, but also examine what it may imply for the series of the error term, among other things related to the whole model (like the existence of a trend, which would make the data non-stationary, for example).

For a (crude) example, assume that you collect data on the number of cars that have stopped at various stop-in establishments along a highway, on a particular day (that's the dependent variable). Your regressors measure the various facilities/services each stop-in offers, and perhaps other things like distance from highway exits/entrances. These establishments are naturally ordered along the highway... But does this matter? Should we maintain the ordering, and even wonder whether the error term is autocorrelated? Certainly: assume that some facilities/services at establishment No 1 are in reality non-functional during this particular day (this event would be captured by the error term). Cars intending to use these particular facilities/services will nevertheless stop in, because they do not know about the problem. But they will find out about the problem, and so, because of the problem, they will also stop at the next establishment, No 2, where, if what they want is on offer, they will receive the services and they won't stop at establishment No 3 - but there is a possibility that establishment No 2 will appear expensive, and so they will, after all, also try establishment No 3. This means that the dependent variables of the three establishments may not be independent, which is equivalent to saying that there is the possibility of correlation between the three corresponding error terms - and not "equally", but depending on their respective positions. So the spatial ordering is to be preserved, and tests for autocorrelation must be executed - and they will be meaningful.

If, on the other hand, no such "natural" and meaningful ordering appears to be present for a specific data set, then the possible correlation between observations should not be designated as "autocorrelation", because that would be misleading, and the tools specifically developed for ordered data are inapplicable. But correlation may very well exist, although in such a case it is rather more difficult to detect and estimate.
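To make the "preserve the ordering and test" advice concrete, here is a hypothetical Python/numpy sketch: the errors follow an AR(1) process along the (spatial) ordering, and the Durbin-Watson statistic on the OLS residuals, which sits near 2 under independence and well below 2 under positive autocorrelation, picks this up.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)

# errors correlated along the spatial ordering: AR(1) with rho = 0.9
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    e[t] = 0.9 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

# OLS fit and residuals, keeping the observations in their natural order
Xd = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta

# Durbin-Watson statistic: ~2 under independence, < 2 under positive autocorrelation
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print(dw)
```

Had the rows been shuffled (no meaningful ordering), the same statistic would hover around 2 and carry no information, which is exactly the point of the answer.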
25,681
If you run OLS regression on cross sectional data, should you test for autocorrelation in residuals?
Just adding another example (much more common) in which you will probably find autocorrelation in cross-sectional data, and that is when you have groups of observations. For example, if you have the math scores from a standardized exam for a thousand kids, but these kids came from 100 different schools, it would be appropriate to think that the observations are not independent, since a school's overall math performance could be related to its students' individual performance. In this case, if you omit the school ID term in your model you will be omitting a relevant variable, which could bias your estimates. Also, if a relevant difference in the distribution of math scores is observed apart from the mean (variance, skewness, and kurtosis), you should probably consider using robust errors in your models (or clustering the errors at the school level). This won't change your coefficients, but could dramatically change your model's t-test and F-test statistics, since you are now accounting for possible violations of the 4th OLS assumption (constant variance). To sum up, if you have groups in your cross-sectional data, and it is plausible that these groups matter, then it is also plausible that the observations are not independent. Thus, you should control by group (through a fixed-effect model by group, for example) and use robust errors at the group level, to have much more confidence both in your coefficients and in their p-values.
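A sketch of the mechanics in Python/numpy, using simulated, hypothetical school data; the cluster-robust "sandwich" here is the basic Liang-Zeger form without small-sample corrections:

```python
import numpy as np

rng = np.random.default_rng(3)
n_groups, per = 50, 20
g = np.repeat(np.arange(n_groups), per)                    # school id per student
x = np.repeat(rng.normal(size=n_groups), per)              # a school-level regressor
u = np.repeat(rng.normal(scale=2.0, size=n_groups), per)   # shared school effect
y = 1.0 + 0.5 * x + u + rng.normal(size=n_groups * per)

X = np.column_stack([np.ones_like(x), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta

# naive OLS variance (assumes i.i.d. errors)
V_naive = XtX_inv * (e @ e) / (len(y) - 2)

# cluster-robust (Liang-Zeger) variance: sum score outer products by school
meat = np.zeros((2, 2))
for j in range(n_groups):
    idx = g == j
    s = X[idx].T @ e[idx]
    meat += np.outer(s, s)
V_cluster = XtX_inv @ meat @ XtX_inv

print("naive SE  :", np.sqrt(V_naive[1, 1]))
print("cluster SE:", np.sqrt(V_cluster[1, 1]))
```

With a group-level regressor and a strong shared school effect, the clustered standard error comes out markedly larger than the naive one, which is exactly the correction to the t- and F-statistics described above.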
25,682
Diagonal elements of the projection matrix
This is several years later, but I found the notation very difficult in the asker's question and self-answer, so here's a cleaner solution. We have $\mathbf{H} = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T$ where $(1,...,1)^T$ is a column of $\mathbf{X}$. We want to show that the diagonals $h_{ii}$ of $\mathbf{H}$ have $h_{ii} \geq 1/n$. Define $\mathbf{P} = \mathbf{H} - \mathbf{C}$, where $$\mathbf{C} = \frac{1}{n}\begin{pmatrix}1 & \dots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \dots & 1 \end{pmatrix}$$ the matrix consisting of only $1/n$. This is the projection matrix onto the space spanned by $(1, ..., 1)$. Then $$\mathbf{P}^2 = \mathbf{H}^2 - \mathbf{H}\mathbf{C} - \mathbf{C}\mathbf{H} + \mathbf{C}^2 = \mathbf{H} - \mathbf{H}\mathbf{C} - \mathbf{C}\mathbf{H} + \mathbf{C}$$ However, $\mathbf{H}$ orthogonally projects onto $\text{Col}(\mathbf{X})$, and $\mathbf{C}$ orthogonally projects onto $\text{span}\{(1,...,1)\} \subset \text{Col}(\mathbf{X})$, so obviously $\mathbf{H}\mathbf{C} = \mathbf{C}$. Still intuitively, but less obviously, $\mathbf{C}\mathbf{H} = \mathbf{C}$. To see this, we can compute $\mathbf{C} = \mathbf{C}\big(\mathbf{H} + (\mathbf{I} - \mathbf{H})\big)$, and note that $\mathbf{C}(\mathbf{I} - \mathbf{H}) = 0$ because $\mathbf{I} - \mathbf{H}$ projects onto $\text{Col}(\mathbf{X})^\perp$. Therefore we have $\mathbf{P}^2 = \mathbf{H} - \mathbf{C} = \mathbf{P}$. So $\mathbf{P}$ is also a projection matrix. So $h_{ii} = p_{ii} + c_{ii} = p_{ii} + 1/n$. Since projection matrices are always positive semidefinite, the diagonals of $\mathbf{P}$ satisfy $p_{ii} \geq 0$. (In fact, you can show that since $\mathbf{P}$ is symmetric and idempotent, it satisfies $0 \leq p_{ii} \leq 1$.) Then $h_{ii} \geq 1/n$ as needed.
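The identity and the resulting bound are easy to verify numerically; here is a small Python/numpy sketch on an arbitrary simulated design matrix (with the intercept column included, as the proof requires):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 20, 3
# design matrix whose first column is (1, ..., 1)'
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

H = X @ np.linalg.inv(X.T @ X) @ X.T
C = np.full((n, n), 1.0 / n)     # projection onto span{(1, ..., 1)}
P = H - C

print(np.allclose(P @ P, P))                   # P is again a projection
print(np.all(np.diag(H) >= 1.0 / n - 1e-12))   # hence h_ii >= 1/n
```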
25,683
Diagonal elements of the projection matrix
To prove that $h_{ii} \geq 1/n$, we can centre the predictors. Let $X_c$ be the centred design matrix (the intercept column dropped and each remaining column centred at its mean),
$$X_c = \begin{bmatrix} x_{11}-\bar x_1 & \dots & x_{1p}-\bar x_p \\ \vdots & \ddots & \vdots \\ x_{n1}-\bar x_1 & \dots & x_{np}-\bar x_p \end{bmatrix},$$
and let $H_c = X_c(X_c' X_c)^{-1}X_c'$ be the corresponding hat matrix. Writing the model with an explicit intercept, $y = \alpha \mathbf{1} + X_c \beta + \epsilon$, the fitted values are
$$\hat y = \hat\alpha \mathbf{1} + X_c \hat\beta = \bar y\, \mathbf{1} + X_c (X_c' X_c)^{-1} X_c' y = \left[\frac{1}{n}\mathbf{1}\mathbf{1}' + H_c\right] y = H y.$$
Therefore
$$H = \frac{1}{n}\mathbf{1}\mathbf{1}' + H_c \quad\Rightarrow\quad h_{ii} = \frac{1}{n} + (H_c)_{ii} \geq \frac{1}{n},$$
because $H_c$ is a symmetric projection matrix and hence positive semidefinite, so its diagonal entries are non-negative.
25,684
Diagonal elements of the projection matrix
Here is another answer that only uses the fact that all the eigenvalues of a symmetric idempotent matrix are at most 1; see one of the previous answers or prove it yourself, it's quite easy. Let $\mathbf{H}$ denote the hat matrix. The $i$th diagonal element of the hat matrix is given by $$h_{ii} = \mathbf{e}_i^{t} \mathbf{H} \mathbf{e}_i,$$ where $\mathbf{e}_i$ is the vector whose $i$th element is 1 and the rest are 0s. Consider the quadratic form on the unit sphere given by $$ f(\mathbf{x}) = \frac{\mathbf{x}^{t} \mathbf{H} \mathbf{x}}{\mathbf{x}^{t} \mathbf{x}}. $$ It is well known that the maximum of this expression is $\lambda_n$, the largest eigenvalue of the matrix $\mathbf{H}$. Returning to the diagonal elements of the hat matrix, one therefore has $$h_{ii} = \mathbf{e}_i^{t} \mathbf{H} \mathbf{e}_i = \frac{\mathbf{e}_i^{t} \mathbf{H} \mathbf{e}_i}{\mathbf{e}_i^{t} \mathbf{e}_i} \underbrace{\mathbf{e}_i^{t} \mathbf{e}_i}_{ = 1} \leq \lambda_n \leq1 $$ and this gives us what we need.
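Numerically, the spectrum of a hat matrix consists of exactly $p$ ones and $n - p$ zeros, which is slightly stronger than the $\lambda_n \leq 1$ fact used above. A quick Python/numpy check on a random full-rank design:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 15, 4
X = rng.normal(size=(n, p))
H = X @ np.linalg.inv(X.T @ X) @ X.T

evals = np.linalg.eigvalsh(H)    # H is symmetric, so eigvalsh applies
print(evals)                     # numerically: p ones and n - p zeros
print(np.diag(H).max())          # so every h_ii is at most 1
```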
25,685
Diagonal elements of the projection matrix
Unlike the clever "centering" proof given by Drew N, the proof below is kind of brute-force (but it has the advantage of giving an even sharper lower bound than $n^{-1}$, see $(1)$). Partition the design matrix $X \in \mathbb{R}^{n \times p}$ as $X = \begin{bmatrix} e & Z\end{bmatrix}$, where $e$ is the intercept term consisting of $n$ ones, and $Z$ is the matrix consisting of $X$'s remaining $p - 1$ columns. By the block matrix inversion formula, \begin{align} & (X'X)^{-1} = \begin{bmatrix} n & e'Z \\ Z'e & Z'Z \end{bmatrix}^{-1} \\ =& \begin{bmatrix} n^{-1} + n^{-2}e'Z(Z'PZ)^{-1}Z'e & -n^{-1}e'Z(Z'PZ)^{-1} \\ -n^{-1}(Z'PZ)^{-1}Z'e & (Z'PZ)^{-1} \end{bmatrix}, \end{align} where $P = I_{(n)} - n^{-1}ee'$ is the (idempotent) centering projection, and $A := Z'PZ$ is invertible and hence positive definite (the invertibility of $A$ is proved in detail at the end). It then follows from $h_{ii} = x_i'(X'X)^{-1}x_i$ that \begin{align} & h_{ii} = \begin{bmatrix} 1 & z_i' \end{bmatrix} \begin{bmatrix} n^{-1} + n^{-2}e'ZA^{-1}Z'e & -n^{-1}e'ZA^{-1} \\ -n^{-1}A^{-1}Z'e & A^{-1} \end{bmatrix} \begin{bmatrix} 1 \\ z_i \end{bmatrix} \\ =& \begin{bmatrix} n^{-1} + n^{-2}e'ZA^{-1}Z'e - n^{-1}z_i'A^{-1}Z'e & - n^{-1}e'ZA^{-1} + z_i'A^{-1} \end{bmatrix} \begin{bmatrix} 1 \\ z_i \end{bmatrix} \\ =& n^{-1} + n^{-2}e'ZA^{-1}Z'e - n^{-1}z_i'A^{-1}Z'e - n^{-1}e'ZA^{-1}z_i + z_i'A^{-1}z_i \\ \geq & n^{-1} + \left(n^{-1}\sqrt{e'ZA^{-1}Z'e} - \sqrt{z_i'A^{-1}z_i}\right)^2 \tag{1} \\ \geq & n^{-1}. \end{align} In $(1)$, we used the Cauchy-Schwarz inequality \begin{align} |z_i'A^{-1}Z'e|^2 \leq z_i'A^{-1}z_i \times e'ZA^{-1}Z'e. \end{align} Proof that $A$ is invertible: Since $P$ is idempotent, to show $A$ is invertible it suffices to show $\operatorname{rank}(PZ) = p - 1$, which (by the rank-nullity theorem) is implied by $PZx = 0$ having only the solution $x = 0$. If $PZx = 0$, then $Zx \in \operatorname{Ker}(I_{(n)} - n^{-1}ee') = \operatorname{span}(e)$. But since $X$ has full column rank, $e$ does not lie in $\operatorname{span}(Z)$, so $Zx \in \operatorname{span}(e) \cap \operatorname{span}(Z) = \{0\}$, whence $Zx = 0$, which implies $x = 0$ due to $\operatorname{rank}(Z) = p - 1$. This completes the proof.
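The sharper bound in (1) can itself be verified numerically; here is a Python/numpy sketch on a simulated design, with $e$ the intercept column, $Z$ the remaining columns, and $A = Z'PZ$ as in the proof:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 25, 4
e = np.ones(n)
Z = rng.normal(size=(n, p - 1))
X = np.column_stack([e, Z])

H = X @ np.linalg.inv(X.T @ X) @ X.T
P = np.eye(n) - np.outer(e, e) / n          # centering projection
A_inv = np.linalg.inv(Z.T @ P @ Z)

# check h_ii >= 1/n + (sqrt(e'Z A^{-1} Z'e)/n - sqrt(z_i' A^{-1} z_i))^2 for all i
a = np.sqrt(e @ Z @ A_inv @ Z.T @ e) / n
holds = all(
    H[i, i] >= 1.0 / n + (a - np.sqrt(Z[i] @ A_inv @ Z[i])) ** 2 - 1e-10
    for i in range(n)
)
print(holds)
```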
Diagonal elements of the projection matrix
Unlike the clever "centering" proof given by Drew N, the proof below is kind of brutal-force (but it has the advantage of giving even a sharper lower bound than $n^{-1}$, see $(1)$). Partition the des
Diagonal elements of the projection matrix
Unlike the clever "centering" proof given by Drew N, the proof below is kind of brute-force (but it has the advantage of giving an even sharper lower bound than $n^{-1}$, see $(1)$). Partition the design matrix $X \in \mathbb{R}^{n \times p}$ as $X = \begin{bmatrix} e & Z\end{bmatrix}$, where $e$ is the intercept term consisting of $n$ ones, and $Z$ is the matrix consisting of $X$'s remaining $p - 1$ columns. By the block matrix inversion formula, \begin{align} & (X'X)^{-1} = \begin{bmatrix} n & e'Z \\ Z'e & Z'Z \end{bmatrix}^{-1} \\ =& \begin{bmatrix} n^{-1} + n^{-2}e'Z(Z'PZ)^{-1}Z'e & -n^{-1}e'Z(Z'PZ)^{-1} \\ -n^{-1}(Z'PZ)^{-1}Z'e & (Z'PZ)^{-1} \end{bmatrix}, \end{align} where $P = I_{(n)} - n^{-1}ee'$ is idempotent, hence $A := Z'PZ$ is invertible and positive definite (the invertibility of $A$ is proved in detail at the end). It then follows by $h_{ii} = x_i'(X'X)^{-1}x_i$ that \begin{align} & h_{ii} = \begin{bmatrix} 1 & z_i' \end{bmatrix} \begin{bmatrix} n^{-1} + n^{-2}e'ZA^{-1}Z'e & -n^{-1}e'ZA^{-1} \\ -n^{-1}A^{-1}Z'e & A^{-1} \end{bmatrix} \begin{bmatrix} 1 \\ z_i \end{bmatrix} \\ =& \begin{bmatrix} n^{-1} + n^{-2}e'ZA^{-1}Z'e - n^{-1}z_i'A^{-1}Z'e & - n^{-1}e'ZA^{-1} + z_i'A^{-1} \end{bmatrix} \begin{bmatrix} 1 \\ z_i \end{bmatrix} \\ =& n^{-1} + n^{-2}e'ZA^{-1}Z'e - n^{-1}z_i'A^{-1}Z'e - n^{-1}e'ZA^{-1}z_i + z_i'A^{-1}z_i \\ \geq & n^{-1} + \left(n^{-1}\sqrt{e'ZA^{-1}Z'e} - \sqrt{z_i'A^{-1}z_i}\right)^2 \tag{1} \\ \geq & n^{-1}. \end{align} In $(1)$, we used the Cauchy-Schwarz inequality \begin{align} |z_i'A^{-1}Z'e|^2 \leq z_i'A^{-1}z_i \times e'ZA^{-1}Z'e. \end{align} Proof that $A$ is invertible: Since $P$ is idempotent, to show $A$ is invertible, it suffices to show $\operatorname{rank}(PZ) = p - 1$, which (by the rank-nullity theorem) is implied by $PZx = 0$ having only the $0$ solution. If $PZx = 0$, then $Zx \in \operatorname{Ker}(I_{(n)} - n^{-1}ee') = \operatorname{span}(e)$. But since $X$ has full column rank, $\operatorname{span}(e) \cap \operatorname{span}(Z) = \{0\}$, whence $Zx = 0$, which implies $x = 0$ due to $\operatorname{rank}(Z) = p - 1$. This completes the proof.
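As a quick numerical sanity check of the bound $h_{ii} \geq n^{-1}$, here is a Python/numpy sketch; the design matrix is randomly generated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 4
# design matrix with an intercept column e of ones, as in the proof
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

# hat matrix H = X (X'X)^{-1} X' and its diagonal (the leverages h_ii)
H = X @ np.linalg.solve(X.T @ X, X.T)
h = np.diag(H)

print(h.min() >= 1.0 / n)        # True: every leverage is at least 1/n
print(abs(h.sum() - p) < 1e-8)   # True: trace(H) = p, another standard identity
```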
25,686
Diagonal elements of the projection matrix
Here is another simpler (and perhaps more illuminating) proof that is based on the QR decomposition of the design matrix $X$. Suppose the QR decomposition of $X$ is $X = QR$, where $Q \in \mathbb{R}^{n \times p}$ is a matrix whose columns are orthonormal (so that $Q'Q = I_{(p)}$), and $R \in \mathbb{R}^{p \times p}$ is an upper-triangular matrix. Since $\operatorname{rank}(X) = p$, $\operatorname{rank}(R)$ must be $p$ as well, thus $R$ is invertible. Also note that since the first column of $X$ is $e$, the Gram-Schmidt procedure implies that the first column of $Q$ is $\frac{1}{\sqrt{n}}e$. Denote by $e_i$ the length-$n$ column vector of all zeros but with $1$ in the $i$-th position; it then follows that \begin{align} & h_{ii} = e_i'X(X'X)^{-1}X'e_i = e_i'QR(R'Q'QR)^{-1}R'Q'e_i \\ =& e_i'QR(R'R)^{-1}R'Q'e_i = e_i'QRR^{-1}(R')^{-1}R'Q'e_i \\ =& e_i'QQ'e_i = \tilde{q}_i'\tilde{q}_i \\ =& n^{-1} + Q_{i2}^2 + \cdots + Q_{ip}^2 \geq n^{-1}, \end{align} where $\tilde{q}_i' = \begin{bmatrix}\frac{1}{\sqrt{n}} & Q_{i2} & \cdots & Q_{ip}\end{bmatrix}$ denotes the $i$-th row of $Q$. This completes the proof. More details on the QR decomposition: Suppose $X = \begin{bmatrix} e & x_2 & \cdots & x_p \end{bmatrix}$. By assumption, $\{e, x_2, \ldots, x_p\}$ are linearly independent, which allows us to apply the Gram-Schmidt procedure to obtain an orthonormal set $\{q_1, q_2, \ldots, q_p\}$ based on $\{e, x_2, \ldots, x_p\}$ as follows: \begin{align} & z_1 = e, \; q_1 = \frac{z_1}{\|z_1\|}, \tag{1} \\ & z_2 = x_2 - \frac{x_2'z_1}{z_1'z_1}z_1, \; q_2 = \frac{z_2}{\|z_2\|}, \\ & \cdots \cdots \cdots \\ & z_p = x_p - \frac{x_p'z_{p - 1}}{z_{p - 1}'z_{p - 1}}z_{p - 1} - \cdots - \frac{x_p'z_1}{z_1'z_1}z_1, \; q_p = \frac{z_p}{\|z_p\|}. \end{align} In matrix form, the above transformation can be recorded as \begin{align} X = \begin{bmatrix} e & x_2 & \cdots & x_p \end{bmatrix} &= \begin{bmatrix} q_1 & q_2 & \cdots & q_p \end{bmatrix} \begin{pmatrix} \|z_1\| & \frac{x_2'z_1}{\|z_1\|} & \cdots & \frac{x_p'z_1}{\|z_1\|} \\ & \|z_2\| & \cdots & \frac{x_p'z_2}{\|z_2\|} \\ & & \ddots & \vdots \\ & & & \|z_p\| \end{pmatrix}\\ &=: QR. \tag{2} \end{align} It is thus evident from $(1)$ and $(2)$ that $q_1 = \frac{1}{\sqrt{n}}e$.
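The QR argument is also easy to verify numerically. A Python/numpy sketch (note that numpy's `qr` may flip the sign of columns, so the first column is compared in absolute value):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

Q, R = np.linalg.qr(X)            # thin QR: Q is n x p with orthonormal columns
h_qr = np.sum(Q**2, axis=1)       # h_ii = squared norm of the i-th row of Q
h_direct = np.diag(X @ np.linalg.solve(X.T @ X, X.T))

print(np.allclose(h_qr, h_direct))                    # True
print(np.allclose(np.abs(Q[:, 0]), 1/np.sqrt(n)))     # True: first column is +/- e/sqrt(n)
print(h_qr.min() >= 1.0/n - 1e-12)                    # True: the bound follows
```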
25,687
Diagonal elements of the projection matrix
$H = H^\top H$ implies $h_{ii} = h_{ii}^2 + \sum_{j \neq i} h_{ij}^2$, so $h_{ii} \geq h_{ii}^2$, hence $h_{ii} \leq 1$. Assuming presence of an intercept, we have $h_{ii} = 1/n + D_i^2/\left(n-1\right) \geq 1/n$, where $D_i^2$ is the squared Mahalanobis distance between the $i$-th row of a design matrix $X$ with 0-centered regressors and the origin.
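The Mahalanobis identity checks out numerically as well; a Python/numpy sketch (note `np.cov` uses the $n-1$ divisor, matching the formula):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 60, 4
Z = rng.normal(size=(n, p - 1))                  # regressors, without the intercept
X = np.column_stack([np.ones(n), Z])
h = np.diag(X @ np.linalg.solve(X.T @ X, X.T))   # leverages h_ii

Zc = Z - Z.mean(axis=0)                          # 0-centered regressors
S = np.cov(Z, rowvar=False)                      # sample covariance (n-1 divisor)
# squared Mahalanobis distances D_i^2 = z_i' S^{-1} z_i for each centered row
D2 = np.einsum('ij,jk,ik->i', Zc, np.linalg.inv(S), Zc)

print(np.allclose(h, 1/n + D2/(n - 1)))          # True
```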
25,688
Good books/papers on credit scoring
If you are new to the scoring world, your first book should be Naeem Siddiqi's on credit scoring using SAS. If you have not taken the class, go for it. The class's main focus is the overall understanding of scoring (and selling SAS Enterprise Miner for millions of dollars). If you need theory, take a categorical data analysis class and a data mining class from a nearby university. Even after taking these classes you will still need help. Currently the most popular techniques used are logistic regression, neural networks, support vector machines and random forests; clustering, discriminant analysis, factor analysis and principal components are a must as well. Credit Scoring by Elizabeth Mays will also give you a good overview. I also took a credit risk modeling class by SAS Institute, which helped me a little. It is a constant learning process and it's never done. Bayesian folks like their methods as well. Edit: I also forgot to mention that logistic regression is the most popular technique out there and will always be the one that banks continue to use. Other methods are very difficult to sell to upper management, unless your bank cares less about understanding these methods and its focus remains risk taking and money making.
25,689
Good books/papers on credit scoring
I work in the credit scoring field. Even though I like exploring different approaches, I find that logistic regression is often good enough, if not the best approach. I have not surveyed the most recent papers on the topic, but from memory, in most papers you will see that other approaches such as neural net models typically do not offer significant lift in terms of predictive power (as measured by GINI and AR). Also these models tend to be much harder to make sense of to a layman (often the most senior executives do not have backgrounds in statistics), and the scorecard approach using logistic regression seems to offer the easiest-to-explain models. True, most scorecards don't take into account interactions, but again there is no study in the literature that can clearly demonstrate that incorporating interactions consistently and significantly increases predictive power. Having said that, there has recently been some interest in building scorecards using survival analysis techniques, as they hold a few advantages over logistic regression. Namely, we can more easily incorporate macroeconomic factors into the model, and we can use more recent data in the model build instead of having to rely on data from at least 12 months ago (as the binary indicator in logistic regression is usually defined as defaulted within the next 12 months). In that regard my thesis could offer another perspective in that it explores building credit scorecards using survival analysis. I showed how survival analysis scorecards "look and feel" the same as logistic regression scorecards, hence they can be introduced without causing too much trouble. In my thesis I also described the ABBA algorithm, which is a novel approach to binning variables.
https://www.google.com.sg/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CDAQFjAA&url=http%3A%2F%2Fftpmirror.your.org%2Fpub%2Fwikimedia%2Fimages%2Fwikipedia%2Fcommons%2F2%2F2f%2FAbout_Time_-_Building_Credit_Scorecards_with_Survival_Analysis.pdf&ei=8D8MUorrJs2Trgf56YCwCQ&usg=AFQjCNGxWRH1naJS4UqH_ckwzTx3GsaP8g&sig2=kcEvjUUcn_wT93igxpYYDA&bvm=bv.50768961,d.bmk Update: I make no claim to whether my thesis is any good. It's just another perspective from a practitioner in the field.
25,690
Good books/papers on credit scoring
I have referred to Guide to Credit Scoring in R by D. Sharma in the past, and it is a good introductory reference on approaches including logistic regression and tree-based methods. The above guide uses the German Credit Data, which has a rich set of features. If you search for the dataset, you will find other alternative approaches, analyses, and comparisons that may help inform feature selection and model choice for your dataset. Neural networks are a fair choice for a binary classification problem such as this one. In the real world, however, a credit scoring model is also expected to provide reasons for why a loan application (say) was rejected. Therefore it helps to have a model where you can identify which features in one's credit history result in a low credit score and cause an application to be denied. Features in regression and tree-based approaches are easier to interpret compared to neural networks. If you are evaluating purely on fit, a NN is worth a try.
25,691
How can I test the same categorical variable across two populations?
Let me (a) first explain the underlying idea rather than the mechanics - they become more obvious in retrospect. Then (b) I'll talk about the chi-square (and whether it's appropriate - it may not be!), and then (c) I'll talk about how to do it in R.
(a) Under the null, the populations are the same. Imagine you put your two cohorts into one large data set but add a column which holds the cohort labels. Then under the null, the cohort label is effectively just a random label which tells you nothing more about the distribution the observation came from. Under the alternative, of course, the cohort labels matter - knowing the cohort label tells you more than not knowing it because the distributions under the two labels are different. (This immediately suggests some kind of permutation test/randomization test where a statistic - one sensitive to the alternative - computed on the sample is compared with the distribution of the same statistic with the cohort labels reassigned to the rows at random. If you did all possible reassignments it's a permutation test; if you only sample them it's a randomization test.)
(b) So now, how to do a chi-square? You compute expected values under the null. Since the cohort labels don't matter under the null, you compute the expected number in each cell based on the overall distribution:

    Status     A   B  ...  E  ...  G  ...  Total
    Cohort 1:  10  15      18              84
    Cohort 2:   9   7      25              78
    Total:     19  22 ...  43  ...        162

So if the distribution was the same, there'd be no association between cohort and status, and (conditional on the row totals as well as the column totals) the expected number in cell $(i,j)$ is

    row-total-i $\times$ column-total-j / overall-total

So you just get an ordinary chi-square test of independence. HOWEVER! If the status labels form an ordered category, this chi-square test is throwing away a lot of information - it will have low power against interesting alternatives (such as a slight shift toward higher or lower categories).
You should in that situation do something more suitable - that is, something which takes into account that ordering. There are many options.
--
(c) Now about how to do it in R: it depends on how your data are currently set up in R - it would really help to have a reproducible example like a subset of your data! I will assume you have it in a data frame with two columns, one with the status (a factor) and one with the cohort (a second factor). Like so:

       status  cohort
    1       B Cohort1
    2       B Cohort1
    3       D Cohort1
    4       B Cohort1
    5       C Cohort1
    6       D Cohort1
    .
    .
    .
    25      G Cohort2
    26      E Cohort2
    27      E Cohort2
    28      D Cohort2
    29      C Cohort2
    30      G Cohort2

Then if that was a data frame called statusresults you'd get a table like the one I did earlier with:

    > with(statusresults, table(cohort, status))
             status
    cohort    A B C D E F G
      Cohort1 2 6 7 3 0 0 0
      Cohort2 0 0 2 2 4 1 3

And for the chi-square test, you'd just go:

    > with(statusresults, chisq.test(status, cohort))

            Pearson's Chi-squared test

    data:  status and cohort
    X-squared = 18.5185, df = 6, p-value = 0.005059

    Warning message:
    In chisq.test(status, cohort) : Chi-squared approximation may be incorrect

(the warning is because the expected counts are low in some cells, given I used a very small sample)
If you have ordered categories for status you should say so, so that we can discuss other possibilities for the analysis than the plain chi-square.
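The permutation/randomization idea from (a) can be sketched in a few lines. The sketch below is in Python with scipy rather than R, and the status/cohort data are simulated purely for illustration (so the labels really are exchangeable under the null):

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
# made-up data: 160 rows, each with a status label and a cohort label
status = rng.choice(list("ABCDEFG"), size=160)
cohort = np.repeat(["Cohort1", "Cohort2"], [84, 76])

def chi2_stat(status, cohort):
    # cross-tabulate and return Pearson's chi-square statistic
    table = np.array([[np.sum((cohort == g) & (status == s)) for s in "ABCDEFG"]
                      for g in ("Cohort1", "Cohort2")])
    return chi2_contingency(table)[0]

observed = chi2_stat(status, cohort)
# randomization distribution: reassign cohort labels at random, recompute
null = np.array([chi2_stat(status, rng.permutation(cohort)) for _ in range(999)])
p_value = (1 + np.sum(null >= observed)) / (1 + len(null))
```

The p-value is the proportion of reshuffled-label statistics at least as extreme as the observed one (with the usual +1 correction so it is never exactly zero).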
25,692
How can I test the same categorical variable across two populations?
You are right regarding the idea of doing a Chi-squared test. So here it is:

    # Create two data sets (id, outcome and group label)
    Dat1 <- as.data.frame(cbind(1:999, sample(c("A","G","E"), 999, replace=T, prob=c(.2,.4,.4)), "group1"))
    Dat2 <- as.data.frame(cbind(1:500, sample(c("A","G","E"), 500, replace=T, prob=c(.4,.2,.4)), "group2"))

    # Combine data sets
    Dat <- rbind(Dat1, Dat2)

    # Receive descriptive statistics and compute Chi-Square
    attach(Dat)
    table(V3, V2)
    chisq.test(table(V3, V2))
    detach(Dat)

If it is correct, your Chi-Square will be significant, hence there is a significant difference between the distributions of the two groups. For a reference to start with see:
http://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test
http://www.statmethods.net/stats/frequencies.html
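For comparison, here is a hypothetical translation of the same simulation into Python, using numpy and scipy's `chi2_contingency` in place of `chisq.test`:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
# two groups sampling the same categories with different probabilities
g1 = rng.choice(["A", "G", "E"], size=999, p=[.2, .4, .4])
g2 = rng.choice(["A", "G", "E"], size=500, p=[.4, .2, .4])

# 2 x 3 contingency table of group by category
cats = ["A", "E", "G"]
table = np.array([[np.sum(g == c) for c in cats] for g in (g1, g2)])
stat, p, dof, expected = chi2_contingency(table)
print(p < 0.05)   # True: the two group distributions differ significantly
```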
25,693
How can I test the same categorical variable across two populations?
You might be interested in this paper [1]. Excerpt from the abstract:
The goal of the two-sample test (a.k.a. the homogeneity test) is, given two sets of samples, to judge whether the probability distributions behind the samples are the same or not. In this paper, we propose a novel non-parametric method of two-sample test based on a least-squares density ratio estimator. Through various experiments, we show that the proposed method overall produces smaller type-II error (i.e., the probability of judging the two distributions to be the same when they are actually different) than a state-of-the-art method, with slightly larger type-I error (i.e., the probability of judging the two distributions to be different when they are actually the same).
The authors also provide MATLAB code for the same [2].
[1] http://www.ms.k.u-tokyo.ac.jp/2011/LSTT.pdf
[2] http://www.ms.k.u-tokyo.ac.jp/software.html#uLSIF
25,694
Calculating log-likelihood for given MLE (Markov Chains)
Let $ \{ X_i \}_{i=1}^{T}$ be a path of the Markov chain and let $P_{\theta}(X_1, ..., X_T)$ be the probability of observing the path when $\theta$ is the true parameter value (a.k.a. the likelihood function for $\theta$). Using the definition of conditional probability, we know $$ P_{\theta}(X_1, ..., X_T) = P_{\theta}(X_T | X_{T-1}, ..., X_1) \cdot P_{\theta}(X_1, ..., X_{T-1})$$ Since this is a Markov chain, we know that $P_{\theta}(X_T | X_{T-1}, ..., X_1) = P_{\theta}(X_T | X_{T-1} )$, which simplifies this to $$ P_{\theta}(X_1, ..., X_T) = P_{\theta}(X_T | X_{T-1}) \cdot P_{\theta}(X_1, ..., X_{T-1})$$ Now if you repeat this same logic $T$ times, you get $$ P_{\theta}(X_1, ..., X_T) = \prod_{i=1}^{T} P_{\theta}(X_i | X_{i-1} ) $$ where $X_0$ is to be interpreted as the initial state of the process. The terms on the right hand side are just elements of the transition matrix. Since it was the log-likelihood you requested, the final answer is: $$ {\bf L}(\theta) = \sum_{i=1}^{T} \log \Big( P_{\theta}(X_i | X_{i-1} ) \Big) $$ This is the likelihood of a single Markov chain - if your data set includes several (independent) Markov chains then the full likelihood will be a sum of terms of this form.
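The final formula drops straight into code. A short Python sketch (states are encoded as integers indexing the transition matrix, and the path includes the initial state $X_0$):

```python
import numpy as np

def markov_loglik(path, P):
    """Log-likelihood of an observed state path under transition matrix P.

    path[0] is the initial state X_0, so the terms summed are the log
    transition probabilities log P[X_{i-1}, X_i] for i = 1, ..., T."""
    path = np.asarray(path)
    return float(np.sum(np.log(P[path[:-1], path[1:]])))

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
path = [0, 0, 1, 1, 0]   # transitions: 0->0, 0->1, 1->1, 1->0
ll = markov_loglik(path, P)
print(np.isclose(ll, np.log(0.9) + np.log(0.1) + np.log(0.6) + np.log(0.4)))  # True
```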
25,695
What is the standard deviation of the sum of THREE correlated random variables?
The variance of the sum of three variables is given by $Var(aX+bY+cZ) = a^2Var(X) + b^2Var(Y) + c^2Var(Z) + 2abCov(X,Y) + 2acCov(X,Z) + 2bcCov(Y,Z)$
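As a sanity check, the formula is just the quadratic form $w^\top \Sigma\, w$ with $w=(a,b,c)$. A hypothetical Python/NumPy sketch (the simulated dependence structure and weights are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate three correlated variables (arbitrary, made-up dependence)
X = rng.normal(size=100_000)
Y = 0.5 * X + rng.normal(size=100_000)
Z = -0.3 * X + 0.2 * Y + rng.normal(size=100_000)
a, b, c = 2.0, -1.0, 0.5

S = np.cov([X, Y, Z])  # sample covariance matrix, rows = variables

# The formula from the answer, written out term by term
var_formula = (a**2 * S[0, 0] + b**2 * S[1, 1] + c**2 * S[2, 2]
               + 2*a*b * S[0, 1] + 2*a*c * S[0, 2] + 2*b*c * S[1, 2])

# Same quantity as a quadratic form w' S w
w = np.array([a, b, c])
var_quad = w @ S @ w

sd = np.sqrt(var_formula)  # the standard deviation of aX + bY + cZ
```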
25,696
Asymptotic distribution of maximum order statistic of IID random normals
With $M_n:= \mathrm{max}(X_1,\,X_2,\,\dots,\,X_n)$ it can be shown that $(M_n-b_n)/a_n$ is approximately Gumbel for some known $a_n>0$ and $b_n$. See http://www.panix.com/~kts/Thesis/extreme/extreme2.html and "example 1.1.7" quoted there, from the book by de Haan and Ferreira, Extreme Value Theory: An Introduction.
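A quick simulation sketch in Python/NumPy; the norming constants below are the standard textbook choice for the normal case, $a_n = (2\ln n)^{-1/2}$ and $b_n = \sqrt{2\ln n} - \frac{\ln\ln n + \ln 4\pi}{2\sqrt{2\ln n}}$, which the answer itself does not spell out (convergence is slow, at a $1/\log n$ rate, so expect only rough agreement at moderate $n$):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1_000          # each maximum is over n iid standard normals
reps = 5_000       # number of simulated maxima

# Standard norming constants for the normal distribution
ln_n = np.log(n)
a_n = 1.0 / np.sqrt(2 * ln_n)
b_n = np.sqrt(2 * ln_n) - (np.log(ln_n) + np.log(4 * np.pi)) / (2 * np.sqrt(2 * ln_n))

M = rng.normal(size=(reps, n)).max(axis=1)
Z = (M - b_n) / a_n                    # should be roughly standard Gumbel

def gumbel_cdf(x):
    return np.exp(-np.exp(-x))

# Compare the empirical CDF of Z with the Gumbel CDF on a grid
grid = np.linspace(-2, 5, 50)
ecdf = (Z[:, None] <= grid).mean(axis=0)
max_gap = np.abs(ecdf - gumbel_cdf(grid)).max()
```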
25,697
Asymptotic distribution of maximum order statistic of IID random normals
Check the book Tail Risk of Hedge Funds: An Extreme Value Application, chapter 3, section 3.1. They mention that the limiting distribution of the maxima follows either a Gumbel, Fréchet or Weibull distribution, whatever the parent distribution $F$.
25,698
Remove duplicates from training set for classification
No, it is not acceptable. The repetitions are what provide the weight of the evidence. If you remove your duplicates, a four-leaf clover is as significant as a regular, three-leaf clover, since each will occur once, whereas in real life there is a four-leaf clover for every 10,000 regular clovers. Even if your priors are "quite skewed", as you say, the purpose of the training set is to accumulate real-life experience, which you will not achieve if you lose the frequency information.
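A toy illustration of the clover argument (a hypothetical Python sketch, stdlib only; the 1-in-10,000 dataset is made up to mirror the analogy): deduplication throws away exactly the frequency information that class priors are built from.

```python
from collections import Counter

# Hypothetical training set: (features, label) pairs, with natural repetitions
train = [("three-leaf", "common")] * 9_999 + [("four-leaf", "rare")]

def class_priors(data):
    """Empirical class frequencies, i.e. the priors a classifier would learn."""
    counts = Counter(label for _, label in data)
    total = sum(counts.values())
    return {label: k / total for label, k in counts.items()}

priors = class_priors(train)             # reflects real-life frequencies
dedup_priors = class_priors(set(train))  # after removing duplicate rows
```

With duplicates kept, the "rare" prior is 1/10,000; after deduplication both classes look equally likely, which is exactly the distortion described above.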
25,699
Remove duplicates from training set for classification
I agree with the previous answer, but here are my reservations. It is advisable to remove duplicates while segregating samples for training and testing for specific classifiers such as Decision Trees. Say 20% of your data belonged to a particular class and $\frac{1}{4}$ of those seeped into testing; then algorithms such as Decision Trees will create gateways to that class with the duplicate samples. This could provide misleading results on the test set, because essentially there is a very specific gateway to the correct output. When you deploy that classifier to completely new data, it could perform astonishingly poorly if there are no samples similar to the above-said 20% of samples. Argument: one may argue that this situation points to a flawed dataset, but I think this is true of real-life applications. Removing duplicates for Neural Networks, Bayesian models, etc. is not acceptable.
25,700
How best to analyze length of stay data in a hospital-based RCT?
I'm actually embarking on a project that does exactly this, although with observational, rather than clinical, data. My thoughts have been that because of the unusual shape of most length of stay data, and the really well characterized time scale (you know both the origin and exit time essentially perfectly), the question lends itself really well to survival analysis of some sort. Three options to consider: Cox proportional hazards models, as you've suggested, for comparing between the treatment and control arms. Straight Kaplan-Meier curves, using a log-rank or one of the other tests to examine the differences between them. Miguel Hernan has argued that this is actually the preferable method to use in many cases, as it does not necessarily assume a constant hazard ratio. As you've got a clinical trial, the difficulty of producing covariate-adjusted Kaplan-Meier curves shouldn't be a problem, but even if there are some residual variables you want to control for, this can be done with inverse-probability-of-treatment weights. Parametric survival models. These are, admittedly, less commonly used, but in my case I need a parametric estimate of the underlying hazard, so these are really the only way to go. I wouldn't suggest jumping straight into using the Generalized Gamma model. It's something of a pain to work with - I'd try a simple Exponential, Weibull and Log-Normal and see if any of those produce acceptable fits.
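To sketch the Kaplan-Meier option concretely, here is a hypothetical Python/NumPy example with simulated, right-skewed length-of-stay data for two arms (a real analysis would use a survival package such as `lifelines` or R's `survival`, and would handle censoring from in-hospital deaths or transfers explicitly):

```python
import numpy as np

def kaplan_meier(times, events):
    """Hand-rolled Kaplan-Meier survival estimate.

    times  : observed follow-up times (length of stay, in days)
    events : 1 if discharge observed, 0 if censored
    Returns the distinct event times and the estimated S(t) just after each.
    """
    times, events = np.asarray(times), np.asarray(events)
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                   # still in hospital at t
        d = np.sum((times == t) & (events == 1))       # discharged at t
        s *= 1 - d / at_risk                           # product-limit update
        surv.append(s)
    return np.unique(times[events == 1]), np.array(surv)

rng = np.random.default_rng(2)
# Made-up skewed LOS data (days) for two trial arms, no censoring for simplicity
control = np.round(rng.lognormal(1.6, 0.6, 200)) + 1
treated = np.round(rng.lognormal(1.4, 0.6, 200)) + 1
t_c, s_c = kaplan_meier(control, np.ones_like(control))
t_t, s_t = kaplan_meier(treated, np.ones_like(treated))
```

Plotting `s_c` and `s_t` against their event times gives the two survival curves; a log-rank test (or IPTW weighting, as mentioned above) would then formalize the comparison.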