| idx | question | answer |
|---|---|---|
35,701 | A statistical interaction is significant, but the author denies it. Why?

I don't have the required reputation to vote, so I'll add it as an answer instead.
I fully agree with what @whuber said. The typical approach in this kind of study is to declare a level of significance a priori. Quoting from the article, the authors indeed do this:

> To accommodate the many comparisons made, two-tailed P values of less than 0.01 for the secondary outcomes and less than 0.001 for other outcomes were considered to indicate statistical significance

and

> ... for these exploratory analyses, the Breslow–Day test of homogeneity was used and P values of less than 0.05 were considered to indicate statistical significance
To call a result "marginally significant" is plainly wrong when you have already declared your significance levels: either a result is significant, or it is not. In addition, the authors calculated that the study had 80% power, assuming a significance level of $\alpha = 0.05$.
On the other hand, if the authors provide an effect size (such as the OR) that has a p-value < 0.05 but is extremely close to 1, then I think it is fully justified to say "this was indeed significant, but has no clinical relevance due to the low effect size".
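The power statement above can be made concrete. As a hedged sketch (the actual trial's outcomes and tests are more involved than this), here is the textbook normal-approximation power calculation for a two-sided two-sample test at $\alpha = 0.05$, finding the per-group sample size that reaches 80% power for a medium standardized effect:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample_z(d, n):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05
    for standardized mean difference d with n observations per group."""
    z_crit = 1.959964  # Phi^{-1}(0.975)
    return norm_cdf(d * sqrt(n / 2.0) - z_crit)

# Smallest per-group n reaching 80% power for a medium effect (d = 0.5);
# standard references give a similar figure (~64 with the t distribution).
n = 2
while power_two_sample_z(0.5, n) < 0.80:
    n += 1
print(n)
```

This is only the generic calculation behind statements like "80% power at alpha = 0.05"; the article's own computation would depend on its outcome measures.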
35,702 | The concept of efficiency

Efficiency is a "per se" concept in the sense that it is a measure of how variable (and biased) the estimator is around the "true" parameter. There is an actual numeric value for efficiency associated with a given estimator at a given sample size for a given loss function. This number depends on the estimator AND the sample size AND the loss function.
Asymptotic efficiency looks at how efficient the estimator is as the sample size increases. More important is how rapidly the estimator becomes efficient, but this can be more difficult to determine.
Relative efficiency looks at how efficient the estimator is relative to an alternative estimator (typically at a GIVEN sample-size).
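As an illustrative sketch (not part of the original answer): the relative efficiency of the sample median versus the sample mean for Gaussian data can be estimated by simulation under MSE loss; the mean is more efficient, and the median's asymptotic relative efficiency is $2/\pi \approx 0.64$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 20000

# Draw many samples of size n from N(0, 1); the true location is 0,
# so squared estimates are squared errors.
samples = rng.normal(size=(reps, n))
mse_mean = np.mean(np.mean(samples, axis=1) ** 2)
mse_median = np.mean(np.median(samples, axis=1) ** 2)

# Relative efficiency of the median with respect to the mean (MSE loss)
rel_eff = mse_mean / mse_median
print(round(rel_eff, 2))  # close to 2/pi ~ 0.64
```

The sample size and replication count here are arbitrary choices; the finite-sample efficiency at n = 100 is slightly above the asymptotic value.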
Efficiency requires the specification of some loss function. Originally this was variance, when only unbiased estimators were considered. These days it is most often MSE (mean squared error, which accounts for both bias and variability). Other loss functions can be used. The classical Cramér–Rao bound applied to unbiased estimators only, but has been extended to many of these other loss functions (most especially MSE loss).
An important adjunct concept is admissibility and domination of estimators.
The Wikipedia entry has many links.
35,703 | The concept of efficiency

I wonder at the global relevance of a concept of efficiency outside [and even inside] the restricted case of unbiased estimators. The general (frequentist) version is that the variance of an estimator $\delta$ of [any transform of] $\theta$ with bias $b(\theta)$ is bounded below by
$$
I(\theta)^{-1}\,(1+b'(\theta))^2
$$
while a Bayesian version is the van Trees inequality on the integrated squared error loss
$$
\left(\mathbb{E}(I(\theta))+I(\pi)\right)^{-1}
$$
where $I(\theta)$ and $I(\pi)$ are the Fisher information of the sampling model and of the prior, respectively. But this opens a whole can of worms, in my opinion, since:
- establishing that a given estimator is efficient requires computing both the bias and the variance of that estimator, not an easy task when considering a Bayes estimator or even the James-Stein estimator. I actually do not know if any of the estimators dominating the standard Normal mean estimator has been shown to be efficient (although there exist results for closed-form expressions of the James-Stein estimator's quadratic risk, including one of mine in the Canadian Journal of Statistics). Or is there a result indicating that a (any?) proper Bayes estimator associated with the quadratic loss is by default efficient in either the first or second sense?
- the initial Fréchet-Darmois-Cramér-Rao bound is restricted to unbiased estimators (i.e., $b(\theta)\equiv 0$) and is unable to produce efficient estimators in all settings except for the natural parameter of exponential families. Moving to the general case means there is one efficiency notion for every bias function $b(\theta)$, which makes the notion quite weak, and it still does not necessarily produce efficient estimators; this is the major impediment to taking the notion seriously;
- moving from the variance to the squared error loss is not more "natural" than using any [other] convex combination of variance and squared bias, creating a whole new class of optimalities;
- I never got into the van Trees inequality, so I cannot say much, except that the comparison between various priors is delicate since the integrated risks are against different parameter measures.
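The frequentist bound above can be checked numerically in a simple case (a hypothetical sketch, not from the original answer): for an i.i.d. $N(\theta,1)$ sample, the shrinkage estimator $\delta(x)=c\,\bar x$ has bias $b(\theta)=(c-1)\theta$, so the bound is $(1+b'(\theta))^2/I_n(\theta)=c^2/n$, and the estimator's variance attains it exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, c, reps = 2.0, 50, 0.8, 100000

# Shrinkage estimator c * xbar on N(theta, 1) samples of size n.
xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
delta = c * xbar

# Biased Cramer-Rao bound: (1 + b'(theta))^2 / I_n(theta)
# with b(theta) = (c - 1) * theta and I_n(theta) = n, i.e. c^2 / n.
bound = c**2 / n
emp_var = delta.var()
print(emp_var, bound)  # the two agree: c * xbar attains the biased bound
```

Of course, attaining this bound says nothing about the MSE ranking of $c\,\bar x$ against $\bar x$, which is exactly the point of the answer.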
35,704 | The concept of efficiency

Yes, an efficient estimator is one that attains the Cramér–Rao bound (CRB); thus only unbiased estimators are considered. It is not a characterization, it is a definition.
35,705 | Where is the explanatory effect of common variance among covariates accounted for in regression procedures?

Even though you say that the geometry of this is fairly clear to you, I think it is a good idea to review it. I made this back-of-an-envelope sketch:
Left subplot is the same figure as in the book: consider two predictors $x_1$ and $x_2$; as vectors, $\mathbf x_1$ and $\mathbf x_2$ span a plane in the $n$-dimensional space, and $\mathbf y$ is being projected onto this plane resulting in the $\hat {\mathbf y}$.
Middle subplot shows the $X$ plane in the case when $\mathbf x_1$ and $\mathbf x_2$ are not orthogonal, but both have unit length. The regression coefficients $\beta_1$ and $\beta_2$ can be obtained by a non-orthogonal projection of $\hat{\mathbf y}$ onto $\mathbf x_1$ and $\mathbf x_2$: that should be pretty clear from the picture. But what happens when we follow the orthogonalization route?
The two orthogonalized vectors $\mathbf z_1$ and $\mathbf z_2$ from Algorithm 3.1 are also shown on the figure. Note that each of them is obtained via a separate Gram-Schmidt orthogonalization procedure (a separate run of Algorithm 3.1): $\mathbf z_1$ is the residual of $\mathbf x_1$ when regressed on $\mathbf x_2$, and $\mathbf z_2$ is the residual of $\mathbf x_2$ when regressed on $\mathbf x_1$. Therefore $\mathbf z_1$ and $\mathbf z_2$ are orthogonal to $\mathbf x_2$ and $\mathbf x_1$ respectively, and their lengths are less than $1$. This is crucial.
As stated in the book, the regression coefficient $\beta_i$ can be obtained as $$\beta_i = \frac{\mathbf z_i \cdot \mathbf y}{\|\mathbf z_i\|^2} =\frac{\mathbf e_{\mathbf z_i} \cdot \mathbf y}{\|\mathbf z_i\|},$$ where $\mathbf e_{\mathbf z_{i}}$ denotes a unit vector in the direction of $\mathbf z_i$. When I project $\hat{\mathbf y}$ onto $\mathbf z_i$ on my drawing, the length of the projection (shown on the figure) is the numerator of this fraction. To get the actual $\beta_i$ value, one needs to divide by the length of $\mathbf z_i$, which is smaller than $1$, i.e. $\beta_i$ will be larger than the length of the projection.
Now consider what happens in the extreme case of very high correlation (right subplot). Both $\beta_i$ are sizeable, but both $\mathbf z_i$ vectors are tiny, and the projections of $\hat{\mathbf y}$ onto the directions of $\mathbf z_i$ will also be tiny; this, I think, is what is ultimately worrying you. However, to get the $\beta_i$ values, we have to rescale these projections by the inverse lengths of the $\mathbf z_i$, obtaining the correct values.
> Following the Gram-Schmidt procedure, the residual of X1 or X2 on the other covariates (in this case, just each other) effectively remove the common variance between them (this may be where I am misunderstanding), but surely doing so removes the common element that manages to explain the relationship with Y?
To repeat: yes, the "common variance" is almost (but not entirely) "removed" from the residuals -- that's why projections on $\mathbf z_1$ and $\mathbf z_2$ will be so short. However, the Gram-Schmidt procedure can account for it by normalizing by the lengths of $\mathbf z_1$ and $\mathbf z_2$; the lengths are inversely related to the correlation between $\mathbf x_1$ and $\mathbf x_2$, so in the end the balance gets restored.
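This can be verified numerically (a hypothetical sketch, not from the book): residualize each predictor on the other, apply the formula $\beta_i = \mathbf z_i\cdot\mathbf y/\|\mathbf z_i\|^2$, and compare with the joint least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two correlated unit-length predictors and a response.
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)
x1, x2 = x1 / np.linalg.norm(x1), x2 / np.linalg.norm(x2)
y = 3.0 * x1 - 1.0 * x2 + 0.01 * rng.normal(size=n)

def residual(a, b):
    """Residual of a after regressing on b (no intercept)."""
    return a - (a @ b) / (b @ b) * b

# Separate Gram-Schmidt runs: z1 = resid(x1 | x2), z2 = resid(x2 | x1)
z1, z2 = residual(x1, x2), residual(x2, x1)
beta1 = (z1 @ y) / (z1 @ z1)
beta2 = (z2 @ y) / (z2 @ z2)

# Compare with the joint least-squares fit on [x1, x2]
beta_ls, *_ = np.linalg.lstsq(np.column_stack([x1, x2]), y, rcond=None)
print(np.allclose([beta1, beta2], beta_ls))  # True
```

The short residual vectors do yield small projections of $\mathbf y$, but dividing by their (small) squared lengths restores the full-size coefficients, exactly as described.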
Update 1
Following the discussion with @mpiktas in the comments: the above description is not how Gram-Schmidt procedure would usually be applied to compute regression coefficients. Instead of running Algorithm 3.1 many times (each time rearranging the sequence of predictors), one can obtain all regression coefficients from the single run. This is noted in Hastie et al. on the next page (page 55) and is the content of Exercise 3.4. But as I understood OP's question, it referred to the multiple-runs approach (that yields explicit formulas for $\beta_i$).
Update 2
In reply to OP's comment:
> I am trying to understand how 'common explanatory power' of a (sub)set of covariates is 'spread between' the coefficient estimates of those covariates. I think the explanation lies somewhere between the geometric illustration you have provided and mpiktas point about how the coefficients should sum to the regression coefficient of the common factor
I think if you are trying to understand how the "shared part" of the predictors is being represented in the regression coefficients, then you do not need to think about Gram-Schmidt at all. Yes, it will be "spread out" between the predictors. Perhaps a more useful way to think about it is in terms of transforming the predictors with PCA to get orthogonal predictors. In your example there will be a large first principal component with almost equal weights for $x_1$ and $x_2$. So the corresponding regression coefficient will have to be "split" between $x_1$ and $x_2$ in equal proportions. The second principal component will be small and $\mathbf y$ will be almost orthogonal to it.
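The PCA intuition can be sketched numerically (a hypothetical example, not from the original answer): with two strongly correlated predictors built from a shared factor, the first principal component weights them almost equally, so its explanatory power is split roughly evenly between them:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two highly correlated predictors built from a shared factor f.
f = rng.normal(size=n)
x1 = f + 0.05 * rng.normal(size=n)
x2 = f + 0.05 * rng.normal(size=n)
X = np.column_stack([x1 - x1.mean(), x2 - x2.mean()])

# Principal directions from the SVD of the centered predictor matrix.
_, s, vt = np.linalg.svd(X, full_matrices=False)
w = vt[0]          # weights of the first principal component
print(np.abs(w))   # approximately [0.707, 0.707]: an equal split
print(s[0] / s[1]) # the first component dominates the second
```

The second component here is tiny, matching the statement that $\mathbf y$ is almost orthogonal to it.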
In my answer above I assumed that you are specifically confused about the Gram-Schmidt procedure and the resulting formula for $\beta_i$ in terms of $\mathbf z_i$.
35,706 | Where is the explanatory effect of common variance among covariates accounted for in regression procedures?

The GS procedure would start with $X_1$ and then move to orthogonalizing $X_2$. Since $X_1$ and $X_2$ share $X$, the result would be practically zero in your example. But the common element $X$ remains, because we started with $X_1$, and $X_1$ still contains $X$.
Since $X_1$ and $X_2$ share the common factor $X$, the remainder of $X_2$ after orthogonalization is practically zero, as stated in the quotation.
In this case one could argue that the original multiple regression problem is ill-posed, so there is no point in proceeding; i.e., we should stop the GS process and restate the original problem as $Y\sim X_1$. In this case we do not lose the common factor $X$ and correctly disregard $X_2$, since it does not give us any new information that we do not already have.
Of course, we can proceed with the GS procedure, calculate the coefficient for $X_2$, and transform back to the original multiple regression problem. Since we do not have perfect collinearity, this is possible in theory; in practice it will depend on the numerical stability of the algorithms. Since
$$\alpha X_1+ \beta X_2 = (\alpha+\beta)X +\alpha\epsilon_1 + \beta\epsilon_2, $$
the regression $Y\sim X_1 + X_2$ will produce coefficients $\alpha$ and $\beta$ such that $\alpha+\beta \approx 1$ (we will not have strict equality because of $\epsilon_1$ and $\epsilon_2$).
Here is the example in R:
> set.seed(1001)
> x<-rnorm(1000)
> y<-x+rnorm(1000, sd = 0.1)
> x1 <- x + rnorm(1000, sd =0.001)
> x2 <- x + rnorm(1000, sd =0.001)
> lm(y~x1+x2)
Call:
lm(formula = y ~ x1 + x2)
Coefficients:
(Intercept) x1 x2
-0.0003867 -1.9282079 2.9185409
Here I skipped the GS procedure, because lm gave feasible results, and in that case recalculating the coefficients from the GS procedure does not fail.
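The same check can be replicated outside R; a hypothetical numpy version (with its own seed, so the individual coefficients differ from the R run) confirms that although the two coefficients are individually unstable, their sum stays close to the common slope of 1:

```python
import numpy as np

rng = np.random.default_rng(1001)
n = 1000

# Nearly collinear predictors sharing a common factor x.
x = rng.normal(size=n)
y = x + rng.normal(scale=0.1, size=n)
x1 = x + rng.normal(scale=0.001, size=n)
x2 = x + rng.normal(scale=0.001, size=n)

# Least-squares fit with an intercept column.
X = np.column_stack([np.ones(n), x1, x2])
(b0, a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
print(a, b, a + b)  # a and b individually can be large; a + b is near 1
```
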
35,707 | Prediction Intervals with Heteroscedasticity

It would depend on the nature of the heteroskedasticity. If you want a prediction interval, you usually need a parametric specification like:
$$
y_i \sim N(\mathbf{x}_i'\beta,\sigma_i(\mathbf{x}_i,\mathbf{z}_i ))
$$
i.e. $y_i$ is normally distributed with mean $\mathbf{x}_i'\beta$, and standard deviation $\sigma_i(\mathbf{x}_i,\mathbf{z}_i )$, where the standard deviation is some known function of the $\mathbf{x}_i$ or perhaps some other set of variables $\mathbf{z}_i $ , that way you can estimate the standard deviation for each $i^{th}$ observation.
Examples of possible functions include $\sigma^2_i(\mathbf{x}_i)=\sigma^2 x_{i,k}$ (studies of firm profits, an example from Greene's "Econometric Analysis", 7th edition, Ch. 9), where $x_{i,k}$ is the $i^{th}$ observation of the $k^{th}$ explanatory variable, or, when working with time series data, GARCH and/or stochastic volatility specifications.
You can use the estimates $\hat \sigma_i(\mathbf{x}_i,\mathbf{z}_i )$ as the standard errors for your prediction intervals if you like. I will forgo a formal treatment here because accounting for estimation error in $\hat \sigma_i(\mathbf{x}_i,\mathbf{z}_i )$ can be complicated, but with a sufficiently large sample, ignoring the estimation error does not affect the prediction interval much. In short, it is not necessary to open that can of worms here. For a more detailed explanation of all this and more examples, see Wooldridge's book "Introductory Econometrics: A Modern Approach", Ch. 8.
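A hedged sketch of this idea (the variance function and all names below are invented for illustration): model $\log\sigma_i^2$ as linear in $x_i$, estimate it from the log squared OLS residuals (correcting for the mean of $\log\chi^2_1$, roughly $-1.2704$), and form per-observation 95% intervals:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Simulated heteroskedastic data: the error sd grows with x.
x = rng.uniform(-2, 2, size=n)
sd = np.exp(0.2 + 0.5 * x)
y = 1.0 + 2.0 * x + sd * rng.normal(size=n)

# Step 1: OLS fit for the conditional mean.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Step 2: regress log(e^2) on x; since E[log chi^2_1] = -1.2704,
# add it back to recover an estimate of log sigma^2.
gamma, *_ = np.linalg.lstsq(X, np.log(resid**2), rcond=None)
sd_hat = np.exp(0.5 * (X @ gamma + 1.2704))

# 95% prediction interval per observation and its empirical coverage.
lo = X @ beta - 1.96 * sd_hat
hi = X @ beta + 1.96 * sd_hat
coverage = np.mean((y >= lo) & (y <= hi))
print(round(coverage, 3))  # close to 0.95
```

This ignores the estimation error in both $\hat\beta$ and $\hat\gamma$, which, as noted above, matters little in large samples.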
The problem is that when people refer to heteroskedastic or "robust" regression, they are usually referring to the situation in which the precise nature of the heteroskedasticity (the function $\sigma_i(\mathbf{x}_i,\mathbf{z}_i )$) is not known, in which case a White or two-step estimator is used. These offer consistent estimates for $var(\hat \beta)$ but not for the $\sigma_i$, and so you have no natural way to estimate prediction intervals. I would argue that prediction intervals are not meaningful in this context anyway. The idea behind these sandwich-type estimators is to consistently estimate the standard errors of the coefficients $\hat \beta$ without the burden of offering accurate prediction intervals for each individual observation, thus making the estimates more "robust".
Edit:
Just to be clear, the above only considers least squares regression. Other forms of non-parametric regression, such as quantile regression, may offer means of obtaining a prediction interval without parametric specification of residual standard error. | Prediction Intervals with Heteroscedasticity | It would depend on the nature of the heteroskedasticity. If you wanted a prediction interval, you usually need a parametric specification like:
$$
y_i \sim N(\mathbf{x}_i'\beta,\sigma_i(\mathbf{x}_i, | Prediction Intervals with Heteroscedasticity
It would depend on the nature of the heteroskedasticity. If you wanted a prediction interval, you usually need a parametric specification like:
$$
y_i \sim N(\mathbf{x}_i'\beta,\sigma_i(\mathbf{x}_i,\mathbf{z}_i ))
$$
i.e. $y_i$ is normally distributed with mean $\mathbf{x}_i'\beta$, and standard deviation $\sigma_i(\mathbf{x}_i,\mathbf{z}_i )$, where the standard deviation is some known function of the $\mathbf{x}_i$ or perhaps some other set of variables $\mathbf{z}_i $ , that way you can estimate the standard deviation for each $i^{th}$ observation.
Examples of possible functions include; $\sigma^2_i(\mathbf{x}_i)=\sigma^2x_{i,k}$ (Studies of firm profits, an example from Greene's "Econometric Analysis" 7th edition CH 9), where $x_{i,k}$ is the $i^{th}$ observation of the $k^{th}$ dependent variable, or, if working with time series data, GARCH and/or stochastic volatility specifications.
You can use the estimates $\hat \sigma_i(\mathbf{x}_i,\mathbf{z}_i )$ as the standard errors for your prediction intervals if you like. I will forgo a formal treatment here because accounting for estimation errors in $\hat \sigma_i(\mathbf{x}_i,\mathbf{z}_i )$ can be complicated but, with a sufficiently large sample, ignoring the estimation error does not effect the prediction interval that much. In short, it is not necessary to open that can of worms here. For a more detailed explanation of all this and more examples, see Wooldridge's book "Introductory Econometrics: A Modern Approach", Ch 8.
The problem is that when people refer to heteroskedastic or "robust" regression, they are usually referring to the situation in which the precise nature of the heteroskedasticity (the function $\sigma_i(\mathbf{x}_i,\mathbf{z}_i )$) is not known, in which case a White or two-step estimator is used. These offer consistent estimates for $var(\hat \beta)$ but not for the $\sigma_i$, and so you have no naturally way to estimate prediction intervals. I would argue that prediction intervals are not meaningful in this context anyway. The idea behind these sandwich type estimators is to consistently estimate the standard error of the coefficients, $\hat \beta$, without the burden of offering accurate prediction intervals for each individual observation, thus making the estimates more "robust".
Edit:
Just to be clear, the above only considers least squares regression. Other forms of non-parametric regression, such as quantile regression, may offer means of obtaining a prediction interval without parametric specification of residual standard error.
35,708 | Prediction Intervals with Heteroscedasticity | Nonparametric quantile regression gives a very general approach that allows for both heteroscedasticity and nonlinearity. See section 9: http://www.econ.uiuc.edu/~roger/research/rq/vig.pdf
UPDATE:
A reasonable approximation for a 90% prediction interval is the space between the 5th-percentile regression curve and the 95th-percentile regression curve. (Depending on the details of the curve estimation technique and the sparsity of the data, you might want to use something more like the 4th and 96th percentiles to be "conservative"). Intuition for this type of nonparametric prediction interval is here on wikipedia.
This answer is just a starting point. A significant amount of work has been done on quantile regression prediction intervals. Or just make nonparametric regression prediction intervals.
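A crude numerical stand-in for the quantile-curve idea (binned empirical percentiles in Python, not Koenker's `rq`; the heteroscedastic data-generating model is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.uniform(0, 1, n)
y = 3 * x + rng.normal(0, 1 + 2 * x)          # residual sd grows with x

# crude quantile "curves": empirical 5th/95th percentiles within each x-bin
bins = np.linspace(0, 1, 21)
b = np.digitize(x, bins) - 1
lo = np.array([np.quantile(y[b == j], 0.05) for j in range(20)])
hi = np.array([np.quantile(y[b == j], 0.95) for j in range(20)])

# the band between the two quantile curves approximates a 90% prediction interval
inside = (y > lo[b]) & (y < hi[b])
```

The band widens with $x$, which is the point: a nonparametric prediction interval adapts to the heteroscedasticity without a variance model.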
35,709 | Prediction Intervals with Heteroscedasticity | If the regression of your response on your explanatory variable is a straight line and your variance increases with the explanatory variable, a weighted regression model is needed with $1/x$ or (if your nonconstant variance is more extreme) $1/x^2$ as your weight. This weights your variance by your x value, so that there's a proportional relationship.
Here's code with the weights included in the model and prediction. Notice that you need to add the weights to both your original dataset and your new dataset.
Thanks to @PopcornKing for his original code from Calculating prediction intervals from heteroscedastic data.
library(ggplot2)

# simulate data whose residual sd grows linearly in x
dummySamples <- function(n, slope, intercept, slopeVar){
    x = runif(n)
    y = slope*x + intercept + rnorm(n, mean=0, sd=slopeVar*x)
    return(data.frame(x=x, y=y))
}
myDF <- dummySamples(20000, 3, 0, 5)
plot(myDF$x, myDF$y)

# weighted fit: weights 1/x^2 counter the variance growing as x^2
w = 1/myDF$x^2
t = lm(y ~ x, data=myDF, weights=w)
summary(t)

# prediction intervals on a grid, supplying weights for the new x values too
# (the grid starts at 0.01 to avoid an infinite weight at x = 0)
newdata = data.frame(x=seq(0.01, 1, 0.01))
w = 1/newdata$x^2
p1 = predict.lm(t, newdata, interval = 'prediction', weights=w)

a <- ggplot()
a <- a + geom_point(data=myDF, aes(x=x, y=y), shape=1)
a <- a + geom_abline(intercept=t$coefficients[1], slope=t$coefficients[2], color='blue')
newdata$lwr = p1[,"lwr"]
newdata$upr = p1[,"upr"]
a <- a + geom_ribbon(data=newdata, aes(x=x, ymin=lwr, ymax=upr), fill='yellow', alpha=0.3)
a
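A quick check of the logic behind the `1/x^2` weights, in Python rather than the author's R (the data mimic `dummySamples`; none of this is part of the original answer): with sd proportional to $x$, weighted least squares with weights $1/x^2$ coincides exactly with ordinary least squares on the model divided through by $x$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.uniform(0.1, 1.0, n)
y = 3 * x + rng.normal(0, 5 * x)              # sd grows linearly in x

# weighted least squares with weights 1/x^2, mirroring lm(y ~ x, weights = 1/x^2)
X = np.column_stack([np.ones(n), x])
w = 1 / x**2
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

# identical fit from the homoskedastic transformed model: regress y/x on 1/x and 1
Z = np.column_stack([1 / x, np.ones(n)])
gamma = np.linalg.lstsq(Z, y / x, rcond=None)[0]
```

The transformed residuals $\varepsilon/x$ have constant variance, which is why the two fits agree term by term.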
35,710 | Test for significant excess of significant p-values across multiple comparisons | There are a number of methods for combining $p$-values which could be considered.
Birnbaum, in his paper "Combining independent tests of significance" (available here), points out that the problem is poorly specified. This may account for the number of methods available and their differing behaviour. The null hypothesis $H_0$ is well defined: all $p_i$ have a uniform distribution on the unit interval. There are two classes of alternative hypothesis:
$H_A$: all $p_i$ have the same (unknown) non-uniform, non-increasing density;
$H_B$: at least one $p_i$ has an (unknown) non-uniform, non-increasing density.
If all the tests being combined come from what are basically replicates then $H_A$ is appropriate, whereas if they are of different kinds of test or different conditions then $H_B$ is appropriate. Note that Birnbaum specifically considers the possibility that the tests being combined may be very different, for instance some tests of means, some of variances, and so on.
Of the eponymous methods, Fisher's method (sum of logs, sum of $\chi^2_2$) and Tippett's method (minimum $p$) respond well when the alternative is $H_B$, whereas Stouffer's method (sum of $z$s) and Edgington's method (sum of $p$) may be preferred when $H_A$ is the alternative of choice.
Loughin's extensive simulations "A systematic comparison of methods for combining $p$-values from independent tests" available here may also be of interest.
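A small Monte Carlo sketch of the Fisher-versus-Edgington contrast under an $H_B$-type alternative (Python; the Beta(0.1, 1) signal and the simulation sizes are arbitrary choices for illustration, not from the references):

```python
import numpy as np

rng = np.random.default_rng(3)
k, trials = 20, 100_000
p0 = rng.uniform(size=(trials, k))            # H0: every p_i uniform on (0, 1)

fisher0 = -2 * np.log(p0).sum(axis=1)         # Fisher: sum of logs, chi^2_{2k} null
edg0 = p0.sum(axis=1)                         # Edgington: sum of p, ~N(k/2, k/12) null

# an H_B-style alternative: a single test carries the signal (p ~ Beta(0.1, 1))
p1 = p0.copy()
p1[:, 0] = rng.beta(0.1, 1.0, trials)
fisher1 = -2 * np.log(p1).sum(axis=1)
edg1 = p1.sum(axis=1)

# power at the 5% level, using simulated null critical values
pow_fisher = (fisher1 > np.quantile(fisher0, 0.95)).mean()
pow_edg = (edg1 < np.quantile(edg0, 0.05)).mean()   # a real signal pulls the sum down
```

With only one non-null test out of twenty, Fisher's statistic reacts much more strongly than the sum of $p$-values, matching the $H_B$ recommendation above.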
In the specific application you mention it depends whether you think just some of the genes are involved or all of them. Since my knowledge of genetics stops more or less with Mendel I leave that up to you.
Birnbaum in his paper
"Combining independent tests of significance" available
here
points out the problem
is poorly sp | Test for significant excess of significant p-values across multiple comparisons
There are a number of methods for combining $p$-values which could be considered.
Birnbaum in his paper
"Combining independent tests of significance" available
here
points out the problem
is poorly specified.
This may
account for the number of methods available
and their differing behaviour.
The null hypothesis $H_0$ is well defined,
that all $p_i$ have a uniform distribution on the unit interval.
There are
two classes of alternative hypothesis
$H_A$: all $p_i$ have the same (unknown)
non--uniform, non--increasing density,
$H_B$:
at least one $p_i$ has an (unknown)
non--uniform, non--increasing density.
If all the tests being combined come from
what are basically replicates then $H_A$ is appropriate
whereas if they are of different kinds
of test or different conditions
then $H_B$ is appropriate.
Note that Birnbaum specifically considers the
possibility that the tests being combined may be
very different
for instance some tests of means, some of variances,
and so on.
Of the methods with an eponym Fisher's method
(sum of logs, sum of $\chi^2_2$) and Tippett's method
(minimum $p$) respond well when the alternative is $H_B$
whereas Stouffer's method (sum of $z$s) and Edgington's method
(sum of $p$) may be preferred when $H_A$ is the alternative of choice.
Loughin's extensive simulations "A systematic comparison of methods for combining $p$--values from independent tests" available here may also be of interest.
In the specific application you mention it depends whether you think just some of the genes are involved or all of them. Since my knowledge of genetics stops more or less with Mendel I leave that up to you. | Test for significant excess of significant p-values across multiple comparisons
There are a number of methods for combining $p$-values which could be considered.
Birnbaum in his paper
"Combining independent tests of significance" available
here
points out the problem
is poorly sp |
35,711 | Test for significant excess of significant p-values across multiple comparisons | About 10 years ago Bradley Efron wrote a number of papers on the subject. I think in one of them he also used the permutation approach, but the main idea was to estimate the null distribution from the data parametrically. You can find the corresponding R package instructions here.
35,712 | Does correlation imply mutual information? | Mutual information is zero if and only if $p(x,y) = p(x) p(y)$, and this condition implies that the correlation is zero. So, if the correlation is non-zero, then the mutual information must be non-zero.
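A tiny numeric illustration in Python (the 2×2 pmf is made up): a joint distribution with non-zero covariance necessarily has positive mutual information.

```python
import numpy as np

# a 2x2 joint pmf with dependence: mass concentrated on the diagonal
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])                    # P(X=i, Y=j) for i, j in {0, 1}
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

cov = pxy[1, 1] - px[1] * py[1]                 # covariance of the 0/1 variables
mi = float(np.sum(pxy * np.log(pxy / np.outer(px, py))))   # mutual information (nats)
```

Here the covariance is positive, so $p(x,y) \ne p(x)p(y)$ somewhere, and the mutual information comes out strictly positive.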
35,713 | Sum of squared Poisson probability masses | Don't hesitate to use WolframAlpha to get the sum of a series. Or do you need a mathematical proof?
This gives $\exp(-2\lambda)I_0(2\lambda)$.
The link to the documentation of the Bessel function $I_0$ is this one.
Actually the proof here just amounts to the series representation of $I_0$.
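Spelling out that step: using the series $I_0(x)=\sum_{k\ge 0}(x/2)^{2k}/(k!)^2$,
$$\sum_{k=0}^{\infty} \left(\frac{e^{-\lambda}\lambda^{k}}{k!}\right)^{2}
= e^{-2\lambda}\sum_{k=0}^{\infty}\frac{\lambda^{2k}}{(k!)^{2}}
= e^{-2\lambda}\sum_{k=0}^{\infty}\frac{\left(2\lambda/2\right)^{2k}}{(k!)^{2}}
= e^{-2\lambda}\,I_0(2\lambda).$$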
If you want to use R to evaluate this Bessel function, you can do it with the help of the gsl package:
> library(gsl)
> lambda <- 1
> exp(-2*lambda)*bessel_I0(2*lambda)
[1] 0.3085083
> sum(dpois(0:100, lambda)^2)
[1] 0.3085083
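The same check can be done without R's gsl package, e.g. in Python with the Bessel series written out directly (the truncation lengths are arbitrary but ample):

```python
import math

lam = 1.0

# left side: the sum of squared Poisson probabilities, truncated far into the tail
lhs = sum((math.exp(-lam) * lam**k / math.factorial(k))**2 for k in range(100))

# modified Bessel function of the first kind, order 0, via its power series
def bessel_i0(x, terms=60):
    return sum((x / 2)**(2 * k) / math.factorial(k)**2 for k in range(terms))

rhs = math.exp(-2 * lam) * bessel_i0(2 * lam)   # close to 0.3085083, as in the R output above
```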
This gives $\exp(-2\lambda)I_0(2\lambda)$.
The link to the documentation of the Bessel function $ | Sum of squared Poisson probability masses
Don't hesitate to use WolframAlpha to get the sum of a series. Or do you need a mathematical proof ?
This gives $\exp(-2\lambda)I_0(2\lambda)$.
The link to the documentation of the Bessel function $I_0$ is this one.
Actually the proof here would just mean the series representation of $I_0$.
If you want to use R to evaluate this Bessel function, you can do it with the help of the gsl package:
> library(gsl)
> lambda <- 1
> exp(-2*lambda)*bessel_I0(2*lambda)
[1] 0.3085083
> sum(dpois(0:100, lambda)^2)
[1] 0.3085083 | Sum of squared Poisson probability masses
Don't hesitate to use WolframAlpha to get the sum of a series. Or do you need a mathematical proof ?
This gives $\exp(-2\lambda)I_0(2\lambda)$.
The link to the documentation of the Bessel function $ |
35,714 | Relationship between correlation and linear dependency | whuber's much more detailed answer appeared while I was composing this answer of mine (which essentially uses the same argument).
Let $X$ and $Y$ denote two random variables with finite variances
$\sigma_X^2$ and $\sigma_Y^2$ respectively and correlation coefficient
$\rho = \pm 1$. Then,
\begin{align}\operatorname{var}(Y-aX)
&= \sigma_Y^2+ a^2\sigma_X^2 - 2a\cdot\operatorname{cov}(Y,X)
&\text{standard result}\\
&= \sigma_Y^2+ a^2\sigma_X^2 - 2a\rho\sigma_X\sigma_Y
&\text{substitute for}~\operatorname{cov}(Y,X)\\
&= \sigma_Y^2+ a^2\sigma_X^2 \mp 2a\sigma_X\sigma_Y
& \text{since}~ \rho = \pm 1\\
&= (\sigma_Y\mp a\sigma_X)^2\\
&= (\sigma_Y - a\rho\sigma_X)^2
& \text{keep remembering that}~ \rho = \pm 1\\
&= 0 &\text{if we choose}~ a = \rho\frac{\sigma_Y}{\sigma_X}.
\end{align}
Thus, if $\rho = \pm 1$, then $Y-\rho\frac{\sigma_Y}{\sigma_X}X$
is a random
variable whose variance is $0$, and so
$Y-\rho\frac{\sigma_Y}{\sigma_X}X$ is a constant
(almost surely). In other words, $Y = \alpha X + \beta$ (almost
surely) and thus $X$ and $Y$ are linearly related
(almost surely).
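A quick numeric sanity check of the conclusion (Python; the particular line $y=-1.5x+7$ is an arbitrary example): when $\rho=\pm1$, the slope $a=\rho\,\sigma_Y/\sigma_X$ recovered from the moments makes $Y-aX$ constant.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(2, 3, 1000)
y = -1.5 * x + 7                       # an exact linear relation

rho = np.corrcoef(x, y)[0, 1]          # equals -1 up to floating point
a = rho * y.std() / x.std()            # the slope recovered from the moments
resid = y - a * x                      # should be the constant beta (a.s.)
```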
Let $X$ and $Y$ denote two random variables with finite variances
$\si | Relationship between correlation and linear dependency
whuber's much more detailed answer appeared while I was composing this answer of mine (which essentially uses the same argument).
Let $X$ and $Y$ denote two random variables with finite variances
$\sigma_X^2$ and $\sigma_Y^2$ respectively and correlation coefficient
$\rho = \pm 1$. Then,
\begin{align}\operatorname{var}(Y-aX)
&= \sigma_Y^2+ a^2\sigma_X^2 - 2a\cdot\operatorname{cov}(Y,X)
&\text{standard result}\\
&= \sigma_Y^2+ a^2\sigma_X^2 - 2a\rho\sigma_X\sigma_Y
&\text{substitute for}~\operatorname{cov}(Y,X)\\
&= \sigma_Y^2+ a^2\sigma_X^2 \mp 2a\sigma_X\sigma_Y
& \text{since}~ \rho = \pm 1\\
&= (\sigma_Y\mp a\sigma_X)^2\\
&= (\sigma_Y - a\rho\sigma_X)^2
& \text{keep remembering that}~ \rho = \pm 1\\
&= 0 &\text{if we choose}~ a = \rho\frac{\sigma_Y}{\sigma_X}.
\end{align}
Thus, if $\rho = \pm 1$, then $Y-\rho\frac{\sigma_Y}{\sigma_X}X$
is a random
variable whose variance is $0$, and so
$Y-\rho\frac{\sigma_Y}{\sigma_X}X$ is a constant
(almost surely). In other words, $Y = \alpha X + \beta$ (almost
surely) and thus $X$ and $Y$ are linearly related
(almost surely). | Relationship between correlation and linear dependency
whuber's much more detailed answer appeared while I was composing this answer of mine (which essentially uses the same argument).
Let $X$ and $Y$ denote two random variables with finite variances
$\si |
35,715 | Relationship between correlation and linear dependency | Let the pair of (random) variables be $(X_1,X_2)$. Since their correlation coefficient exists, each has a finite variance $\sigma^2_i$ and a finite mean $\mu_i$. The standardized variables are $Z_i = (X_i - \mu_i)/\sigma_i$. In particular their second moments are unity:
$$\mathbb{E}(Z_i^2) = 1.$$
By definition, the correlation
$$\rho = \rho(X_1,X_2) = \mathbb{E}(Z_1Z_2)$$
is the expected product of the standardized variables. Since for any real numbers $x$, $y$, and $\rho$ it is the case that
$$2\rho x y = x^2 + \rho^2 y^2 - (x-\rho y)^2,$$
and expectation is linear, when $|\rho| = 1$ (so that $\rho^2 = 1$) we may compute
$$\eqalign{
0 &= 2-2\rho^2 = 2 - 2\rho\, \mathbb{E}(Z_1Z_2) = 2- \mathbb{E}(2\rho Z_1Z_2) \\
&= 2-\mathbb{E}\left(Z_1^2 + \rho^2 Z_2^2- (Z_1 - \rho Z_2)^2\right) \\
&= 2-\left(1 + 1 - \mathbb{E}((Z_1 - \rho Z_2)^2)\right) \\
&= \mathbb{E}((Z_1 - \rho Z_2)^2).
}$$
Now--for the first time--we need to invoke a basic result about random variables: because $(Z_1 - \rho Z_2)^2$ has zero expectation, $Z_1 - \rho Z_2 = 0$ almost surely. (A direct application of Chebyshev's Inequality will prove this.)
Unraveling the algebra to re-express the $Z_i$ in terms of the $X_i$, we find
$$\sigma_2 X_1 - \rho \sigma_1 X_2 + (\rho \sigma_1 \mu_2 - \sigma_2 \mu_1) = 0$$
almost surely. This is a linear relation between $X_1$ and $X_2$. It is not quite the result requested in the question, because it involves an additive constant $\rho \sigma_1 \mu_2 - \sigma_2 \mu_1$, whereas linear combinations do not. The constant is unavoidable: for instance, $X_1$ and $X_1+1$ have unit correlation but $X_1+1$ is not a linear combination of $X_1$.
What we have proven to be true is
When $|\rho(X_1,X_2)|=1$, $X_1$ and $X_2$ are linearly related almost surely. The coefficients in the relation are universal algebraic combinations of the first two moments of the variables.
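The derived relation, including its additive constant, can be checked numerically (Python; the Gamma marginal and the line $4x_1-3$ are arbitrary choices for the check):

```python
import numpy as np

rng = np.random.default_rng(5)
x1 = rng.gamma(2.0, 1.5, 2000)
x2 = 4.0 * x1 - 3.0                    # linearly related, so rho = 1

m1, m2 = x1.mean(), x2.mean()
s1, s2 = x1.std(), x2.std()
rho = np.corrcoef(x1, x2)[0, 1]

# the relation derived above: s2*X1 - rho*s1*X2 + (rho*s1*m2 - s2*m1) = 0 a.s.
lhs = s2 * x1 - rho * s1 * x2 + (rho * s1 * m2 - s2 * m1)
```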
35,716 | Data visualization for missing data | I would honestly simply leave data points without information empty. In R:
foo <- structure(c(10,NA,NA,67,38),.Names=paste0("Day",1:5))
plot(foo,xaxt="n",xlab="",ylab="",pch=19,type="o",
ylim=c(0,max(foo,na.rm=TRUE)))
axis(1,seq_along(foo),names(foo))
Anything else is defensible if it reflects information you have about your data. For instance, if your database recorded sales and your store was open on days 2 & 3, but nobody wanted to buy your widgets, then you can validly infer and plot zeros. (If the store was closed or you were out of stock in widgets, you should not, since any demand could not have been satisfied.)
You could linearly interpolate if this is a "good guess" at what "really" happened during the periods with no data. Of course, what is a "good guess" will depend on your specific situation.
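If you do choose linear interpolation for the gaps, the fill is a one-liner in, e.g., Python (the day/value numbers echo the toy series above; this is an analogue, not part of the original R answer):

```python
import numpy as np

y = np.array([10, np.nan, np.nan, 67, 38], dtype=float)   # Day1..Day5, two gaps
days = np.arange(1, 6)

known = ~np.isnan(y)
filled = np.interp(days, days[known], y[known])   # linear fill: days 2-3 become 29, 48
```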
I would not use splines, unless I had a very good reason. Linear interpolation is simpler, and one should always use a simpler approach unless a more complex one like splines is warranted (Occam's razor). Plus, higher-order splines can explode, depending on your specific data.
foo <- structure(c(10,NA,NA,67,38),.Names=paste0("Day",1:5))
plot(foo,xaxt="n",xlab="",ylab="",pch=19,type="o",
ylim=c(0,ma | Data visualization for missing data
I would honestly simply leave data points without information empty. In R:
foo <- structure(c(10,NA,NA,67,38),.Names=paste0("Day",1:5))
plot(foo,xaxt="n",xlab="",ylab="",pch=19,type="o",
ylim=c(0,max(foo,na.rm=TRUE)))
axis(1,seq_along(foo),names(foo))
Anything else is defensible if it reflects information you have about your data. For instance, if your database recorded sales and your store was open on days 2 & 3, but nobody wanted to buy your widgets, then you can validly infer and plot zeros. (If the store was closed or you were out of stock in widgets, you should not, since any demand could not have been satisfied.)
You could linearly interpolate if this is a "good guess" at what "really" happened during the periods with no data. Of course, what is a "good guess" will depend on your specific situation.
I would not use splines, unless I had a very good reason. Linear interpolation is simpler, and one should always use a simpler approach unless a more complex one like splines is warranted (Occam's razor). Plus, higher-order splines can explode, depending on your specific data. | Data visualization for missing data
I would honestly simply leave data points without information empty. In R:
foo <- structure(c(10,NA,NA,67,38),.Names=paste0("Day",1:5))
plot(foo,xaxt="n",xlab="",ylab="",pch=19,type="o",
ylim=c(0,ma |
35,717 | Fit regression model from a fan-shaped relation, in R | Here are two fan-shaped plots generated by different methods:
(Click here for a larger version.)
These in turn suggest two different approaches for modelling data that looks more or less like this:
Take logs, and fit a linear model with the coefficient on $\log x$ restricted to 1 (also called an offset)
divide $y$ by $x$ and then fit a constant-only model.
There will be other ways to generate data like this, and other ways to fit data like this. For example, some other possibilities are:
fit a gamma glm with identity link (and perhaps without an intercept)
since the variance is proportional to $x^2$, use this fact to construct a weighted regression using weights proportional to $1/x^2$. [For a simple straight line through the origin, this should give the same result as 2.]
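A minimal numeric sketch of approaches 1 and 2 (Python; multiplicative log-normal noise is assumed purely to produce the fan shape, and the two estimators target slightly different quantities under that noise):

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(0.5, 5.0, 10000)
y = 2.0 * x * np.exp(rng.normal(0, 0.3, x.size))   # multiplicative noise -> fan shape

# approach 1: take logs, with the coefficient on log(x) fixed at 1 (an offset);
# the intercept estimate is then just the mean of log(y) - log(x)
c_hat = np.mean(np.log(y) - np.log(x))             # estimates log(2)

# approach 2: divide y by x and fit a constant-only model
ratio_hat = np.mean(y / x)                          # estimates 2 * E[exp(noise)] > 2
```

Under multiplicative noise the log-scale fit recovers $\log 2$ directly, while the ratio mean picks up the extra factor $E[e^{\varepsilon}]$ — one reason the choice between these models matters in practice.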
--
[AndyW's comment about a possible missing covariate is important. However, I'm just going to deal with the question of modelling fan-shaped relationships since it's an interesting topic on its own; in practice you would want to investigate his suggestion that there appear to be potential missing covariates as well.]
(Click here for a larger version.)
These in turn suggest two different approaches for modelling data that looks more or less like this:
Ta | Fit regression model from a fan-shaped relation, in R
Here's two fan-shaped plots generated by different methods:
(Click here for a larger version.)
These in turn suggest two different approaches for modelling data that looks more or less like this:
Take logs, and fit a linear model with the coefficient restricted to 1 (also called an offset)
divide $y$ by $x$ and then fit a constant-only model.
There will be other ways to generate data like this, and other ways to fit data like this. For example, some other possibilities are:
fit a gamma glm with identity link (and perhaps without an intercept)
since the variance is proportional to $x^2$, use this fact to construct a weighted regression using weights proportional to $1/x^2$. [For a simple straight line through the origin, this should give the same result as 2.]
--
[AndyW's comment about a possible missing covariate is important. However, I'm just going to deal with the question of modelling fan-shaped relationships since it's an interesting topic on its own; in practice you would want to investigate his suggestion that there appears to be potential missing covariates as well.] | Fit regression model from a fan-shaped relation, in R
Here's two fan-shaped plots generated by different methods:
(Click here for a larger version.)
These in turn suggest two different approaches for modelling data that looks more or less like this:
Ta |
35,718 | Simulate from a dynamic mixture of distributions | As pointed out by Dougal, the shape of your target density $$h_\beta(r)\propto (1-w_{m,\tau}(r))f_{\beta_0}(r)+w_{m,\tau}(r) g_{\epsilon,\sigma}(r)
$$ is open to accept-reject simulation since
$$(1-w_{m,\tau}(r))f_{\beta_0}(r)+w_{m,\tau}(r) g_{\epsilon,\sigma}(r)
\le f_{\beta_0}(r)+g_{\epsilon,\sigma}(r)=2\left\{\frac{1}{2}f_{\beta_0}(r)+\frac{1}{2}g_{\epsilon,\sigma}(r)\right\}
$$
Therefore simulating from the even mixture of Pareto $f_{\beta_0}$ and Gamma $g_{\epsilon,\sigma}$ and accepting with probability
$$\dfrac{(1-w_{m,\tau}(r))f_{\beta_0}(r)+w_{m,\tau}(r) g_{\epsilon,\sigma}(r)}{f_{\beta_0}(r)+g_{\epsilon,\sigma}(r)}$$
would return you an exact output from your target density.
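Here is that accept-reject recipe in Python, with a Weibull(2) body, a Gamma(2,1) tail, and a Cauchy-cdf weight standing in for $f_{\beta_0}$, $g_{\epsilon,\sigma}$ and $w_{m,\tau}$ (illustrative choices, not the exact densities of the question):

```python
import numpy as np

rng = np.random.default_rng(7)
m, tau = 1.0, 1.0

w = lambda r: 0.5 + np.arctan((r - m) / tau) / np.pi   # Cauchy-cdf weight in (0, 1)
f = lambda r: 2 * r * np.exp(-r**2)                    # Weibull(2) "body" density
g = lambda r: r * np.exp(-r)                           # Gamma(2, 1) "tail" density

n = 200_000
comp = rng.random(n) < 0.5
r = np.where(comp, rng.weibull(2, n), rng.gamma(2, 1.0, n))   # even-mixture proposal
ratio = ((1 - w(r)) * f(r) + w(r) * g(r)) / (f(r) + g(r))     # target / (2 * proposal)
sample = r[rng.random(n) < ratio]                              # accepted draws

# self-consistency: the accepted sample's mean should match an
# importance-sampling estimate of the target mean from the same proposal draws
mean_ar = sample.mean()
mean_is = (r * ratio).sum() / ratio.sum()
```

The acceptance probability is always in $[0,1]$ by the bound above, so no envelope constant has to be tuned.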
Note that the original paper by Frigessi et al. does include a way to simulate from the dynamic mixture on page 6: with probability $1/2$ simulate from $f_{\beta_0}$ and with probability $1/2$ from $g_{\epsilon,\sigma}$ [which is equivalent to simulating from the even mixture] and accept the outcome with probability $1-w_{m,\tau}(r)$ in the first case and $w_{m,\tau}(r)$ in the second case. It is unclear which one of those approaches has the highest average acceptance rate.
Here is a small experiment that shows the acceptance rates are comparable:
#Frigessi et al example
#(assumes a package providing dgpd/rgpd with (xi, beta) arguments, e.g. evir)
beta=2
lambda=gamma(1.5)
mu=tau=1
xi=.5
sigma=1
#the target is the dynamic mixture of a Weibull body and a GPD tail
target=function(x)
  (1-pcauchy((x-mu)/tau))*dweibull(x,shape=beta,scale=1/lambda)+pcauchy((x-mu)/tau)*dgpd(x,xi=xi,beta=sigma)[1]
T=1e4
u=sample(c(0,1),T,rep=TRUE)
x=u*rweibull(T,shape=beta,scale=1/lambda)+(1-u)*rgpd(T,xi=xi,beta=sigma)
#AR 1
ace1=mean(runif(T)<(u*(1-pcauchy((x-mu)/tau))+(1-u)*pcauchy((x-mu)/tau)))
#AR 2
ace2=mean(runif(T)<target(x)/(dweibull(x,shape=beta,scale=1/lambda)+dgpd(x,xi=xi,beta=sigma)[1]))
with
> ace1
[1] 0.5173
> ace2
[1] 0.5473
An alternative is to use a Metropolis-Hastings algorithm. For instance, at each iteration of the Markov chain,
pick the Pareto against the Gamma components with probabilities $1-w_{m,\tau}(x^{t-1})$ and $w_{m,\tau}(x^{t-1})$;
Generate a value $y$ from the chosen component;
Accept the value $y$ as $x^t=y$ with probability
$$\dfrac{(1-w_{m,\tau}(y))f_{\beta_0}(y)+w_{m,\tau}(y) g_{\epsilon,\sigma}(y)}{(1-w_{m,\tau}(x^{t-1}))f_{\beta_0}(x^{t-1})+w_{m,\tau}(x^{t-1}) g_{\epsilon,\sigma}(x^{t-1})}$$
$$\times\dfrac{(1-w_{m,\tau}(y))f_{\beta_0}(x^{t-1})+w_{m,\tau}(y) g_{\epsilon,\sigma}(x^{t-1})}{(1-w_{m,\tau}(x^{t-1}))f_{\beta_0}(y)+w_{m,\tau}(x^{t-1}) g_{\epsilon,\sigma}(y)}$$
otherwise take $x^t=x^{t-1}$
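The three steps translate directly into any language; a Python sketch with illustrative stand-ins (Weibull(2) for $f$, Gamma(2,1) for $g$, a Cauchy cdf for $w$ — not the question's densities):

```python
import numpy as np

rng = np.random.default_rng(8)
m, tau = 1.0, 1.0
w = lambda r: 0.5 + np.arctan((r - m) / tau) / np.pi    # dynamic weight (Cauchy cdf)
f = lambda r: 2 * r * np.exp(-r**2)                     # stand-in for f
g = lambda r: r * np.exp(-r)                            # stand-in for g
target = lambda r: (1 - w(r)) * f(r) + w(r) * g(r)
qdens = lambda r, c: (1 - w(c)) * f(r) + w(c) * g(r)    # proposal density q(r | c)

T = 50_000
x = np.empty(T)
x[0] = rng.gamma(2, 1.0)
for t in range(1, T):
    prev = x[t - 1]
    # steps 1-2: pick a component with probability w(prev), then draw from it
    y = rng.gamma(2, 1.0) if rng.random() < w(prev) else rng.weibull(2)
    # step 3: Metropolis-Hastings acceptance ratio from the display above
    ratio = target(y) * qdens(prev, y) / (target(prev) * qdens(y, prev))
    x[t] = y if rng.random() < ratio else prev
```

Because the proposal already resembles the target, most moves are accepted, mirroring the high acceptance rate reported for the R version below.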
The corresponding R code is straightforward
#MCMC style
propose=function(x,y){
  #Metropolis-Hastings ratio for moving from x to y
  target(y)*(pcauchy((y-mu)/tau,lower.tail=FALSE)*dweibull(x,shape=beta,scale=1/lambda)+pcauchy((y-mu)/tau)*dgpd(x,xi=xi,beta=sigma)[1:length(x)])/
  (target(x)*(pcauchy((x-mu)/tau,lower.tail=FALSE)*dweibull(y,shape=beta,scale=1/lambda)+pcauchy((x-mu)/tau)*dgpd(y,xi=xi,beta=sigma)[1:length(x)]))}
x=numeric(T)
x[1]=rgpd(1,xi=xi,beta=sigma)
for (t in 2:T){
  #proposal: pick a component with the dynamic weight at the current state
  x[t]=rweibull(1,shape=beta,scale=1/lambda)
  if (runif(1)<pcauchy((x[t-1]-mu)/tau)) x[t]=rgpd(1,xi=xi,beta=sigma)
  #acceptance
  if (runif(1)>propose(x[t-1],x[t])) x[t]=x[t-1]}
ace3=length(unique(x))/T
ace3=length(unique(x))/T
and gives a higher acceptance rate
> ace3
[1] 0.877
While the fit is identical to the density estimate obtained by accept-reject:
[Red curve for the accept-reject sample and blue curve for the MCMC sample, both based on 10⁴ original simulations]
$$is open to accept-reject simulation since
$$(1-w_{m | Simulate from a dynamic mixture of distributions
35,719 | Simulate from a dynamic mixture of distributions
I suggest that you investigate Probabilistic Programming. Basically, you write programs that operate on probability distributions rather than (random) variables. This makes it easy to define functions that are dynamic mixtures of distributions. While full-featured Probabilistic Programming systems support Bayesian statistical inference, your application doesn't really need it. Primarily you just need the programming language and the Markov Chain Monte Carlo (MCMC) solvers to generate samples from the distribution defined by your program.
Good introductions and list of resources can be found in the following:
http://radar.oreilly.com/2013/04/probabilistic-programming.html
http://probabilistic-programming.org/wiki/Home
Good tutorials on Probabilistic Programming can be found at the Church web site, especially the second, "Simple Generative Models":
http://projects.csail.mit.edu/church/wiki/Cornell_tutorial
http://projects.csail.mit.edu/church/wiki/Simple_Generative_Models
Finally, investigate STAN as an implementation platform. There are versions for R and Python. Again, you just need the generative model capabilities.
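To illustrate the generative-model point without any PP system, here is a hand-rolled sketch in plain Python/NumPy of a dynamic mixture written as a sampling program (the Weibull/GPD components and their parameter values are illustrative assumptions, borrowed from the example elsewhere in this thread):

```python
import numpy as np

rng = np.random.default_rng(42)
mu = tau = 1.0                  # parameters of the Cauchy-cdf weight
xi, sig = 0.5, 1.0              # GPD shape and scale
shape, scale = 2.0, 1 / 0.8862  # Weibull shape and scale (1/Gamma(1.5))

def cauchy_cdf(z):
    return 0.5 + np.arctan(z) / np.pi

def draw():
    """One draw from the dynamic mixture, expressed as a generative
    program: propose a component at random, thin by the dynamic weight."""
    while True:
        if rng.random() < 0.5:                                # GPD branch
            x = sig / xi * ((1 - rng.random()) ** (-xi) - 1)  # GPD quantile transform
            if rng.random() < cauchy_cdf((x - mu) / tau):
                return x
        else:                                                 # Weibull branch
            x = scale * rng.weibull(shape)
            if rng.random() < 1 - cauchy_cdf((x - mu) / tau):
                return x

draws = np.array([draw() for _ in range(2000)])
```

A real PP system lets you write essentially this program and then hands conditioning and inference to generic samplers.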
35,720 | Why does Lucene IDF have a seemingly additional +1?
All TF-IDF weighting schemes are just heuristic methods to give more weight to unusual terms. I'm not sure that TF-IDF schemes generally have a solid statistical basis behind them (see reference 1), except for the observation that TF-IDF tends to produce better results than simple word counts. Since the quality of the results is the primary (sole?) justification for TF-IDF in the first place, one could argue that trying your method with and without +1 and picking the best one would be fine.
If I'm reading this scikit-learn thread correctly, it appears that you are not the first person to raise a similar question about adding 1 to IDF scores. The consensus on that thread is that +1 is nonstandard behavior as well. I only skimmed it, but the thread does not appear to contain a resounding endorsement or justification of +1.
So the choice of +1 has the effect of placing the lower bound on all IDF values at 1 rather than at 0; since $1+\log(x)=\log(ex)$, it is the same as multiplying the ratio inside the logarithm by $e$. Not sure why that might be helpful, but perhaps it is in specific contexts. One might even treat some parameter $c$ in $c+\log\left(\frac{\text{numDocs}}{\text{docFreq}+1}\right)$ as a tuning parameter, to give you a more flexible family of IDF schemes with $c$ as their lower bound.
When the lower bound of IDF is zero, the product $\text{term frequency}\times\text{IDF}$ may be 0 for some terms, so that those terms are given no weight at all in the learning procedure; qualitatively, the terms are so common that they provide no information relevant to the NLP task. When the lower-bound is nonzero, these terms will have more influence.
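As a quick numeric illustration (plain Python, not Lucene's actual code), compare $\log\frac{N}{df+1}$ with the +1 variant discussed above for a hypothetical corpus of $N=1000$ documents:

```python
import math

N = 1000                                   # hypothetical corpus size

def idf_plain(num_docs, doc_freq):
    return math.log(num_docs / (doc_freq + 1))

def idf_plus1(num_docs, doc_freq):
    # the +1 variant discussed above
    return 1.0 + math.log(num_docs / (doc_freq + 1))

for df in (0, 9, 99, 999):
    print(df, round(idf_plain(N, df), 3), round(idf_plus1(N, df), 3))
# a term appearing in (nearly) every document gets IDF 0 without the +1,
# but IDF 1 with it, so it retains some weight in tf-idf products
```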
John Lafferty and Guy Lebanon. "Diffusion Kernels on Statistical Manifolds." Journal of Machine Learning Research. 2005.
35,721 | Who conjectured that every correlation is caused by causal mechanisms?
This conjecture is called Reichenbach's Principle of Common Cause (RPCC), as it was first made precise by Hans Reichenbach (in 1956; imprecise versions have been around for much longer). The Stanford Encyclopedia of Philosophy has a good discussion and plenty of references.
Tangent: A friend recently asked me this exact question, and in addition, whether there were any counterexamples to the principle. The counterexamples that I'm aware of are: (1) selection bias, (2) logical or part-whole dependence, and (3) temporal trends.
Example of selection bias: college students get admitted if they are EITHER smart OR good at football. This induces a negative correlation between football skills and intelligence within the college population that does not exist in the general population. The selection process, Selection Into College, is a common child of Intelligence and Football Skill rather than a common cause. It induces a dependence because we always implicitly condition on the selected population, and conditioning on a variable in a causal model induces a dependence between its parents.
Example of logical dependence: x and log(x) are correlated. Example of part-whole dependence: my income in the first quarter of the year and the whole year are correlated. Neither of these examples has a well-defined causal model. In an interventionist theory of causation, for a set of variables to have a well-defined causal model, it must be logically possible to intervene on each variable individually without necessarily intervening on others. These could arguably count as cases of causation, if one were to extend the concept of a causal model (in which case they might not be counterexamples to RPCC).
Example of temporal trends: sea levels in Venice and the price of bread in London are both going up, because they are both part of temporal processes that are trending upwards, so they correlate over time. Adjusted for the temporal trend, they don't correlate, reflecting the fact that neither is causally related to the other.
35,722 | How to use Random Forest for categorical variables with missing value
Off the top of my head, I would say that this shouldn't be an issue. The randomForest package in R implements random forests using CARTs. One of the nicest things about trees is how they are "natively" capable of dealing with categorical and missing variables. Here is the package documentation; you can download the package itself from CRAN.
Chapter 8 in James, Witten, Hastie, & Tibshirani's Introduction to Statistical Learning with Applications in R offers a good introduction to tree methods and also covers random forests on page 328.
Imputing missing variables is a whole thing in and of itself and, depending on your needs and data, you might be able to get away with not having to do it. If you do have to perform imputation you might want to check here and here for some quick pointers, but you're probably just going to have to read up on imputation methods and make a judgement call on what to go with.
35,723 | How to use Random Forest for categorical variables with missing value
The R randomForest package includes functions for doing a rough imputation of missing values and then iteratively improving this imputation based on case proximity in RF runs.
There are a bunch of other methods that have been proposed as ways RFs and decision trees can handle missing values:
1) Leave them out when splitting and do a bias correction for the reduction in impurity.
2) Split them onto a third branch at each node.
3) Label them as a separate category as chf suggests. For numerical features impute and create a separate x_is_missing feature.
4) Identify "surrogate splitter" relationships between features by analyzing which features work well in the same place and then use a surrogate to split when a feature is missing.
5) Do a local imputation within the branch of the tree.
I'm not aware of R code for most of these though it may exist.
I implemented a stand alone utility that can do the first two methods:
https://github.com/ryanbressler/CloudForest
It is easy enough to use write.arff to dump your data out and call it and load the predictions (which are stored in a tsv) back in. (The arff file format is nice for categorical data with missing values.)
I chose those two methods as they don't increase the computation required on large data sets. I've found the first works well when there are few missing values and they aren't meaningfully distributed...imputation often also works well here.
The second, three-way splitting, works well when the fact a value is missing may be significant. This is quite common in poorly designed surveys that don't include a "don't know" or "not applicable" category. Method 3 can also work well here.
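A minimal sketch of method 3 from the list above (a separate category for categoricals plus a missing-value indicator for numerics), using pandas on hypothetical toy data rather than the arff workflow:

```python
import numpy as np
import pandas as pd

# hypothetical toy data with one categorical and one numeric feature
df = pd.DataFrame({
    "color": ["red", None, "blue", "red"],
    "age":   [34.0, np.nan, 51.0, 28.0],
})

# categorical feature: make "missing" its own level
df["color"] = df["color"].fillna("missing")

# numeric feature: add an indicator, then impute (median here)
df["age_is_missing"] = df["age"].isna().astype(int)
df["age"] = df["age"].fillna(df["age"].median())
print(df)
```

A forest trained on these columns can then split on the indicator whenever missingness itself is informative.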
There are a bunch of ot | How to use Random Forest for categorical variables with missing value
The R randomForest package includes functions for doing a rough imputation of missing values and then iterativelly improving this imputation based on case proximity in RF runs.
There are a bunch of other methods that have been proposed as ways rf's and decision trees can handle missing values:
1) Leave them out when split and do a bias correction for the reduction in impurity.
2) Split them onto a a third branch at each node.
3) Label them as a separate category as chf suggests. For numerical features impute and create a separate x_is_missing feature.
4) Identify "surrogate splitter" relationships between features by analyzing which features work well in the same place and then use a surrogate to split when a feature is missing.
5) Do a local imputation within the branch of the tree.
I'm not aware of R code for most of these though it may exist.
I implemented a stand alone utility that can do the first two methods:
https://github.com/ryanbressler/CloudForest
It is easy enough to use write.arff to dump you're data out and call it and load the predictions (which are stored in a tsv) back in. (The arff file format is nice for categorical data with missing values).
I chose those two methods as they don't increase the computation required on large data sets. I've found the first works well when there are few missing values and they aren't meaningfully distributed...imputation often also works well here.
The second, three way splitting, works well when the fact a value is missing may be significant. This is quite common in poorly designed survey's that don't include a "don't know" or "not applicable" category. Method 3 can also work well here. | How to use Random Forest for categorical variables with missing value
The R randomForest package includes functions for doing a rough imputation of missing values and then iterativelly improving this imputation based on case proximity in RF runs.
There are a bunch of ot |
35,724 | How to use Random Forest for categorical variables with missing value
You can simply introduce a new level for each categorical variable which represents missing data. Then you would simply replace the missing fields with this new category.
35,725 | De-normalizing Google Trends data?
Since the normalization consists in
$$ \mathbf{z} = \frac{\mathbf{x}}{\max(\mathbf{x})}, $$
where $\mathbf{x}$ is the vector of search volumes, and $\max(\mathbf{x})$ is the maximal element of $\mathbf{x}$, if you want de-normalized data, you should multiply each element of the normalized vector times the maximal element of $\mathbf{x}$:
$$ \mathbf{x} = \mathbf{z} \times \max(\mathbf{x}). $$
Unfortunately, if you don't know the value of $\max(\mathbf{x})$ you can't de-normalize your data.
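If $\max(\mathbf{x})$ is known from some external source, the de-normalization is a single multiplication; a sketch in Python with made-up numbers (the 1.8 million peak volume is an assumption for illustration):

```python
import numpy as np

z = np.array([0.12, 0.47, 1.00, 0.83])   # normalized series, peak scaled to 1
x_max = 1.8e6                            # assumed known peak search volume
x = z * x_max                            # de-normalized search volumes
print(x)
```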
35,726 | De-normalizing Google Trends data?
De-normalizing Google Trends data can be very useful, but it is tricky, due to rounding errors: when comparing 2 queries of vastly different search volume, the time series for the less frequent query could appear to be 0 everywhere.
To solve this problem, we have developed a method called Google Trends Anchor Bank. It's available here: https://github.com/epfl-dlab/GoogleTrendsAnchorBank
A technical paper describing the method is available here: https://arxiv.org/abs/2007.13861
35,727 | Approaches for generating synthetic survey data with dependent answers?
Here I use a latent variable approach. This readily extends to the continuous/categorical case.
The idea is to treat a continuous variable (the latent variable) as lying behind the ordered categories that are actually observed (by splitting up the continuous variable at breakpoints).
So for the two variables that are independent, we define breakpoints that give the desired proportions in each category. Then the third continuous variable, correlated with the other two, is also split up in similar fashion. It's common to use standardized normal variables for the latent variables, but other distributions could be used.
The example below is in R but I have annotated it to help conversion to other platforms.
set.seed(10345) # just to make sure if you run this we have the same results
xu=rnorm(50) # draw 50 observations from continuous latent variables
yu=rnorm(50) #
zu= 0.8*xu+0.6*yu # the latent variables have correlations 0 between x and y,
# 0.8 between x and z, and 0.6 between y and z
cor(cbind(xu,yu,zu)) # sample correlations will be similar to those population values
px=c(.3,.2,.5) # our selected population proportions in the marginal categories
py=c(.1,.2,.4,.3)
pz=c(.1,.2,.4,.2,.1)
xc=cut(xu,qnorm(cumsum(c(0,px))),labels=c("AI","AII","AIII")) # convert to ord. categ.
yc=cut(yu,qnorm(cumsum(c(0,py))),labels=LETTERS[1:4])
zc=cut(zu,qnorm(cumsum(c(0,pz))),labels=letters[1:5])
Now let's see the relationships between variables:
table(xc,yc) #examine the resulting data. xc,yc populations are independent
yc
xc A B C D
AI 1 7 9 2
AII 0 4 11 7
AIII 2 5 18 14
> table(xc,zc) #xc,zc dependent
zc
xc a b c d e
AI 4 11 4 0 0
AII 0 2 19 1 0
AIII 0 1 18 12 8
> table(yc,zc) #yc,zc dependent
zc
yc a b c d e
A 1 1 1 0 0
B 2 7 5 1 1
C 1 5 27 5 0
D 0 1 8 7 7
How correlations between the latent variables work.
I chose $X_u$ and $Y_u$ ($u$ for "underlying"; I'd have put $l$ for "latent", but it tends to look like a "1") to be two independent standard normal variates. You can make them correlated with a third variate, $Z_u$, by making $Z_u$ a linear combination of $X_u$, $Y_u$, and an independent noise variate $\epsilon$, which we'll also take to be standard normal here.
If we write $Z^*=aX_u+bY_u+c\epsilon$ then $Z^*$ is normal, but not standard normal.
$\text{Cov}(Z^*,X_u)=\text{Cov}(aX_u+bY_u+c\epsilon,X_u)=a\,\sigma^2_{X_u}=a$
Similarly $\text{Cov}(Z^*,Y_u)=b$ and $\text{Cov}(Z^*,\epsilon)=c$.
$\text{Var}(Z^*)=a^2+b^2+c^2$
So $\text{Cor}(Z^*,X_u)=\frac{a}{\sqrt{a^2+b^2+c^2}}$ and $\text{Cor}(Z^*,Y_u)=\frac{b}{\sqrt{a^2+b^2+c^2}}$.
But I want $Z_u$ to have variance $1$, so if we define $Z_u=\frac{Z^*}{\sqrt{a^2+b^2+c^2}}$ then
$\text{Var}(Z_u)=\frac{a^2+b^2+c^2}{a^2+b^2+c^2}=1$
In the example, I chose $a=0.8,b=0.6,c=0$, which has $a^2+b^2+c^2=1$ and in that case $Z_u=Z^*$, and we have $\text{Cor}(Z_u,X_u)=a=0.8$ and $\text{Cor}(Z_u,Y_u)=b=0.6$.
If you choose to have $\text{Cor}(Z_u,X_u)=\rho\,,$ then $-\sqrt{1-\rho^2}\leq\text{Cor}(Z_u,Y_u)\leq \sqrt{1-\rho^2}$ (with the limits being achieved when $c=0$).
Note that these are population correlations, not sample correlations.
In the example you mention in comments, $a=b=\frac{1}{2}$, and $c=0$ which gives $\text{Cor}(Z^*,X_u)=\frac{a}{\sqrt{a^2+b^2+c^2}}=\frac{1/2}{\sqrt{(1/2)^2+(1/2)^2}}=\sqrt{\frac{1}{2}}\approx 0.7071$
-- but now to make $Z_u$ standard normal we need to divide through by
$\sqrt{a^2+b^2+c^2}=\sqrt{\frac{1}{2}}$, i.e.
$Z_u=Z^*/\sqrt{\frac{1}{2}}=\sqrt{2}Z^*$.
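As a numeric check of the construction above, here is a sketch in Python/NumPy mirroring the R example (same coefficients, and the same proportions for the first variable; the larger sample size just makes the population values visible):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10345)
n = 100_000
xu = rng.standard_normal(n)
yu = rng.standard_normal(n)
zu = 0.8 * xu + 0.6 * yu    # Var(zu) = 0.64 + 0.36 = 1; Cor(zu,xu)=0.8, Cor(zu,yu)=0.6

# cut a latent variable at normal quantiles so categories hit target proportions
px = [0.3, 0.2, 0.5]
breaks = stats.norm.ppf(np.cumsum([0.0] + px))   # [-inf, -0.524, 0.0, inf]
xc = np.digitize(xu, breaks[1:-1])               # ordinal categories 0, 1, 2

print(np.corrcoef(zu, xu)[0, 1])                 # close to 0.8
print(np.bincount(xc) / n)                       # close to [0.3, 0.2, 0.5]
```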
The idea is to treat a continuous variable (the latent variable) as laying behind the ordered categories | Approaches for generating synthetic survey data with dependent answers?
Here I use a latent variable approach. This readily extends to the continuous/categorical case.
The idea is to treat a continuous variable (the latent variable) as laying behind the ordered categories that are actually observed (by splitting up the continuous variable at breakpoints).
So for the two variables that are independent, we define breakpoints that give the desired proportions in each category. Then the third continuous variable, correlated with the other two, is also split up in similar fashion. It's common to use standardized normal variables for the latent variables, but other distributions could be used.
The example below is in R but I have annotated it to help conversion to other platforms.
set.seed(10345) # just to make sure if you run this we have the same results
xu=rnorm(50) # draw 50 observations from continuous latent variables
yu=rnorm(50) #
zu= 0.8*xu+0.6*yu # the latent variables have correlations 0 between x and y,
# 0.8 between x and z, and 0.6 between y and z
cor(cbind(xu,yu,zu)) # sample correlations will be similar to those population values
px=c(.3,.2,.5) # our selected population proportions in the marginal categories
py=c(.1,.2,.4,.3)
pz=c(.1,.2,.4,.2,.1)
xc=cut(xu,qnorm(cumsum(c(0,px))),labels=c("AI","AII","AIII")) # convert to ord. categ.
yc=cut(yu,qnorm(cumsum(c(0,py))),labels=LETTERS[1:4])
zc=cut(zu,qnorm(cumsum(c(0,pz))),labels=letters[1:5])
Now let's see the relationships between variables:
table(xc,yc) #examine the resulting data. xc,yc populations are independent
yc
xc A B C D
AI 1 7 9 2
AII 0 4 11 7
AIII 2 5 18 14
> table(xc,zc) #xc,zc dependent
zc
xc a b c d e
AI 4 11 4 0 0
AII 0 2 19 1 0
AIII 0 1 18 12 8
> table(yc,zc) #yc,zc dependent
zc
yc a b c d e
A 1 1 1 0 0
B 2 7 5 1 1
C 1 5 27 5 0
D 0 1 8 7 7
How correlations between the latent variables work.
I chose $X_u$ and $Y_u$ ($u$ for "underlying"; I'd have put $l$ for "latent", but it tends to look like a "1") to be two independent standard normal variates. You can make them correlated with a third variate, $Z_u$, by making $Z_u$ a linear combination of $X_u$, $Y_u$, and an independent noise variate $\epsilon$, which we'll also take to be standard normal here.
If we write $Z^*=aX_u+bY_u+c\epsilon$ then $Z^*$ is normal, but not standard normal.
$\text{Cov}(Z^*,X_u)=\text{Cov}(aX_u+bY_u+c\epsilon,X)=a\,\sigma^2_X=a$
Similarly $\text{Cov}(Z^*,Y_u)=b$ and $\text{Cov}(Z^*,\epsilon)=c$.
$\text{Var}(Z^*)=a^2+b^2+c^2$
So $\text{Cor}(Z^*,X_u)=\frac{a}{\sqrt{a^2+b^2+c^2}}$ and So $\text{Cor}(Z^*,Y_u)=\frac{b}{\sqrt{a^2+b^2+c^2}}$.
But I want $Z_u$ to have variance $1$, so if we define $Z_u=\frac{Z^*}{\sqrt{a^2+b^2+c^2}}$ then
$\text{Var}(Z_u)=\frac{a^2+b^2+c^2}{a^2+b^2+c^2}=1$
In the example, I chose $a=0.8,b=0.6,c=0$, which has $a^2+b^2+c^2=1$ and in that case $Z_u=Z^*$, and we have $\text{Cor}(Z_u,X_u)=a=0.8$ and $\text{Cor}(Z_u,Y_u)=b$.
If you choose to have $\text{Cor}(Z_u,X_u)=\rho\,,$ then $-\sqrt{1-\rho^2}\leq\text{Cor}(Z_u,Y_u)\leq \sqrt{1-\rho^2}$ (with the limits being achieved when $c=0$).
Note that these are population correlations, not sample correlations.
In the example you mention in comments, $a=b=\frac{1}{2}$, and $c=0$ which gives $\text{Cor}(Z^*,X_u)=\frac{a}{\sqrt{a^2+b^2+c^2}}=\frac{1/2}{\sqrt{(1/2)^2+(1/2)^2}}=\sqrt{\frac{1}{2}}\approx 0.7071$
-- but now to make $Z_u$ standard normal we need to divide through by
$\sqrt{a^2+b^2+c^2}=\sqrt{\frac{1}{2}}$, i.e.
$Z_u=Z^*/\sqrt{\frac{1}{2}}=\sqrt{2}Z^*$.
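As a quick numerical sanity check of these population correlations, here is a small simulation (a Python sketch of my own, using the same $a=0.8$, $b=0.6$, $c=0$ as above; the construction itself is language-agnostic):

```python
import random

random.seed(1)
a, b, c = 0.8, 0.6, 0.0  # a^2 + b^2 + c^2 = 1, so Z* is already standard normal

n = 100_000
xu = [random.gauss(0, 1) for _ in range(n)]
yu = [random.gauss(0, 1) for _ in range(n)]
eps = [random.gauss(0, 1) for _ in range(n)]
zu = [a * x + b * y + c * e for x, y, e in zip(xu, yu, eps)]

def corr(u, v):
    """Sample Pearson correlation."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    su = sum((t - mu) ** 2 for t in u) ** 0.5
    sv = sum((t - mv) ** 2 for t in v) ** 0.5
    return sum((p - mu) * (q - mv) for p, q in zip(u, v)) / (su * sv)

print(corr(zu, xu))  # close to a = 0.8
print(corr(zu, yu))  # close to b = 0.6
```

The sample correlations land close to the population values $a$ and $b$, as the derivation predicts.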
35,728 | Approaches for generating synthetic survey data with dependent answers? | Are your variables quantitative or categorical variables?
In an article we recently wrote, we wanted to simulate three quantitative answers to a survey: $z$ and $u$ had to be independent, and $X$ had to be correlated to both $z$ and $u$, so we generated them like this:
$\begin{align*}
u &\sim \mathcal{U}[a,b] \\
z &\sim \mathcal{U}[a,b] \\
\forall k, X_k &= \alpha \cdot z_k + \beta \cdot u_k + \sigma \cdot \epsilon \\
\end{align*}$
with $\epsilon \sim \mathcal{N}(0,1)$ and $\alpha, \beta, \sigma \in \mathbb{R}$. I believe this is a very common way to proceed; I can think of plenty of papers where people did comparable things.
For categorical variables, I'd suggest a very similar approach:
$\begin{align*}
z &\sim \mathcal{B}(n,p)~~~\text{(or whatever distribution suits your problem best)} \\
\forall k, X_k &= \lfloor z_k + \sigma \cdot \epsilon \rfloor \\
\end{align*}$
Parameters $\alpha, \beta, \sigma $ can be fine-tuned to match real survey answers in case you have data at your disposal.
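A sketch of the same recipe, translated to Python purely for illustration (the interval $[0,5]$ and the values of $\alpha$, $\beta$, $\sigma$ below are arbitrary choices of mine, not from the answer):

```python
import math
import random

random.seed(0)
n = 10_000
alpha, beta, sigma = 1.0, 1.0, 0.5  # arbitrary illustrative values

# z and u: independent uniform "answers" on [a, b] = [0, 5]
z = [random.uniform(0, 5) for _ in range(n)]
u = [random.uniform(0, 5) for _ in range(n)]

# X: correlated with both z and u via a linear combination plus Gaussian noise
x = [alpha * zi + beta * ui + sigma * random.gauss(0, 1) for zi, ui in zip(z, u)]

# categorical variant: floor a noisy copy of z, as in the second display
x_cat = [math.floor(zi + sigma * random.gauss(0, 1)) for zi in z]

def corr(p, q):
    mp, mq = sum(p) / len(p), sum(q) / len(q)
    sp = sum((t - mp) ** 2 for t in p) ** 0.5
    sq = sum((t - mq) ** 2 for t in q) ** 0.5
    return sum((a_ - mp) * (b_ - mq) for a_, b_ in zip(p, q)) / (sp * sq)

print(corr(x, z), corr(x, u))  # both clearly positive
print(corr(z, u))              # near zero: z and u stay independent
```

As intended, $X$ is correlated with both $z$ and $u$ while $z$ and $u$ remain (nearly) uncorrelated.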
35,729 | Why don't we train neural networks to maximize linear correlation instead of error? | Because that would be a completely different objective altogether. Note that unlike MSE, Pearson correlation is maximal iff there is a linear relationship between both variables. This means that
The network would "think" it has correctly learned its inputs if its output is roughly proportional to the dependent variable samples, rather than equal (or similar). Therefore predicting $Y$ or $2Y$ or $-Y$ (etc.) would be equivalent. This is generally undesirable, since we would like our network to give prediction similar to its inputs, rather than proportionally to said inputs.
There would not be a global minimum to the optimisation problem thus posed. Any proportional constant as set above would give an optimal solution. This is undesirable from a numerical point of view and would lead to instability.
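A tiny check of point 1 (my own illustration): the Pearson correlation cannot distinguish $Y$ from $2Y$ or $Y+c$, so a correlation objective is perfectly flat along those directions.

```python
def pearson(u, v):
    """Sample Pearson correlation coefficient."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

y = [1.0, 2.0, 4.0, 8.0]
print(pearson(y, y))                   # 1 (up to float rounding)
print(pearson(y, [2 * t for t in y]))  # also 1: scaling is invisible
print(pearson(y, [t + 7 for t in y]))  # also 1: shifting is invisible
```

Every rescaled or shifted prediction achieves the same "loss," which is exactly the missing-global-minimum problem of point 2.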
35,730 | Why don't we train neural networks to maximize linear correlation instead of error? | It's more like a comment, though I can't comment yet.
I'd suspect that minimizing RMSE with respect to a normalized input is (roughly?) equivalent to maximizing the Pearson correlation coefficient, but the latter is more computationally expensive.
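The suspected equivalence can in fact be made exact for standardized variables: if both the target and the prediction have zero mean and unit variance, then $\text{MSE} = 2(1-r)$, so minimizing one maximizes the other. A quick check (my own numbers):

```python
def standardize(v):
    # zero mean, unit (population) variance
    m = sum(v) / len(v)
    s = (sum((t - m) ** 2 for t in v) / len(v)) ** 0.5
    return [(t - m) / s for t in v]

def pearson(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

y    = standardize([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])
yhat = standardize([2.0, 2.0, 3.0, 1.0, 6.0, 8.0])

mse = sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)
r = pearson(y, yhat)
print(mse, 2 * (1 - r))  # the two quantities agree
```

The identity follows from expanding $\overline{(y-\hat y)^2} = \overline{y^2} + \overline{\hat y^2} - 2\,\overline{y\hat y} = 1 + 1 - 2r$ for standardized variables.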
I'd suspect that minimizing RMSE respective to a normalized input is (roughly?) equivalent to maximizing the Pearson correlation coefficient, but | Why don't we train neural networks to maximize linear correlation instead of error?
It's more like a comment, though I can't comment yet.
I'd suspect that minimizing RMSE respective to a normalized input is (roughly?) equivalent to maximizing the Pearson correlation coefficient, but the latter is more computationally expensive. | Why don't we train neural networks to maximize linear correlation instead of error?
It's more like a comment, though I can't comment yet.
I'd suspect that minimizing RMSE respective to a normalized input is (roughly?) equivalent to maximizing the Pearson correlation coefficient, but |
35,731 | Estimating Size of a Set based on two Overlapping Subsets | This sounds like the basic "capture-recapture" problem, sometimes called "mark and recapture".
You have a population of unknown size $N$; imagine them to be indistinguishable balls in an urn (all white, say). You take a sample of size $n$ randomly from the population, and mark them (paint them black say), return them to the population, and mix.
You then draw a new sample, of size $m$, of which $k$ are marked.
This is of course a hypergeometric model (i.e. $k$ is hypergeometric).
The aim here is to estimate $N$.
(In your example, the set of numbers selected by the first person are the "marked" ones.)
You can use a variety of methods to estimate $N$.
Note that the mean of the hypergeometric is $mn/N$, so a naive method of moments estimate is $\hat{N}=mn/k$. In your example, you'd guess $N=200$. In the capture-recapture literature this is called the Lincoln–Petersen estimator. It's intuitively appealing because it equates sample proportion and population proportion; asymptotically, the sample proportion will converge to the population proportion.
Obviously, since in some cases $k$ can be 0, the estimator can (with non-zero probability) be non-finite, which is somewhat of a bias problem (indeed, $E(\frac{mn}{k})>N$ even if $k$ can't be zero); if you modify your estimator when $k$ is quite small, it can nevertheless perform fairly well.
An estimator that is notionally similar is the Chapman estimator $\hat{N} = \frac{(m+1)(n+1)}{k+1} - 1$. It performs substantially better in small samples.
Note that both of these estimators can yield noninteger estimates.
The maximum likelihood estimator: the likelihood is increasing* in $N$ for integers below $\frac{mn}{k}$ and decreasing for integers above it; the integers $\lfloor \frac{mn}{k} \rfloor$ and $\lfloor \frac{mn}{k} \rfloor+1$ would seem to be the two possible candidates for maximizing the likelihood. It's a simple matter to directly compute the likelihood for both.
* in some circumstances it's actually nondecreasing between the last two points
Here's the likelihood function for a small interval around the method of moments estimator:
It turns out the likelihood is equally high for $\hat{N}=199$ and $\hat{N}=200$. (Indeed, since the $mn/k$ is biased upward, it might make some sense to choose the lower of the two in this instance, $\hat{N}=199$.)
According to Zhang (2009)[1], $\lfloor mn/k\rfloor$, the integer part of the method of moments estimator maximizes the likelihood (i.e. always round down). (I haven't checked this, but the argument looks sound. It doesn't hurt to directly compare a couple of values around $mn/k$ in any case -- in our example we discovered that the next lower value also maximizes the likelihood, though I think that can only happen when $mn/k$ is integer.)
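A brute-force check of the "round down" claim (the question's actual counts aren't given in this answer, so the $m$, $n$, $k$ below are hypothetical):

```python
from math import comb

# Hypothetical counts: n marked in the first sample, second sample of
# size m containing k marked.
n, m, k = 47, 50, 9
mom = m * n / k  # method-of-moments (Lincoln-Petersen) estimate

def likelihood(N):
    # P(k marked in a sample of m, drawn from N of which n are marked)
    return comb(n, k) * comb(N - n, m - k) / comb(N, m)

# N must be at least n + m - k for the sample to be possible at all
candidates = range(n + m - k, 1000)
N_hat = max(candidates, key=likelihood)
print(N_hat, mom)  # the MLE sits at floor(mn/k)
```

The likelihood ratio $L(N)/L(N-1) = \frac{(N-n)(N-m)}{N(N-n-m+k)}$ exceeds 1 exactly when $N < mn/k$, which is why the grid maximum lands on $\lfloor mn/k \rfloor$.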
Bayesian estimation is quite useful in this problem (when $k$ is small there's not a great deal of information in the sample to hold the "tail" down, so prior information about population size can be very useful), but of course the posterior distribution depends on the particular prior one chooses, and the estimator itself depends on the loss function selected.
Confidence intervals
Let's say we want a confidence interval for $N$, along with our estimate $\hat{N}$.
We can form a large-sample interval easily enough.
We know that $k$ is hypergeometric. From the normal approximation to the hypergeometric, $k$ is approximately distributed as $N\left(\frac{mn}{N},\frac{mn(N-m)(N-n)}{N^2(N-1)}\right)\,$.
So $(k-\frac{mn}{N})/\sqrt{\frac{mn(N-m)(N-n)}{N^2(N-1)}} = (kN-mn)/\sqrt{\frac{mn(N-m)(N-n)}{(N-1)}}$ is approximately standard normal.
By squaring and comparing with the upper 95% point of a $\chi^2_1$, we get an inequality in $N$ that can be rearranged into a cubic inequality: given $m,n$ and $k$, we find those values of $N$ such that $(kN-mn)^2(N-1)-3.84(mn(N-m)(N-n)) < 0$.
For the example problem that yields the following cubic:
(Being a cubic, there's another interval where the function is negative, but that's in an impossible region for $N$ in this instance)
The cubic curve crosses the horizontal axis at about N=186.75 and N=219.78
This suggests that an approximate 95% interval for $N$ should be something like $(187,220)$, though you might "round outward" to be safer.
We could possibly derive an explicit solution for the approximate bounds, but unless you're doing many hundreds of these it's probably not worth the effort to do more than find the zeros fairly automatically (polynomial or even general root-finding functions are widely available).
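A sketch of that root-finding step (again with hypothetical $m$, $n$, $k$, since the example's counts aren't shown in this answer; 3.84 is the upper 95% point of $\chi^2_1$):

```python
def cubic(N, m, n, k, chisq=3.84):
    # (kN - mn)^2 (N - 1) - chisq * mn (N - m)(N - n), as derived above
    return (k * N - m * n) ** 2 * (N - 1) - chisq * m * n * (N - m) * (N - n)

def bisect(f, lo, hi, tol=1e-6):
    # simple bisection; assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

m, n, k = 50, 47, 9          # hypothetical counts
f = lambda N: cubic(N, m, n, k)
mom = m * n / k              # the interval brackets the point estimate

# f > 0 just above max(m, n), f < 0 at the point estimate, f > 0 far out
lower = bisect(f, max(m, n), mom)
upper = bisect(f, mom, 5 * mom)
print(round(lower, 1), round(upper, 1))
```

The two sign changes straddling $mn/k$ are the interval endpoints; the cubic's third root lies below $\max(m,n)$, in the impossible region for $N$, just as the answer notes.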
In practice one should also investigate the actual coverage of intervals generated in this fashion for values of $N$ in the region of the estimate. That is, given $m$ and $n$, choose an $N$, simulate many $k$ (say 1000 or 10000), and find the proportion of intervals that include that $N$, repeating the exercise at several plausible values of $N$.
In the case of the Chapman estimator, we can obtain an approximate variance estimate for $\hat{N}$:
$\operatorname{var}(\hat{N}^{_\text{C}}) = \frac{(m+1)(n+1)(m-k)(n-k)}{(k+1)(k+1)(k+2)}$
(The $C$ superscript is to indicate the Chapman estimator)
In this case, $\hat{N}^{_\text{C}}$ should be asymptotically normal, so an asymptotic 95% interval would be
$\hat{N}^{_\text{C}}\pm 1.96 \sqrt{\frac{(m+1)(n+1)(m-k)(n-k)}{(k+1)(k+1)(k+2)}}$
which on your example gives $(184.95,216.4)$.
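Putting the Chapman point estimate, its variance estimate, and the asymptotic interval together (hypothetical counts again, so these numbers will not match the $(184.95, 216.4)$ quoted above):

```python
m, n, k = 50, 47, 9  # hypothetical counts

# Chapman estimator and its approximate variance, as given above
chapman = (m + 1) * (n + 1) / (k + 1) - 1
var_chapman = ((m + 1) * (n + 1) * (m - k) * (n - k)) / ((k + 1) ** 2 * (k + 2))

half = 1.96 * var_chapman ** 0.5  # asymptotic 95% half-width
print(chapman, (chapman - half, chapman + half))
```

With $k$ this small the variance is large and the interval correspondingly wide, which is the usual situation in sparse capture-recapture data.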
For the MLE, you'd get an asymptotic interval by approximating the log-likelihood at the peak by a quadratic (in effect, an asymptotic normal approximation for $\hat N$) to estimate $\hat{\sigma_{\hat N}}$ (which can be obtained from the approximating quadratic's second derivative) and from that normal approximation derive confidence limits. I won't labor the point, as details of such calculations can be found for a variety of MLEs, and are similar here.
[1]: Zhang, H. (2009),
"A Note About Maximum Likelihood Estimator in Hypergeometric Distribution,"
Comunicaciones en Estadística, June, Vol. 2, No. 1 | Estimating Size of a Set based on two Overlapping Subsets | This sounds like the basic "capture-recapture" problem, sometimes called "mark and recapture".
35,732 | What deviance is glmnet using to compare values of $\lambda$? | I just wanted to add to the input, but don't at the moment have a concise answer and it's too long for a comment. Hopefully this gives more insight.
It seems that the function of interest is in the unpacked glmnet library, and is called cv.lognet.R. It's hard to explicitly trace everything, as much is in S3/S4 code, but the above function is listed as an 'internal glmnet function,' used by the authors, and seems to match how cv.glmnet is calculating the binomial deviance.
While I didn't see it anywhere in the paper, from tracing the glmnet code to cv.lognet, what I gather is that it is using something called the capped binomial deviance
described here.
$-[Y\log_{10}(E) + (1-Y)\log_{10}(1-E)]$
predmat is a matrix of the capped probability values $(E, 1-E)$ output for each lambda, which are compared to the $y$ values and their complements, resulting in lp. They are then put in the 2*(ly-lp) deviance form and averaged over cross-validated hold-out folds to get cvm, the mean cross-validated error, and the CV ranges that you have plotted in the first image.
I think the manual deviance function (2nd plot) is not calculated the same way this internal one (1st plot) is.
# from cv.lognet.R
cvraw=switch(type.measure,
"mse"=(y[,1]-(1-predmat))^2 +(y[,2]-predmat)^2,
"mae"=abs(y[,1]-(1-predmat)) +abs(y[,2]-predmat),
"deviance"= {
predmat=pmin(pmax(predmat,prob_min),prob_max)
lp=y[,1]*log(1-predmat)+y[,2]*log(predmat)
ly=log(y)
ly[y==0]=0
ly=drop((y*ly)%*%c(1,1))
    2*(ly-lp)
  })
# cvm output
cvm=apply(cvraw,2,weighted.mean,w=weights,na.rm=TRUE)
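The capping-plus-deviance arithmetic in the snippet above can be mirrored in a few lines of Python (a scalar sketch of my own; the cap prob_min=1e-5 below is an illustrative choice, not necessarily glmnet's exact constant):

```python
import math

def capped_binomial_deviance(y, p, prob_min=1e-5):
    """Per-observation 2*(ly - lp), with the fitted probability capped away
    from 0 and 1, mirroring the pmin(pmax(...)) step in the R code."""
    prob_max = 1 - prob_min
    p = min(max(p, prob_min), prob_max)
    lp = (1 - y) * math.log(1 - p) + y * math.log(p)  # model log-likelihood
    ly = 0.0  # saturated log-likelihood is 0 when y is exactly 0 or 1
    return 2 * (ly - lp)

print(capped_binomial_deviance(1, 0.9))   # small: confident and correct
print(capped_binomial_deviance(1, 1e-9))  # large but finite, thanks to capping
```

Without the capping step the second call would blow up toward infinity; the cap is what keeps a single badly misclassified observation from dominating cvm.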
It seems that the function of interest is in the un | What deviance is glmnet using to compare values of $\lambda$?
I just wanted to add to the input, but don't at the moment have a concise answer and it's too long for a comment. Hopefully this gives more insight.
It seems that the function of interest is in the unpacked glmnet library, and is called
cv.lognet.R It's hard to explicitly trace everything, as much is in S3/S4 code, but the above function is listed as an 'internal glmnet function,' used by the authors and seems to match how the cv.glmnet is calculating the binomial deviance.
While I didn't see it anywhere in the paper, from tracing the glmnet code to cv.lognet, what I gather is that it is using something called the capped binomial deviance
described here.
$-[Y\log_{10}(E) + (1-Y)\log_{10}(1-E)]$
predmat is a matrix of the capped probability values (E, 1-E) output for each lambda, that are compared to the y and y's complement values resulting in lp. They are then put in the 2*(ly-lp) deviance form and averaged over cross-validated hold out folds to get cvm
- The mean cross-validated error - and cv ranges, that you have plotted in the first image.
I think the manual deviance function (2nd plot) is not calculated the same way this internal one (1st plot) is.
# from cv.lognet.R
cvraw=switch(type.measure,
"mse"=(y[,1]-(1-predmat))^2 +(y[,2]-predmat)^2,
"mae"=abs(y[,1]-(1-predmat)) +abs(y[,2]-predmat),
"deviance"= {
predmat=pmin(pmax(predmat,prob_min),prob_max)
lp=y[,1]*log(1-predmat)+y[,2]*log(predmat)
ly=log(y)
ly[y==0]=0
ly=drop((y*ly)%*%c(1,1))
2*(ly-lp)
# cvm output
cvm=apply(cvraw,2,weighted.mean,w=weights,na.rm=TRUE) | What deviance is glmnet using to compare values of $\lambda$?
I just wanted to add to the input, but don't at the moment have a concise answer and it's too long for a comment. Hopefully this gives more insight.
It seems that the function of interest is in the un |
35,733 | What deviance is glmnet using to compare values of $\lambda$? | So I visited the CRAN site and downloaded what I think is the source of the glmnet package. In ./glmnet/R/plot.cv.glmnet.R it seems that you'd find the source code you're after. It's pretty brief so I'll paste here but it's probably best if you check it out yourself to be sure that it is indeed the code that is running.
plot.cv.glmnet=function(x,sign.lambda=1,...){
cvobj=x
xlab="log(Lambda)"
if(sign.lambda<0)xlab=paste("-",xlab,sep="")
plot.args=list(x=sign.lambda*log(cvobj$lambda),y=cvobj$cvm,ylim=range(cvobj$cvup,cvobj$cvlo),xlab=xlab,ylab=cvobj$name,type="n")
new.args=list(...)
if(length(new.args))plot.args[names(new.args)]=new.args
do.call("plot",plot.args)
error.bars(sign.lambda*log(cvobj$lambda),cvobj$cvup,cvobj$cvlo,width=0.01,col="darkgrey")
points(sign.lambda*log(cvobj$lambda),cvobj$cvm,pch=20,col="red")
axis(side=3,at=sign.lambda*log(cvobj$lambda),labels=paste(cvobj$nz),tick=FALSE,line=0)
abline(v=sign.lambda*log(cvobj$lambda.min),lty=3)
abline(v=sign.lambda*log(cvobj$lambda.1se),lty=3)
invisible()
}
35,734 | Is a negative OOB score possible with scikit-learn's RandomForestRegressor? | RandomForestRegressor's oob_score_ attribute is the score of out-of-bag samples. scikit-learn uses "score" to mean something like "measure of how good a model is", which is different for different models. For RandomForestRegressor (as for most regression models), it's the coefficient of determination, as can be seen by the doc for the score() method.
This is defined as $(1 - u/v)$,
where $u$ is the regression's sum squared error $u = \sum_i (y_i - \hat{y}_i)^2$,
and $v$ is the sum squared error of the best constant predictor $v = \sum_i (y_i - \bar{y})^2$ (where sums range over the test instances).
This measure can indeed be negative, if $u > v$, i.e. your model is worse than the best constant predictor. This means your model kind of sucks; usually models get positive scores. The score of .0001 or whatever means that your model is only just barely better than the best constant predictor.
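Since the score is just $1 - u/v$, it is easy to reproduce by hand and watch it go negative (no scikit-learn needed for the arithmetic):

```python
def r2_score(y_true, y_pred):
    # 1 - u/v, exactly as defined above
    mean = sum(y_true) / len(y_true)
    u = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    v = sum((yt - mean) ** 2 for yt in y_true)
    return 1 - u / v

y = [1.0, 2.0, 3.0, 4.0]
print(r2_score(y, [1.1, 2.1, 2.9, 4.0]))   # good model: close to 1
print(r2_score(y, [2.5] * 4))              # predicting the mean: exactly 0
print(r2_score(y, [4.0, 3.0, 2.0, 1.0]))   # worse than the mean: negative
```

Predicting the sample mean everywhere gives a score of 0, so any model with a negative score is doing worse than that trivial baseline.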
35,735 | Relationship between skew and kurtosis in a sample | A discussion on the limits of the sample skewness and kurtosis is available here. The author gives proper references to the original proofs, and the cited results are:
$$
|g_1| \le \frac{n-2}{\sqrt{n-1}} = \sqrt{n-1} - \frac{1}{\sqrt{n-1}}
$$
$$
b_2 = g_2 + 3 \le \frac{n^2-3n+3}{n-1} = n -2 + \frac1{n-1}
$$
So for $n=10$, you can't have skewness greater than $8/3 \approx 2.67$, or excess kurtosis greater than $5.11$.
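Both bounds are attained by the extremal sample with $n-1$ equal values and one outlier. A quick check using all-$n$ denominators (note the skewness bound at $n=10$ works out to $(n-2)/\sqrt{n-1} = 8/3 \approx 2.67$):

```python
n = 10
x = [0.0] * (n - 1) + [1.0]  # the configuration that attains the bounds

mean = sum(x) / n
m2 = sum((t - mean) ** 2 for t in x) / n
m3 = sum((t - mean) ** 3 for t in x) / n
m4 = sum((t - mean) ** 4 for t in x) / n

g1 = m3 / m2 ** 1.5  # sample skewness
b2 = m4 / m2 ** 2    # sample kurtosis (g2 = b2 - 3 is the excess)

print(g1, (n - 2) / (n - 1) ** 0.5)  # both 8/3 = 2.666...
print(b2, n - 2 + 1 / (n - 1))       # both 8.111..., excess 5.111...
```

The sample hits both quoted limits exactly, confirming they are sharp.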
$$
|g_1| \le \frac{n-2}{\sqrt{n- | Relationship between skew and kurtosis in a sample
A discussion on the limits of the sample skewness and kurtosis is available here. The author gives proper references to the original proofs, and the cited results are:
$$
|g_1| \le \frac{n-2}{\sqrt{n-1}} = \sqrt{n-1} - \frac{1}{\sqrt{n-1}}
$$
$$
b_2 = g_2 + 3 \le \frac{n^2-3n+3}{n-1} = n -2 + \frac1{n-1}
$$
So for $n=10$, you can't have skewness greater than 2.89, and excess kurtosis, greater than 5.11. | Relationship between skew and kurtosis in a sample
A discussion on the limits of the sample skewness and kurtosis is available here. The author gives proper references to the original proofs, and the cited results are:
$$
|g_1| \le \frac{n-2}{\sqrt{n- |
35,736 | Relationship between skew and kurtosis in a sample | Some broad discussion on how to understand the problem in the absence of sample definitions.
Since the quoted relationship applies to distributions, if you treat the ecdf as the cdf of a distribution, and apply those population definitions you gave, the relationship must still hold. That is, if you use $n$ denominators on all the averages in the sample definitions (including the calculation of $\hat{\sigma}^2$), so that they're expected values on that distribution, the relationship should be what you stated.
So, by defining your central sample moments all as $m_k=\frac{1}{n}\sum_i (x_i-\bar{x})^k$, you must get the same result you quoted; no additional algebra is required.
If you subsequently want to use different definitions, by writing the new ones as functions of the old ones just mentioned (pulling out scaling terms for any non-$n$ denominators), you should be able to then derive the relationships you seek (which should still asymptotically go to the relationship you mention)
So, for example, if you use the sample definition here:
$g_2 = \frac{m_4}{m_2^2}-3\,$,
and the equivalent for skewness,
$g_1 = \frac{m_3}{m_2^{3/2}}\,$,
the proof that established the population relationship will still apply.
If instead you used the definition for sample skewness here (note that this would leave you with inconsistent definitions of the variance estimates!), then you can simply write
$b_1 = g_1 \frac{m_2^{3/2}}{s^3} = g_1 (\frac{n-1}{n})^{3/2}$
and then use the relationship you quoted to derive one between $g_2$ and $b_1$. And so on for other definitions (you might like to try it with $G_1$ mentioned in the wikipedia article on skewness, for example).
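A numeric check of that conversion factor on arbitrary data (my own illustration):

```python
x = [2.0, 3.0, 5.0, 7.0, 11.0, 13.0]
n = len(x)
mean = sum(x) / n

m2 = sum((t - mean) ** 2 for t in x) / n        # n-denominator variance
m3 = sum((t - mean) ** 3 for t in x) / n
g1 = m3 / m2 ** 1.5                             # skewness with n denominators

s2 = sum((t - mean) ** 2 for t in x) / (n - 1)  # n-1 denominator variance
b1 = m3 / s2 ** 1.5                             # skewness with s^3 in the denominator

print(b1, g1 * ((n - 1) / n) ** 1.5)  # identical, as the identity says
```

Since $s^2 = \frac{n}{n-1} m_2$, the two skewness definitions differ only by the factor $\left(\frac{n-1}{n}\right)^{3/2}$, which the printout confirms.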
35,737 | Manually calculating p-value for t-test: How to avoid values bigger than $1$? | You can make use of abs in the numerator (so it's always >0) and keep the lower.tail=FALSE.
35,738 | Manually calculating p-value for t-test: How to avoid values bigger than $1$? | Glen_b is absolutely right about the abs; however, I have found that in certain data sets the values would require -abs to have the desired effect. I'm not able to explain why, but I'll leave these lines of code here, in case anyone who is having a similar problem finds this thread.
t.value <- beta_coeff / beta_se   # coefficient estimate divided by its standard error (placeholder names)
p.value <- 2 * pt(-abs(t.value), df = nrow(data) - 2)
Expanded answer at the request of mdewey.
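Both answers can be checked numerically. A small Python sketch (using the standard normal as a large-df stand-in for the t distribution, since this is just for illustration) shows that doubling the upper tail of a negative statistic overshoots 1, while taking abs first fixes it, and that the lower tail of -abs equals the upper tail of abs by symmetry:

```python
from statistics import NormalDist

z = NormalDist()      # stand-in for a t distribution with large df
t_stat = -2.5         # a negative test statistic

# doubling the raw upper tail overshoots 1 when the statistic is negative
p_bad = 2 * (1 - z.cdf(t_stat))

# upper tail of |t| (the abs fix) and lower tail of -|t| (the -abs variant)
# are the same number by symmetry, and always lie in (0, 1]
p_upper = 2 * (1 - z.cdf(abs(t_stat)))
p_lower = 2 * z.cdf(-abs(t_stat))
```

So `pt(-abs(t.value))` with the default lower tail and `pt(abs(t.value), lower.tail = FALSE)` are two spellings of the same valid two-sided p-value.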
35,739 | Understanding the error term | $e$ is the population error (the part of $Y$ not explained by the linear combination of $X$ and $\beta$), while $\hat{e}$ are the residuals (the sample part of $Y$ not explained by $X$ and $\hat{\beta}$). Linear regression assumes $e \overset{\rm iid}{\sim} N(0, \sigma^2)$ with $\sigma^2$ constant, and we should check this using the observed sample $\hat{e}$.
The error is any source of variation of $Y$ not included in the model, either from excluded variables or from measurement error, as long as they comply with the distributional assumptions.
We usually use $Y$ to refer generically to the population, and $y_i$ to refer to the $i^{\rm th}$ sampled observation.
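A small simulation (made-up data, pure Python) makes the distinction concrete: the errors $e$ are generated but would be unobservable in practice, while the residuals $\hat{e}$ are computed from the fit and merely estimate them:

```python
import random

def ols_fit(x, y):
    # least-squares slope and intercept for a single predictor
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

random.seed(0)
n = 500
x = [random.uniform(0, 10) for _ in range(n)]
e = [random.gauss(0, 1) for _ in range(n)]        # population errors (never observed)
y = [1.0 + 2.0 * x[i] + e[i] for i in range(n)]

slope, intercept = ols_fit(x, y)
resid = [y[i] - (intercept + slope * x[i]) for i in range(n)]  # residuals e-hat

# residuals average exactly zero by construction; the errors only approximately
mean_resid = sum(resid) / n
```

Checking normality and constant variance is then done on `resid`, the observable stand-in for `e`.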
35,740 | Tuning alpha parameter in LASSO linear model in scikitlearn | First: trying to set alpha to find a pre-specified number of important features isn't a good idea. Whether a feature is predictive of the response is a property of the data, not your model. So you want your model to tell you how many features are important, not the other way around. If you try to mess with your alpha until it finds a pre-specified number of features to be predictive, you run the risk of overfitting (if there are really fewer predictive features than that) or underfitting (if there are more).
This is why the tuning parameter is often selected automatically by minimizing cross-validated generalization error. In the cross-validation setting, people frequently do something similar to finding the "minimum adequate number of features": select the largest alpha whose error is at most one standard error above that of the alpha with the lowest cross-validated error (e.g. here, p. 18). The rationale for this is that there's some noise in the cross-validated error estimate, and if you select the alpha that simply minimizes the estimate you risk overfitting to the noise, so it's better to "err on the side of parsimony," as the paper puts it.
On the other hand, some papers (e.g. here) have noted that selecting alpha by minimizing cross-validated error does not yield consistent feature selection in practice (i.e., where features are selected if and only if they should be). An alternative is selecting based on the BIC, as advocated by e.g. Zou, Hastie and Tibshirani here. (For the BIC one should set the "degrees of freedom" equal to the rank of the feature matrix for the features found to be nonzero; see the paper for more detail.)
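The way alpha controls how many features survive can be seen with a bare-bones coordinate-descent lasso. This is a sketch of the algorithm scikit-learn implements, not its actual code, and the data and penalty values are invented:

```python
import random

def soft_threshold(rho, alpha):
    if rho > alpha:
        return rho - alpha
    if rho < -alpha:
        return rho + alpha
    return 0.0

def lasso_cd(X, y, alpha, n_sweeps=100):
    """Coordinate descent for (1/2)||y - Xw||^2 + alpha * ||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    z = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(n_sweeps):
        for j in range(p):
            # partial residual leaving feature j out
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            w[j] = soft_threshold(rho, alpha) / z[j]
    return w

random.seed(0)
n, p = 60, 5
X = [[random.uniform(-1, 1) for _ in range(p)] for _ in range(n)]
# only the first two features actually matter
y = [3 * row[0] - 2 * row[1] + random.gauss(0, 0.05) for row in X]

w_small = lasso_cd(X, y, alpha=0.5)
w_large = lasso_cd(X, y, alpha=1000.0)  # alpha above max|X'y| zeroes everything
```

A large enough alpha kills every coefficient, a small alpha keeps the truly predictive ones; which count is "right" is a property of the data, which is exactly why cross-validation or BIC, rather than a target feature count, should pick alpha.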
35,741 | Tuning alpha parameter in LASSO linear model in scikitlearn | You could try applying Elastic Net regression sometimes. It combines the Lasso and Ridge regression methods in order to give your feature selection a 'human touch'. This is quite helpful in optimizing your features while preserving the intuition from features in your data.
35,742 | Time series regression with lagged dependent and independent variables | First, you should decide on using a univariate or a multivariate model. It seems reasonable to think that oil price and unemployment are causal for air travel demand and not the other way around. Thus, in line with one of the answers to this post, you may address your study in a univariate setting. If the previous assumption is not appropriate, then you may take a multivariate approach, for example a VAR model, as mentioned by @Miha Trošt.
In the univariate setting, you can consider the following models:
ARIMAX models: these are ARIMA models as the model that you selected which include exogenous regressors.
Distributed lag models: these models are based on a regression equation that includes lagged versions of the explanatory variables.
Autoregressive distributed lag models: like the previous model, but also including lags of the dependent variable as regressors.
Did you check whether the regular and seasonal differencing filters applied by the airlines model are necessary? You mention that the series is measured in rates; this may already render the series stationary. This need not be the case, though. I didn't see the data, so this is just a guess.
You should also be concerned with the correlation among the regressors. Oil price and unemployment may be correlated. If correlation exists and is high, estimates of the parameters may not be accurate. In that case, you may include only one of the regressors. There are some techniques to deal with multicollinearity, but with only two regressors it is probably not worth complicating the analysis too much, and it will probably be safe to keep both variables unless they are highly correlated.
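A minimal distributed-lag sketch (invented data; one lagged regressor, fit by ordinary least squares in pure Python) shows how the lagged design is built and the lag coefficient recovered:

```python
import random

def ols_slope(x, y):
    # least-squares slope for a single predictor
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

random.seed(1)
T = 201
x = [random.gauss(0, 1) for _ in range(T)]  # e.g. stationary oil-price shocks
# the response reacts with a one-period lag: y_t = 2 * x_{t-1} + noise
y = [2.0 * x[t - 1] + random.gauss(0, 0.1) for t in range(1, T)]

x_lag1 = x[:T - 1]              # x_{t-1} aligned with y_t
beta_hat = ols_slope(x_lag1, y)
```

The same alignment trick extends to more lags, to lags of the dependent variable (the autoregressive distributed lag case), and to a second regressor such as unemployment.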
35,743 | Weighted Kendall tau rank correlation coefficient | I don't have commenting privileges, so I will attempt an answer here. Perhaps your original question is unclear, but here are answers depending on your exact meaning:
"I want to penalise R2 more as the differences in position is towards the head than tail. Thus, along with the ranking, I also want to take into consideration the position."
If you want to penalize R2 because it moved too far towards position 1, then despite your response to another answer, you do care about relevance. In other words, if errors too far towards the head or towards the tail matter, then relevance-based ranking is what you are looking for. The other answer's suggestion of Discounted cumulative gain is a good choice.
Alternatively, perhaps you think that in R2 there was a bigger absolute change or jump in the ranking, for which you want to give a penalty. In fact, the difference in both cases is -2: in R1, 6 moved -2 to rank 4; in R2, 4 moved -2 to rank 2. Thus, Kendall's tau is identical, because tau only cares about how much difference there is, not where exactly the jump occurred. If, for instance, there had only been a jump of 1 (e.g. if R3 were to be 2,1,3,4,5,6), then tau would have a larger value (indicating more concordance). If that's the case, then Kendall's tau might be just what you need.
"I want to penalise R2 more as th | Weighted Kendall tau rank correlation coefficient
I don't have commenting privileges, so I will attempt an answer here. Perhaps your original question is unclear, but here are answers depending on your exact meaning:
"I want to penalise R2 more as the differences in position is towards the head than tail. Thus, along with the ranking, I also want to take into consideration the position."
If you want to penalize R2 because it moved too far towards position 1, then despite your response to another answer, you do care about relevance. In other words, if errors too far towards the head or towards the tail matter, than relevance-based ranking is what you are looking for. The other answer's suggestion of Discounted cumulative gain is a good choice.
Alternatively, I don't know if you think that in R2 that there was a bigger absolute change or jump in the ranking, for which you want to give a penalty. In fact, the difference in both cases is -2: In R1, 6 moved -2 to rank 4; in R2, 4 moved -2 to 2. Thus, Kendall's tau is identical, because tau only cares about how much difference there is, not where exactly the jump occurred. If for instance, there had only been a jump in 1 (e.g. if R3 were to be 2,1,3,4,5,6), then tau would have a larger value (indicating more concordance). If that's the case, then Kendall's tau might be just what you need. | Weighted Kendall tau rank correlation coefficient
I don't have commenting privileges, so I will attempt an answer here. Perhaps your original question is unclear, but here are answers depending on your exact meaning:
"I want to penalise R2 more as th |
35,744 | Weighted Kendall tau rank correlation coefficient | A positional weighted Kendall tau (a.k.a. Kemeny) metric would do the job. This is a generalization of the original metric that assigns weights to possible swaps of consecutive positions (hence an infinite class of metrics).
A simple example would be assigning one weight to a swap between the 1st and 2nd alternatives, another to a swap between the 2nd and 3rd, and so forth.
You can see more here:
http://www.sciencedirect.com/science/article/pii/S0304406814000068
Or here an extended free access version:
http://digitalarchive.maastrichtuniversity.nl/fedora/get/guid:d5f76b52-4b10-4123-9e5f-f50f53067abc/ASSET1
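One simple instance of such a positional metric (a sketch, not the cited paper's exact definition) charges each adjacent swap a position-dependent cost, tallied here along a bubble sort of the permutation; with unit costs it reduces to the ordinary Kendall (inversion) distance:

```python
def weighted_kendall_distance(perm, swap_cost):
    # total cost of the adjacent swaps bubble sort uses to sort perm;
    # swap_cost[p] is the cost of swapping positions p and p+1
    a = list(perm)
    total = 0
    swapped = True
    while swapped:
        swapped = False
        for p in range(len(a) - 1):
            if a[p] > a[p + 1]:
                a[p], a[p + 1] = a[p + 1], a[p]
                total += swap_cost[p]
                swapped = True
    return total

head_swap = [2, 1, 3]     # disagreement at the head of the list
tail_swap = [1, 3, 2]     # same-size disagreement at the tail
costs = [2, 1]            # swaps near the head cost more
```

Here `head_swap` scores 2 and `tail_swap` scores 1, so disagreements at the head are penalized more, which is the behavior asked for.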
35,745 | Weighted Kendall tau rank correlation coefficient | I don't know if it is possible with Kendall tau, but some ranking measures, such as Discounted cumulative gain, naturally penalize inversions towards one extreme of the list more heavily.
35,746 | Benchmark datasets for testing multiple regression or multivariate regression model? | See NIST's Statistical Reference Datasets. These include data chosen or designed to present numerical challenges to regression algorithms. The Longley data, with highly collinear predictors, is perhaps the most famous example.
35,747 | Benchmark datasets for testing multiple regression or multivariate regression model? | Generate some random data yourself in whatever language you're using, following the assumptions of your model. E.g., for linear regression, generate $X$, then $\beta$, then compute $y = X\beta + \epsilon$, where $\epsilon$ is normally distributed with mean zero and an sd of, say, 1. See if you can recover the correct $\beta$ when varying the error sd. Compare with the multitude of established tools for doing regression.
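That recipe, sketched in pure Python for two predictors (all numbers are arbitrary choices), with the normal equations solved by Gauss-Jordan elimination:

```python
import random

def solve(A, b):
    """Gauss-Jordan solve of A w = b for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]          # partial pivoting
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

random.seed(42)
n = 200
beta = [0.5, 1.5, -2.0]   # intercept, b1, b2 (the ground truth to recover)
X = [[1.0, random.uniform(0, 10), random.uniform(0, 10)] for _ in range(n)]
y = [sum(b * x for b, x in zip(beta, row)) + random.gauss(0, 1) for row in X]

# normal equations: (X'X) w = X'y
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(X[i][j] * y[i] for i in range(n)) for j in range(3)]
w = solve(XtX, Xty)
```

The fitted `w` should land close to `beta`; raising the noise sd widens the gap, which is exactly the check the answer describes.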
35,748 | Different p-values for fixed effects in summary() of glmer() and likelihood ratio test comparison in R | It looks like you are seeing the difference between Wald p-values (based on the curvature of the log-likelihood surface at the maximum likelihood estimate) and likelihood ratio test p-values (based on comparisons between the full and reduced models).
take a look at tpr <- profile(a25,which="beta_"); lattice::xyplot(tpr). You should see that the lines are far from straight (straight lines would indicate a log-quadratic likelihood surface, which is what's assumed by Wald p-values)
compare the results of confint(a25,which="beta_") (likelihood ratio intervals) and confint(a25,which="beta_",method="Wald"); they should be quite different.
LRT CI/p-values are essentially always better than the Wald equivalents (but much slower to compute, which is why Wald p-values are the default in summary()).
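The Wald/LRT gap is not specific to glmer(). A toy binomial example in Python (made-up counts; normal and χ²₁ tails computed via the standard normal) shows the two p-values can even straddle a significance threshold:

```python
import math
from statistics import NormalDist

def loglik(p, k, n):
    # Bernoulli log-likelihood for k successes in n trials
    return k * math.log(p) + (n - k) * math.log(1 - p)

n, k, p0 = 30, 27, 0.75        # 27/30 successes, H0: p = 0.75
phat = k / n

# Wald: quadratic approximation of the log-likelihood at the MLE
wald_z = (phat - p0) / math.sqrt(phat * (1 - phat) / n)
# LRT: full-vs-null log-likelihood comparison
lrt = 2 * (loglik(phat, k, n) - loglik(p0, k, n))

std = NormalDist()
p_wald = 2 * std.cdf(-abs(wald_z))
p_lrt = 2 * std.cdf(-math.sqrt(lrt))   # chi^2_1 upper tail of the LRT statistic
```

Because the log-likelihood here is far from quadratic (like the curved profile plots from `xyplot(tpr)`), the Wald p-value is roughly 0.006 while the LRT p-value is roughly 0.037.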
35,749 | Poisson process and the memoryless property | Memorylessness is a property of the following form:
$$\Pr(X>m+n \mid X > m)=\Pr(X>n)\ .$$
This property holds for $X_1=\ \text{time to the next event in a Poisson process}\ $, but it doesn't hold for $X_k=\ \text{time to the}\, k^\text{th}\, \text{event in a Poisson process}\ $ when $k>1$.
As for how to show it, you could try to do it from first principles.
If you can show the essentially equivalent form $P(X>s+t)\neq P(X>s)P(X>t)$ (for some $s, t>0$), that would be sufficient; you already know the distribution for $X_k$.
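For a rate-1 process this can be checked exactly: the time to the first event is exponential with $P(X>x)=e^{-x}$, while the time to the second event is Erlang with $P(X>x)=e^{-x}(1+x)$:

```python
import math

def surv_exp(x):
    # P(X > x) for the time to the 1st event (rate 1): memoryless
    return math.exp(-x)

def surv_erlang2(x):
    # P(X > x) for the time to the 2nd event (rate 1, Erlang-2)
    return math.exp(-x) * (1 + x)

s, t = 1.0, 2.0
# the exponential satisfies P(X > s+t) = P(X > s) P(X > t) ...
assert math.isclose(surv_exp(s + t), surv_exp(s) * surv_exp(t))
# ... but the time to the second event does not
assert not math.isclose(surv_erlang2(s + t), surv_erlang2(s) * surv_erlang2(t))
```

Here $e^{-3}(1+3)=4e^{-3}$ on the left versus $2e^{-1}\cdot 3e^{-2}=6e^{-3}$ on the right, so memorylessness fails for $k=2$.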
35,750 | Logistic-Regression: Prior correction at test time | For any distribution over a binary variable $C$ and continuous variable $x$:
\begin{align}
p(C_1|x) &= \frac{p(x|C_1)p(C_1)}{p(x)}\\
&= \frac{p(x|C_1)p(C_1)}{p(x|C_1)p(C_1) + p(x|C_2)p(C_2)}\\
&= \frac{1}{1 + \frac{p(x|C_2)p(C_2)}{p(x|C_1)p(C_1)}}\\
&= \frac{1}{1 + \exp\left(\ln\frac{p(x|C_2)p(C_2)}{p(x|C_1)p(C_1)}\right)}\\
&= \frac{1}{1 + \exp\left(-\ln\frac{p(x|C_1)p(C_1)}{p(x|C_2)p(C_2)}\right)}\\
&= \frac{1}{1 + \exp\left(-(w^Tx + b)\right)},
\end{align}
where we define $C_1$ as the event where $C=1$ and $C_2$ as the event where $C=0$. Notice this is the typical hypothesis assumed during binary logistic regression. From the above, we have that
\begin{equation}
w^Tx + b = \ln\frac{p(x|C_1)p(C_1)}{p(x|C_2)p(C_2)}= \ln\frac{p(x|C_1)}{p(x|C_2)} + \ln\frac{p(C_1)}{p(C_2)}.
\end{equation}
If, during training, we balance the dataset or weigh the examples inversely to their class prior probabilities, we effectively have that $p(C_1) = p(C_2)$, then the above becomes
\begin{equation}
w^Tx + b = \ln\frac{p(x|C_1)}{p(x|C_2)}.
\end{equation}
The parameters $w$ and $b$ are therefore estimated under the assumption that the class prior probabilities are balanced or equal. We can re-introduce the prior log odds:
\begin{align}
w^Tx + b + \ln\frac{p(C_1)}{p(C_2)} &= \ln\frac{p(x|C_1)}{p(x|C_2)}+\ln\frac{p(C_1)}{p(C_2)}\\
w^Tx + b' &= \ln\frac{p(x|C_1)}{p(x|C_2)}+\ln\frac{p(C_1)}{p(C_2)},
\end{align}
where $b' = b + \ln\frac{p(C_1)}{p(C_2)}$. So by a simple adjustment to the bias term, we can re-introduce unbalanced priors in the test/application setting. A similar argument holds for the case of multi-class logistic regression.
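A quick numerical check of the bias adjustment (made-up class-conditional Gaussians with unit variance, for which the balanced-prior logistic weights are available in closed form):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def npdf(x, mu):
    # standard-deviation-1 normal density
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

mu1, mu2 = 1.0, -1.0      # class-conditional means, unit variance
p1 = 0.9                  # true, imbalanced prior for class 1

# weights implied by BALANCED priors: w x + b = ln f(x|C1)/f(x|C2)
w = mu1 - mu2                          # = 2
b = -(mu1 ** 2 - mu2 ** 2) / 2         # = 0
b_corr = b + math.log(p1 / (1 - p1))   # re-introduce the prior log odds

# corrected sigmoid matches the exact Bayes posterior at every test point
max_err = max(
    abs(sigmoid(w * x + b_corr)
        - p1 * npdf(x, mu1) / (p1 * npdf(x, mu1) + (1 - p1) * npdf(x, mu2)))
    for x in (-2.0, 0.0, 1.5)
)
```

`max_err` is zero up to floating point: shifting the bias by the prior log odds is all that is needed at test time.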
35,751 | Maximum likelihood estimator for $\theta$ and $E[X]$ | Let $X_1,\dots,X_n$ be a random sample from density $f(x_i)=(\theta/x_i^2)\,I_{(\theta,\infty)}(x_i)$, for $\theta>0$. Since $I_{(\theta,\infty)}(x_i)=I_{(0, x_i)}(\theta)$, writing $x=(x_1,\dots,x_n)$, the likelihood function is
$$
L_x(\theta) = \frac{\theta^n}{\prod_{i=1}^n x_i^2} I_{(0,x_{(1)})}(\theta) \, , \qquad (*)
$$
in which $x_{(1)}=\min\{x_1,\dots,x_n\}$. The way the density is defined implies that there is no MLE in the usual sense, because the candidate $x_{(1)}\neq\arg\max_\theta L_x(\theta)$. In fact, $L_x(x_{(1)})=0$. If, for each $\theta>0$, we change the version of the density in just one point, and this doesn't change the family of sampling distributions, doing $f(x_i)=(\theta/x_i^2)\,I_{[\theta,\infty)}(x_i)$, then it's true that $\hat{\theta}_{\mathrm{MLE}}=X_{(1)}$.
This is not a serious difficulty, but it's a curious case in which the particular versions of the sampling densities chosen for the problem change the answer. In the second edition of DeGroot's "Probability and Statistics" there is a similar example starting on page 343.
Also, since $\mathrm{E}_\theta[X_i]=\infty$, for every $\theta>0$, asking for an MLE of this quantity doesn't make sense. | Maximum likelihood estimator for $\theta$ and $E[X]$ | Let $X_1,\dots,X_n$ be a random sample from density $f(x_i)=(\theta/x_i^2)\,I_{(\theta,\infty)}(x_i)$, for $\theta>0$. Since $I_{(\theta,\infty)}(x_i)=I_{(0, x_i)}(\theta)$, writing $x=(x_1,\dots,x_n) | Maximum likelihood estimator for $\theta$ and $E[X]$
Let $X_1,\dots,X_n$ be a random sample from density $f(x_i)=(\theta/x_i^2)\,I_{(\theta,\infty)}(x_i)$, for $\theta>0$. Since $I_{(\theta,\infty)}(x_i)=I_{(0, x_i)}(\theta)$, writing $x=(x_1,\dots,x_n)$, the likelihood function is
$$
L_x(\theta) = \frac{\theta^n}{\prod_{i=1}^n x_i^2} I_{(0,x_{(1)})}(\theta) \, , \qquad (*)
$$
in which $x_{(1)}=\min\{x_1,\dots,x_n\}$. The way the density is defined implies that there is no MLE in the usual sense, because the candidate $x_{(1)}\neq\arg\max_\theta L_x(\theta)$. In fact, $L_x(x_{(1)})=0$. If, for each $\theta>0$, we change the version of the density in just one point, and this doesn't change the family of sampling distributions, doing $f(x_i)=(\theta/x_i^2)\,I_{[\theta,\infty)}(x_i)$, then it's true that $\hat{\theta}_{\mathrm{MLE}}=X_{(1)}$.
This is not a serious difficulty, but it's a curious case in which the particular versions of the sampling densities chosen for the problem change the answer. In the second edition of DeGroot's "Probability and Statistics" there is a similar example starting on page 343.
Also, since $\mathrm{E}_\theta[X_i]=\infty$, for every $\theta>0$, asking for an MLE of this quantity doesn't make sense. | Maximum likelihood estimator for $\theta$ and $E[X]$
Let $X_1,\dots,X_n$ be a random sample from density $f(x_i)=(\theta/x_i^2)\,I_{(\theta,\infty)}(x_i)$, for $\theta>0$. Since $I_{(\theta,\infty)}(x_i)=I_{(0, x_i)}(\theta)$, writing $x=(x_1,\dots,x_n) |
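The claim $\hat{\theta}_{\mathrm{MLE}}=X_{(1)}$ is easy to check by simulation (an illustrative sketch, with $\theta$ chosen arbitrarily). Since $F(x)=1-\theta/x$ on $(\theta,\infty)$, the inverse CDF is $x=\theta/(1-u)$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0   # true value, chosen arbitrarily

def sample(n):
    # inverse-CDF sampling: F(x) = 1 - theta/x on (theta, inf) => x = theta/(1 - u)
    return theta / (1.0 - rng.uniform(size=n))

x = sample(100_000)
print(x.min())    # the MLE X_(1): just above theta
print(x.mean())   # meaningless as an estimate: E[X] is infinite, so this never settles down
```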
35,752 | Maximum likelihood estimator for $\theta$ and $E[X]$ | One cannot provide an estimator for a distribution moment that is not finite. Here we have a case of a distribution that does not have a finite expected value. The question that usually comes next is "then what does the sample mean from an i.i.d. sample estimate in such a case?" The answer is that the sample mean is a linear function of the random variables of the sample, and it, too, has no finite expected value. So it is not an estimator of the expected value of the random variable.
In such cases, we look for other centrality measures, like for example the median. Here we have
$$F_X(x) = \int_\theta^{x}\frac {\theta}{t^2}dt = 1-\frac {\theta}{x}$$
and denoting the median by $m$ we get
$$F_X(m) = \frac 12 \Rightarrow 1-\frac {\theta}{m} = \frac 12 \Rightarrow m=2\theta$$
Therefore an MLE for a centrality measure of this distribution is
$$\hat m_{MLE} = 2\hat \theta_{MLE} = 2X_{(1)}$$ | Maximum likelihood estimator for $\theta$ and $E[X]$ | One cannot provide an estimator for a distribution moment that is not finite. Here we have a case of a distribution that does not have a finite expected value. The question that usually comes next is | Maximum likelihood estimator for $\theta$ and $E[X]$
One cannot provide an estimator for a distribution moment that is not finite. Here we have a case of a distribution that does not have a finite expected value. The question that usually comes next is "then what does the sample mean from an i.i.d. sample estimate in such a case?" The answer is that the sample mean is a linear function of the random variables of the sample, and it, too, has no finite expected value. So it is not an estimator of the expected value of the random variable.
In such cases, we look for other centrality measures, like for example the median. Here we have
$$F_X(x) = \int_\theta^{x}\frac {\theta}{t^2}dt = 1-\frac {\theta}{x}$$
and denoting the median by $m$ we get
$$F_X(m) = \frac 12 \Rightarrow 1-\frac {\theta}{m} = \frac 12 \Rightarrow m=2\theta$$
Therefore an MLE for a centrality measure of this distribution is
$$\hat m_{MLE} = 2\hat \theta_{MLE} = 2X_{(1)}$$ | Maximum likelihood estimator for $\theta$ and $E[X]$
One cannot provide an estimator for a distribution moment that is not finite. Here we have a case of a distribution that does not have a finite expected value. The question that usually comes next is |
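A quick simulation of the median result above (illustrative only; $\theta$ is chosen arbitrarily, and draws use the inverse CDF $x=\theta/(1-u)$):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2.0   # true value, chosen arbitrarily
x = theta / (1.0 - rng.uniform(size=100_000))   # draws via the inverse CDF F(x) = 1 - theta/x

print(np.median(x))   # close to the true median m = 2*theta = 4
print(2 * x.min())    # the plug-in MLE 2*X_(1), also close to 4
```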
35,753 | Why is power analysis with logistic regression so liberal compared to chi squared? | The two tests (logistic regression and chi-square) are equivalent and a power analysis should give the same answer.
You are assuming that a value of 0.15 for f2 and for w represents the same effect size; it does not. A small value of w is 0.1, a small value of f2 is 0.02.
cohen.ES(test=c("chisq"), size=c("small"))
cohen.ES(test=c("f2"), size=c("small"))
Edit: Elaborated on the similarity of the two approaches.
IF you give the same data to logistic regression and a chi-square test (strictly: without Yates' correction), you get the same result. Here's an example
> set.seed(1234)
> x <- rbinom(100, 1, 0.2)
> y <- rbinom(100, 1, 0.2)
> chisq.test(table(x, y), correct=FALSE)
Pearson's Chi-squared test
data: table(x, y)
X-squared = 0.155, df = 1, p-value = **0.694**
Warning message:
In chisq.test(table(x, y), correct = FALSE) :
Chi-squared approximation may be incorrect
> summary(glm(y ~ x, family="binomial"))
Call:
glm(formula = y ~ x, family = "binomial")
Deviance Residuals:
Min 1Q Median 3Q Max
-0.753 -0.753 -0.753 -0.668 1.794
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.114 0.251 -4.43 9.4e-06 ***
x -0.272 0.693 -0.39 **0.69**
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 110.22 on 99 degrees of freedom
Residual deviance: 110.06 on 98 degrees of freedom
AIC: 114.1
Number of Fisher Scoring iterations: 4
The p-values are the same, so the power should be the same. I can't remember the formulas for the two different versions of the effect size. Effect size measures are a little weird because in the old days you wanted to minimize the number of tables that you put into books (so we have, for example, $f^2$ instead of $R^2$, when there's a direct relationship between them, and $R^2$ is what everyone understands). | Why is power analysis with logistic regression so liberal compared to chi squared? | The two tests (logistic regression and chi-square) are equivalent and a power analysis should give the same answer.
You are assuming that a value of 0.15 for f2 and w are the same effect size, they're | Why is power analysis with logistic regression so liberal compared to chi squared?
The two tests (logistic regression and chi-square) are equivalent and a power analysis should give the same answer.
You are assuming that a value of 0.15 for f2 and for w represents the same effect size; it does not. A small value of w is 0.1, a small value of f2 is 0.02.
cohen.ES(test=c("chisq"), size=c("small"))
cohen.ES(test=c("f2"), size=c("small"))
Edit: Elaborated on the similarity of the two approaches.
IF you give the same data to logistic regression and a chi-square test (strictly: without Yates' correction), you get the same result. Here's an example
> set.seed(1234)
> x <- rbinom(100, 1, 0.2)
> y <- rbinom(100, 1, 0.2)
> chisq.test(table(x, y), correct=FALSE)
Pearson's Chi-squared test
data: table(x, y)
X-squared = 0.155, df = 1, p-value = **0.694**
Warning message:
In chisq.test(table(x, y), correct = FALSE) :
Chi-squared approximation may be incorrect
> summary(glm(y ~ x, family="binomial"))
Call:
glm(formula = y ~ x, family = "binomial")
Deviance Residuals:
Min 1Q Median 3Q Max
-0.753 -0.753 -0.753 -0.668 1.794
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.114 0.251 -4.43 9.4e-06 ***
x -0.272 0.693 -0.39 **0.69**
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 110.22 on 99 degrees of freedom
Residual deviance: 110.06 on 98 degrees of freedom
AIC: 114.1
Number of Fisher Scoring iterations: 4
The p-values are the same, so the power should be the same. I can't remember the formulas for the two different versions of the effect size. Effect size measures are a little weird because in the old days you wanted to minimize the number of tables that you put into books (so we have, for example, $f^2$ instead of $R^2$, when there's a direct relationship between them, and $R^2$ is what everyone understands). | Why is power analysis with logistic regression so liberal compared to chi squared?
The two tests (logistic regression and chi-square) are equivalent and a power analysis should give the same answer.
You are assuming that a value of 0.15 for f2 and w are the same effect size, they're |
35,754 | Difference between random effect and random intercept model | A random intercept model estimates separate intercepts for each unit of each level at which the intercept is permitted to vary. This is one kind of random effect model. Another kind of random effect model also includes random slopes, and estimates separate slopes (i.e. coefficients, betas, effects, etc. depending on your discipline) for each variable for each unit of each level at which that slope is permitted to vary.
It's a citation from epidemiology, not economics, but it is well written as an introduction to these kinds of models (including the "why would we care" bits): Duncan, C., Jones, K., and Moon, G. (1998). Context, composition and heterogeneity: Using multilevel models in health research. Social Science & Medicine, 46(1):97–117. | Difference between random effect and random intercept model | A random intercept model estimates separate intercepts for each unit of each level at which the intercept is permitted to vary. This is one kind of random effect model. Another kind of random effect m | Difference between random effect and random intercept model
A random intercept model estimates separate intercepts for each unit of each level at which the intercept is permitted to vary. This is one kind of random effect model. Another kind of random effect model also includes random slopes, and estimates separate slopes (i.e. coefficients, betas, effects, etc. depending on your discipline) for each variable for each unit of each level at which that slope is permitted to vary.
It's a citation from epidemiology, not economics, but it is well written as an introduction to these kinds of models (including the "why would we care" bits): Duncan, C., Jones, K., and Moon, G. (1998). Context, composition and heterogeneity: Using multilevel models in health research. Social Science & Medicine, 46(1):97–117. | Difference between random effect and random intercept model
A random intercept model estimates separate intercepts for each unit of each level at which the intercept is permitted to vary. This is one kind of random effect model. Another kind of random effect m |
35,755 | Is p-value also the false discovery rate? | Your false discovery rate not only depends on the p-value threshold, but also on the truth. In fact, if your null hypothesis is in reality wrong it is impossible for you to make a false discovery.
Maybe it's helpful to think of it like that: the p-value threshold is the probability of making false discoveries when there are no true discoveries to be made (or, to put it differently, if the null hypothesis is true).
Basically,
Type 1 Error Rate = "Probability of rejecting the null if it's true" = p-value threshold
and
Type 1 Error Rate = False Discovery Rate IF the null hypothesis is true
is correct, but note the conditional on the true null. The false discovery rate does not have this conditional and thereby depends on the unknown truth of how many of your null hypotheses are actually correct or not.
It's also worthwhile to consider that when you control the false discovery rate using a procedure like Benjamini-Hochberg you are never able to estimate the actual false discovery rate; instead, you control it by estimating an upper bound. To do more you would actually need to be able to detect that the null hypothesis is true using statistics, when you can only detect violations of a certain magnitude (depending on the power of your test). | Is p-value also the false discovery rate? | Your false discovery rate not only depends on the p-value threshold, but also on the truth. In fact, if your null hypothesis is in reality wrong it is impossible for you to make a false discovery.
May | Is p-value also the false discovery rate?
Your false discovery rate not only depends on the p-value threshold, but also on the truth. In fact, if your null hypothesis is in reality wrong it is impossible for you to make a false discovery.
Maybe it's helpful to think of it like that: the p-value threshold is the probability of making false discoveries when there are no true discoveries to be made (or, to put it differently, if the null hypothesis is true).
Basically,
Type 1 Error Rate = "Probability of rejecting the null if it's true" = p-value threshold
and
Type 1 Error Rate = False Discovery Rate IF the null hypothesis is true
is correct, but note the conditional on the true null. The false discovery rate does not have this conditional and thereby depends on the unknown truth of how many of your null hypotheses are actually correct or not.
It's also worthwhile to consider that when you control the false discovery rate using a procedure like Benjamini-Hochberg you are never able to estimate the actual false discovery rate; instead, you control it by estimating an upper bound. To do more you would actually need to be able to detect that the null hypothesis is true using statistics, when you can only detect violations of a certain magnitude (depending on the power of your test). | Is p-value also the false discovery rate?
Your false discovery rate not only depends on the p-value threshold, but also on the truth. In fact, if your null hypothesis is in reality wrong it is impossible for you to make a false discovery.
May |
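The point that the same p-value threshold yields very different false discovery rates, depending on how many nulls are actually true, can be illustrated with a toy z-test simulation (all numbers invented; unit variance is assumed known for simplicity):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
n, alpha = 16, 0.05            # per-study sample size and significance threshold

def two_sided_p(z):
    # two-sided p-value of a z statistic (known sigma = 1)
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

def false_discovery_rate(n_tests, frac_true_nulls, effect=0.5):
    null = rng.uniform(size=n_tests) < frac_true_nulls       # which nulls are true
    xbar = rng.normal(np.where(null, 0.0, effect), 1.0 / sqrt(n))
    p = np.array([two_sided_p(m * sqrt(n)) for m in xbar])
    disc = p < alpha
    return (null & disc).sum() / max(disc.sum(), 1)

print(false_discovery_rate(20_000, 0.5))   # modest: some alternatives are real
print(false_discovery_rate(20_000, 1.0))   # 1.0: all nulls true, so every discovery is false
```

The threshold alpha is the same in both calls; only the unknown truth (the fraction of true nulls) changes.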
35,756 | Is p-value also the false discovery rate? | The difference between P values and false positive rate (or false discovery rate) is explained, clearly I hope, in http://rsos.royalsocietypublishing.org/content/1/3/140216
Although that paper uses the term False Discovery Rate, I now prefer False Positive Rate, becuse the former term is often used in the context of corrections for multiple comparisons. That's a different problem. The paper points out that for a single unbiased test, the false positive rate is a good deal higher than the P value under almost all circumstances.
There is also a qualitative description of the underlying logic at https://aeon.co/essays/it-s-time-for-science-to-abandon-the-term-statistically-significant | Is p-value also the false discovery rate? | The difference between P values and false positive rate (or false discovery rate) is explained, clearly I hope, in http://rsos.royalsocietypublishing.org/content/1/3/140216
Although that paper uses th | Is p-value also the false discovery rate?
The difference between P values and false positive rate (or false discovery rate) is explained, clearly I hope, in http://rsos.royalsocietypublishing.org/content/1/3/140216
Although that paper uses the term False Discovery Rate, I now prefer False Positive Rate, becuse the former term is often used in the context of corrections for multiple comparisons. That's a different problem. The paper points out that for a single unbiased test, the false positive rate is a good deal higher than the P value under almost all circumstances.
There is also a qualitative description of the underlying logic at https://aeon.co/essays/it-s-time-for-science-to-abandon-the-term-statistically-significant | Is p-value also the false discovery rate?
The difference between P values and false positive rate (or false discovery rate) is explained, clearly I hope, in http://rsos.royalsocietypublishing.org/content/1/3/140216
Although that paper uses th |
35,757 | Missing at Random Data in GEE | ML estimation based on complete cases is not considered efficient and can be horribly biased. Likelihood-based complete case estimation is consistent in general only if the data is MCAR. If data is MAR then you can use something like EM or data augmentation to get efficient likelihood-based estimates. The appropriate likelihood for maximum likelihood estimation is the observed-data likelihood, obtained by integrating the joint density over the missing data:
$$
\ell(\theta \mid Y_{obs}, X) = \log \int p(Y \mid X, \theta) \, dY_{mis}
$$
where $Y$ is the response and $X$ the relevant covariates.
GEE estimation is biased under MAR, just like complete-case ML estimation is biased.
People don't use usual GEE estimation for these problems because they are both inconsistent and inefficient. An easy fix-up for the consistency problem, under MAR, is to weight the estimating equations by their inverse-probability of being observed to get so-called IPW estimates. That is, solve
$$
\sum_{i=1}^N \frac{I(Y_i \mbox{ is complete})\varphi(Y_i;X_i, \theta)}{\pi(Y_i;X_i, \theta)} = 0,
$$
where $\sum_i \varphi(Y_i; X_i, \theta)=0$ is your usual estimating equation and $\pi(Y;X,\theta)$ is the probability of being completely observed giving the covariates and the data. Incidentally, this violates the likelihood principle and requires estimating the dropout mechanism even if the missingness is ignorable, and can also greatly inflate the variance of estimates. This is still not efficient because it ignores observations where we have partial data. The state of the art estimating equations are doubly-robust estimates which are consistent if either the response model or dropout model are correctly specified and are essentially missing-data-appropriate versions of GEEs. Additionally, they may enjoy an efficiency property called local-semiparametric efficiency which means they attain semiparametric efficiency if everything is correctly specified. See, for example, this book.
Estimating equations which are consistent and efficient essentially all require weighting by the inverse probability of being observed. EDIT: I mean this for semiparametric consistency rather than consistency under a parametric model.
You should also note that typically in longitudinal studies with attrition the dropout can depend both on measured covariates but also on the response at times you didn't observe, so you can't just say "I collected everything I think to be associated with dropout" and say you have MAR. MAR is a genuine assumption about how the world works, and it cannot be checked from the data. If two people with the same response history and same covariates are on study and one drops out and one does not, MAR essentially states that you can use the guy who stayed on to learn the distribution of the guy who dropped out, and this is a very strong assumption. In longitudinal studies, the consensus among experts is that an analysis of sensitivity to the MAR assumption is ideal, but I don't think this has made it into the software world yet.
Unfortunately, I'm not aware of any software for doing doubly robust estimation, but likelihood-based estimation is easy (IMO the easiest thing to do is use Bayesian software for fitting, but there is also lots of software out there). You can also do inverse probability weighting easily, but it has stability issues. | Missing at Random Data in GEE | ML estimation based on complete cases is not considered efficient and can be horribly biased. Likelihood-based complete case estimation is consistent in general only if the data is MCAR. If data is MA | Missing at Random Data in GEE
ML estimation based on complete cases is not considered efficient and can be horribly biased. Likelihood-based complete case estimation is consistent in general only if the data is MCAR. If data is MAR then you can use something like EM or data augmentation to get efficient likelihood-based estimates. The appropriate likelihood for maximum likelihood estimation is the observed-data likelihood, obtained by integrating the joint density over the missing data:
$$
\ell(\theta \mid Y_{obs}, X) = \log \int p(Y \mid X, \theta) \, dY_{mis}
$$
where $Y$ is the response and $X$ the relevant covariates.
GEE estimation is biased under MAR, just like complete-case ML estimation is biased.
People don't use usual GEE estimation for these problems because they are both inconsistent and inefficient. An easy fix-up for the consistency problem, under MAR, is to weight the estimating equations by their inverse-probability of being observed to get so-called IPW estimates. That is, solve
$$
\sum_{i=1}^N \frac{I(Y_i \mbox{ is complete})\varphi(Y_i;X_i, \theta)}{\pi(Y_i;X_i, \theta)} = 0,
$$
where $\sum_i \varphi(Y_i; X_i, \theta)=0$ is your usual estimating equation and $\pi(Y;X,\theta)$ is the probability of being completely observed giving the covariates and the data. Incidentally, this violates the likelihood principle and requires estimating the dropout mechanism even if the missingness is ignorable, and can also greatly inflate the variance of estimates. This is still not efficient because it ignores observations where we have partial data. The state of the art estimating equations are doubly-robust estimates which are consistent if either the response model or dropout model are correctly specified and are essentially missing-data-appropriate versions of GEEs. Additionally, they may enjoy an efficiency property called local-semiparametric efficiency which means they attain semiparametric efficiency if everything is correctly specified. See, for example, this book.
Estimating equations which are consistent and efficient essentially all require weighting by the inverse probability of being observed. EDIT: I mean this for semiparametric consistency rather than consistency under a parametric model.
You should also note that typically in longitudinal studies with attrition the dropout can depend both on measured covariates but also on the response at times you didn't observe, so you can't just say "I collected everything I think to be associated with dropout" and say you have MAR. MAR is a genuine assumption about how the world works, and it cannot be checked from the data. If two people with the same response history and same covariates are on study and one drops out and one does not, MAR essentially states that you can use the guy who stayed on to learn the distribution of the guy who dropped out, and this is a very strong assumption. In longitudinal studies, the consensus among experts is that an analysis of sensitivity to the MAR assumption is ideal, but I don't think this has made it into the software world yet.
Unfortunately, I'm not aware of any software for doing doubly robust estimation, but likelihood-based estimation is easy (IMO the easiest thing to do is use Bayesian software for fitting, but there is also lots of software out there). You can also do inverse probability weighting easily, but it has stability issues. | Missing at Random Data in GEE
ML estimation based on complete cases is not considered efficient and can be horribly biased. Likelihood-based complete case estimation is consistent in general only if the data is MCAR. If data is MA |
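The inverse-probability-weighting idea behind the estimating equation above can be illustrated on the simplest possible target, a mean, with the observation probability taken as known rather than estimated (a toy setup of my own, not the answer's):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

x = rng.normal(size=n)                     # always-observed covariate
y = x + rng.normal(size=n)                 # outcome, true E[Y] = 0
pi = 1.0 / (1.0 + np.exp(-x))              # P(Y observed | X): depends on X only (MAR)
r = rng.uniform(size=n) < pi               # observation indicators

complete_case = y[r].mean()                # biased: large-x (hence large-y) units over-represented
ipw = np.sum(y[r] / pi[r]) / np.sum(1.0 / pi[r])   # Hajek-style weighted estimate

print(complete_case)   # noticeably above the true mean 0
print(ipw)             # close to the true mean 0
```

In real problems pi must itself be estimated (e.g. by a dropout model), which is where the stability issues the answer mentions come from.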
35,758 | Missing at Random Data in GEE | The OP's question, if I am understanding it correctly, is one that still nags me. I'll add my intuition in hopes that others will chime in and provide a resolution. I realize this post is 8 years old, but I don't think it was answered satisfactorily.
It seems guy overlooked your earlier assumption concerning the factorization of the likelihood leading to an ignorable missing data mechanism. In this setting, the ML estimation is not fully efficient, but it is still consistent (asymptotically unbiased).
When we fit a GEE model we can conceptualize a full likelihood, even if we do not write it down explicitly. It is simply unspecified. In that case, we can conceptualize the likelihood factoring, leading to an ignorable missing data mechanism. We can estimate the parameters in the factored portion of the likelihood (complete cases) using maximum likelihood estimation or method of moments (GEE). Viewed this way, I do not understand the claim that one must necessarily assume MCAR when using GEE. In this setting I simply state an ignorable missing data assumption, be it as it may.
Nevertheless, lots of people repeat the GEE MCAR mantra. I think the mantra might simply be the result of not conceptualizing a likelihood and looking to the estimating equations as the starting point. If we say a likelihood "doesn't exist" then there is nothing to factor, and we cannot claim a MAR assumption. If I am wrong, then I'm hoping someone can correct my understanding. Is there a simple counter example where the likelihood factors under a MAR ignorable missing data mechanism, yet GEE parameter estimation on complete cases is not consistent (biased even asymptotically)? Am I completely misunderstanding the point behind the GEE MCAR mantra?
guy's answer addresses the scenario where our specified outcome model does not simultaneously address the missing data mechanism. This is perfectly okay, we just need to address the missing data first, whether with imputation, IPW, etc., before analyzing the outcome. This is necessary for any modeling technique, MLE, GEE, or otherwise. | Missing at Random Data in GEE | The OP's question, if I am understanding it correctly, is one that still nags me. I'll add my intuition in hopes that others will chime in and provide a resolution. I realize this post is 8 years ol | Missing at Random Data in GEE
The OP's question, if I am understanding it correctly, is one that still nags me. I'll add my intuition in hopes that others will chime in and provide a resolution. I realize this post is 8 years old, but I don't think it was answered satisfactorily.
It seems guy overlooked your earlier assumption concerning the factorization of the likelihood leading to an ignorable missing data mechanism. In this setting, the ML estimation is not fully efficient, but it is still consistent (asymptotically unbiased).
When we fit a GEE model we can conceptualize a full likelihood, even if we do not write it down explicitly. It is simply unspecified. In that case, we can conceptualize the likelihood factoring, leading to an ignorable missing data mechanism. We can estimate the parameters in the factored portion of the likelihood (complete cases) using maximum likelihood estimation or method of moments (GEE). Viewed this way, I do not understand the claim that one must necessarily assume MCAR when using GEE. In this setting I simply state an ignorable missing data assumption, be it as it may.
Nevertheless, lots of people repeat the GEE MCAR mantra. I think the mantra might simply be the result of not conceptualizing a likelihood and looking to the estimating equations as the starting point. If we say a likelihood "doesn't exist" then there is nothing to factor, and we cannot claim a MAR assumption. If I am wrong, then I'm hoping someone can correct my understanding. Is there a simple counter example where the likelihood factors under a MAR ignorable missing data mechanism, yet GEE parameter estimation on complete cases is not consistent (biased even asymptotically)? Am I completely misunderstanding the point behind the GEE MCAR mantra?
guy's answer addresses the scenario where our specified outcome model does not simultaneously address the missing data mechanism. This is perfectly okay, we just need to address the missing data first, whether with imputation, IPW, etc., before analyzing the outcome. This is necessary for any modeling technique, MLE, GEE, or otherwise. | Missing at Random Data in GEE
The OP's question, if I am understanding it correctly, is one that still nags me. I'll add my intuition in hopes that others will chime in and provide a resolution. I realize this post is 8 years ol |
35,759 | Check if sample is representative of a larger sample | You could test whether several statistics that are descriptive of a distribution are the same in the subsample and the remaining sample. For example you could conduct tests for:
mean difference
median difference
stochastic dominance
different variance
shape
While you are at it, since you are interested in similarity, I would also explore tests for equivalence of all such measures (for example, using tost), probably combining inferences from difference and equivalence tests.
Something else you may want to consider: why are you interested in this similarity? The answer to this question may help you decide which, if any, such tests you may like to explore. For example, if you sample size is smallish, you may not have enough power for the Kolmogorov–Smirnov test mentioned by soakley, although you might still have power enough to make inferences about, say, the sample mean. If you are only interested in comparing sample means, that may be OK for your purposes. | Check if sample is representative of a larger sample | You could test whether several statistics that are descriptive of a distribution are the same in the subsample and the remaining sample. For example you could conduct tests for:
mean difference
media | Check if sample is representative of a larger sample
You could test whether several statistics that are descriptive of a distribution are the same in the subsample and the remaining sample. For example you could conduct tests for:
mean difference
median difference
stochastic dominance
different variance
shape
While you are at it, since you are interested in similarity, I would also explore tests for equivalence of all such measures (for example, using tost), probably combining inferences from difference and equivalence tests.
Something else you may want to consider: why are you interested in this similarity? The answer to this question may help you decide which, if any, such tests you may like to explore. For example, if you sample size is smallish, you may not have enough power for the Kolmogorov–Smirnov test mentioned by soakley, although you might still have power enough to make inferences about, say, the sample mean. If you are only interested in comparing sample means, that may be OK for your purposes. | Check if sample is representative of a larger sample
You could test whether several statistics that are descriptive of a distribution are the same in the subsample and the remaining sample. For example you could conduct tests for:
mean difference
media |
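The equivalence-testing idea mentioned above (TOST) can be sketched for the mean difference. This is a large-sample z version with invented data and an arbitrary equivalence bound; for small samples, t quantiles should replace the normal ones:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def tost_mean_p(a, b, delta):
    # Two one-sided z-tests for |mean(a) - mean(b)| < delta.
    se = sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    d = a.mean() - b.mean()
    p_lower = 1.0 - norm_cdf((d + delta) / se)   # H0: difference <= -delta
    p_upper = norm_cdf((d - delta) / se)         # H0: difference >= +delta
    return max(p_lower, p_upper)                 # small => equivalent within +/- delta

rng = np.random.default_rng(6)
sub = rng.normal(0.0, 1.0, size=2000)            # the subsample
rest = rng.normal(0.0, 1.0, size=2000)           # the remaining sample

print(tost_mean_p(sub, rest, delta=0.2))         # small: means equivalent within the bound
print(tost_mean_p(sub + 0.5, rest, delta=0.2))   # near 1: equivalence not supported
```

Rejecting both one-sided nulls (max p below alpha) is what lets you claim similarity, rather than merely failing to detect a difference.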
35,760 | Check if sample is representative of a larger sample | Since you want to compare the entire distributions, I'd recommend the two sample Kolmogorov-Smirnov test.
More information can be found here:
http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
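A minimal illustration of what the two-sample statistic measures: it is the largest vertical gap between the two empirical CDFs. This is a from-scratch Python sketch, not a replacement for a library routine (it omits the p-value computation):

```python
import bisect

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic D: the maximum vertical
    distance between the two empirical CDFs, evaluated at the data points."""
    xs, ys = sorted(x), sorted(y)
    def ecdf(sorted_vals, t):
        # proportion of values <= t
        return bisect.bisect_right(sorted_vals, t) / len(sorted_vals)
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in sorted(set(xs + ys)))

d_same = ks_statistic([1, 2, 3, 4], [1, 2, 3, 4])   # identical samples
d_apart = ks_statistic([1, 2, 3], [10, 11, 12])     # disjoint samples
print(d_same, d_apart)  # 0.0 1.0
```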
35,761 | Understanding the minimization of mean squared error function | Concerning your first question, adding and subtracting is a trick in statistics which is often used to more easily work with certain expressions. By adding and subtracting you do not change your equation but it makes it possible to group certain terms to obtain the result more easily.
For your second question, to make this point more formally, we want to show the conditional expectation function (CEF) prediction property:
$$E(Y|X) = \text{arg min}_{f(X)} E[(Y-f(X))^2]$$
I guess that not stating that $f(X)$ is the minimization argument in the question caused confusion for some. The CEF also has the following decomposition property:
$$Y=E(Y|X) + \epsilon $$
where $\epsilon $ is a random variable such that $E(\epsilon|X) =0$ and $E(h(X)\epsilon)=0$.
In your last expression you have $(Y-E(Y|X)) = \epsilon$ and $(f(X)-E(Y|X)) = h(X)$ is a function of $X$. Then you use the previous property of $\epsilon$ to show that $-2E[h(X)\epsilon]=0$, hence the last expression is zero. This proof goes by using properties of the CEF rather than anything unnecessarily complicated - so it's plain English for most parts.
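A small numeric check of the prediction property, using a toy discrete joint distribution (the distribution and the perturbations are made up purely for illustration): the conditional mean attains the smallest expected squared error among the candidates.

```python
# Toy joint distribution of (X, Y): {(x, y): probability}.
joint = {(0, 0): 0.2, (0, 2): 0.2, (1, 1): 0.3, (1, 5): 0.3}

def cond_mean(x):
    """E(Y | X = x) under the toy joint distribution."""
    px = sum(p for (xx, _), p in joint.items() if xx == x)
    return sum(y * p for (xx, y), p in joint.items() if xx == x) / px

def mse(f):
    """E[(Y - f(X))^2] under the joint distribution."""
    return sum(p * (y - f(x)) ** 2 for (x, y), p in joint.items())

best = mse(cond_mean)  # f(X) = E(Y|X)
# Perturbing the conditional mean by a constant c adds exactly c^2,
# because the cross term has expectation zero.
others = [mse(lambda x, c=c: cond_mean(x) + c) for c in (-1, -0.5, 0.5, 1)]
print(round(best, 10), [round(m, 10) for m in others])  # 2.8 [3.8, 3.05, 3.05, 3.8]
```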
35,762 | Understanding the minimization of mean squared error function | For your second question, you want to show that
$$E\left[ (Y-E(Y|X))(f(X)-E(Y|X))\right]=0.$$
Now, if we look at the first term of the product, if we didn't have a conditional expectation, we would have
$$E(Y-E(Y))=E(Y)-E(Y)=0.$$
But by the Law of Total expectation, we know that
$$E(W)=E(E(W|Z)),$$
so you can actually write
$$E(Y-E(Y|X)) = E(E(Y-E(Y|X)|X)) = E(E(Y|X)-E(Y|X)) =E(0)=0.$$
To finish the proof, note that conditional on $X$, the second term is a constant, and therefore the expectation of the product is the product of the expectations:
$$E\left[ (Y-E(Y|X))(f(X)-E(Y|X))|X\right]=E\left[ (Y-E(Y|X))|X\right]\cdot E\left[(f(X)-E(Y|X))|X\right]$$
The first factor on the right is zero by the previous display, so the conditional expectation of the product is zero, and taking expectations over $X$ (the law of total expectation again) gives the unconditional result.
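The two facts used in the proof, $E(Y-E(Y|X))=0$ and the vanishing cross term, can be verified numerically on a toy discrete distribution (the values and the alternative predictor $f$ are chosen only for illustration):

```python
# Toy joint distribution of (X, Y): {(x, y): probability}.
joint = {(0, 0): 0.2, (0, 2): 0.2, (1, 1): 0.3, (1, 5): 0.3}

def cond_mean(x):
    """E(Y | X = x) under the toy joint distribution."""
    px = sum(p for (xx, _), p in joint.items() if xx == x)
    return sum(y * p for (xx, y), p in joint.items() if xx == x) / px

def expect(g):
    """E[g(X, Y)] under the joint distribution."""
    return sum(p * g(x, y) for (x, y), p in joint.items())

f = lambda x: 10 * x - 3  # an arbitrary alternative predictor f(X)

first = expect(lambda x, y: y - cond_mean(x))
cross = expect(lambda x, y: (y - cond_mean(x)) * (f(x) - cond_mean(x)))
print(first, cross)  # both zero, up to floating point
```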
35,763 | Order of items in a questionnaire | Order certainly can matter. As a simple example, suppose I have two items on my questionnaire, which I could give in two different orders.
(a) Rate your happiness (Very Unhappy, Unhappy, Neutral, Happy, Very Happy)
(b) Are you in a relationship with another person (Yes, No)
If I ask question (b) first, it will likely influence the answer to question (a).
Based on the information provided, it is difficult to provide an answer to "which item should go first, second, third, etc." but here is a link to some general guidelines for question order:
http://www.people-press.org/methodology/questionnaire-design/question-order/
35,764 | Order of items in a questionnaire | It depends on the nature of the questionnaire. For instance, I've read that the first few items in lists that participants rank (e.g., from most to least favorite/true/important/etc.) tend to be ranked higher, presumably due to serial position effects. However, I'm not sure where I read that, and in searching for the reference, I instead found a contradictory result regarding Rokeach's Value Survey, which is a ranking-based assessment. Greenstein and Bennett (1974) randomized item ordering, and reported:
The amount of bias created by presentation order was sufficiently small as to suggest that order effects are not a problem in the instrument.
Of course, other questionnaires may be much more sensitive, but no one general answer applies. Randomization of presentation order across participants is a good way to balance any bias for nomothetic research though.
Reference
Greenstein, T., & Bennett, R. R. (1974). Order effects in Rokeach's Value Survey. Journal of Research in Personality, 8(4), 393–396.
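The randomization of presentation order recommended above can be sketched as a seeded per-participant shuffle (item names are illustrative; any real survey platform will have its own mechanism for this):

```python
import random

items = ["happiness rating", "relationship status", "income", "age"]

def presentation_order(participant_id, items):
    """Independently shuffle item order for each participant, reproducibly,
    by seeding the generator on the participant id."""
    rng = random.Random(participant_id)
    order = list(items)
    rng.shuffle(order)
    return order

orders = [presentation_order(pid, items) for pid in range(5)]
for order in orders:
    print(order)
```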
35,765 | Interpret clustering plotted in the first two principal components | Principal components are combinations of features. For example, say you have demographic data consisting of three features: height, weight and income. Then, if height and weight are highly correlated, it might be useful to combine them into a single feature. Principal component analysis (PCA) does this via a weighted linear combination, so you may end up with a feature: 0.5*Height + 0.5* Weight. Now we may find that income is completely independent of height and weight, so Income might be the second principal component discovered.
So say PCA gave us two components: Prin1=0.5*Height+0.5*Weight and Prin2=Income. You can map every point in your dataset to a 2D plot with these two dimensions, and it might look something like what you have above.
PCA tries to find combinations of features that lead to maximum separation between data points. What this means is that, if you had another dimension, say age, in your dataset, which was the same for all members, then that would not be considered, alone or in combination, among the top principal components. Only the features that vary a lot from data point to data point form a part of the top principal components. As a result, the points should appear to be quite far apart from each other on the plot.
What happens when you cluster the dataset? Depends on the input feature space and clustering algorithm used. If you use the two components found by your PCA analysis above as input features to your clustering algorithm, a decent clustering algorithm should put points that are close together on your 2D plot into the same cluster. This should happen irrespective of the number of clusters found. If your clustering found more than five clusters, they should still consist of points relatively close together on the 2D plot. So you might get five different circles, somewhat separated from each other.
The plot of PCA vs. clusters may not make sense under a couple of conditions:
a) There is a lot of variance in the dataset along each dimension, so looking at the top 2-3 dimensions does not really give you much information.
b) The clustering algorithm for some reason focuses on features considered unimportant by PCA. Given the variety of clustering algorithms out there, this could happen.
This is a very high-level view of PCA. Take a look at this tutorial for an excellent accessible introduction.
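A small self-contained sketch of the "combine correlated features" idea above: for two standardized features, the 2x2 correlation matrix [[1, r], [r, 1]] has its first principal component along the equally weighted direction, with eigenvalues 1 + r and 1 - r (data simulated for illustration):

```python
import math
import random

rng = random.Random(0)
height = [rng.gauss(170, 10) for _ in range(500)]
weight = [0.5 * h + rng.gauss(0, 4) for h in height]  # correlated with height

def standardize(v):
    m = sum(v) / len(v)
    s = math.sqrt(sum((x - m) ** 2 for x in v) / (len(v) - 1))
    return [(x - m) / s for x in v]

h, w = standardize(height), standardize(weight)
r = sum(a * b for a, b in zip(h, w)) / (len(h) - 1)  # sample correlation

# For the 2x2 correlation matrix [[1, r], [r, 1]], the eigenvalues are
# 1 + r and 1 - r, and the first PC is the equally weighted direction
# (1, 1)/sqrt(2): an equal-weight blend of the standardized features.
eigenvalues = (1 + r, 1 - r)
pc1_scores = [(a + b) / math.sqrt(2) for a, b in zip(h, w)]
print(round(r, 3), [round(e, 3) for e in eigenvalues])
```

The stronger the correlation, the more variance the first component captures relative to the second.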
35,766 | What is wrong with this chi-squared calculation? | Your R code thinks you have a 2x2 contingency table for the chi-squared test, whereas your 'by hand' version treats two of your values as the expected values with which to compare the first two values. You need to decide which setup is correct and use it consistently both times.
Here is your R version:
> test <- matrix(c(4203, 4218, 786, 771), ncol=2)
> dimnames(test) <- list(group = c("control","exp"), click = c("n","y"))
> print(test)
click
group n y
control 4203 786
exp 4218 771
> print(Xsq <- chisq.test(test, correct=F))
Pearson's Chi-squared test
data: test
X-squared = 0.1712, df = 1, p-value = 0.679
This is how you do it 'by hand':
\begin{array}{lrrr}
& &\text{click} & \\
\rm group &\rm n &\rm y &\rm proportion \\
{\rm control} &4203 &786 &0.50 \\
{\rm exp} &4218 &771 &0.50 \\
\rm proportion &0.844 &0.156
\end{array}
Notice that I added the row and column proportions. These are taken as estimates of the probability that an observation will fall in each row (column). The expected count in each cell under the assumption of independence is the row probability times the column probability times the total count. For your data, that gives:
\begin{array}{lrr}
&\text{click} & \\
\rm group &\rm n &\rm y \\
{\rm control} &4210.5 &778.5 \\
{\rm exp} &4210.5 &778.5 \\
\end{array}
Thus the calculation is:
$$
\frac{(4203-4210.5)^2}{4210.5} + \frac{(4218-4210.5)^2}{4210.5} + \frac{(786-778.5)^2}{778.5} + \frac{(771-778.5)^2}{778.5} = 0.1712,
$$
which is the same as what R gave.
If you take the control row counts as the expected counts, rather than the counts for another condition, you would have the following for your 'by hand' calculation:
$$
\frac{(4218-4203)^2}{4203} + \frac{(771 - 786)^2}{786} = 0.3398
$$
You can also run this version in R like so:
> probs <- test[1,]/sum(test[1,])
> probs
n y
0.8424534 0.1575466
> chisq.test(test[2,], correct=F, p=probs)
Chi-squared test for given probabilities
data: test[2, ]
X-squared = 0.3398, df = 1, p-value = 0.5599
The key is that you specify the p argument with the expected probabilities (R will take care of calculating the expected counts). At any rate, you can see that the $\chi^2$ values are once again the same.
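The same arithmetic can be reproduced in plain Python (an illustrative re-implementation of both calculations, not a call to R's chisq.test):

```python
# Observed 2x2 table: rows = control/exp, columns = n/y (no click / click).
obs = [[4203, 786],
       [4218, 771]]

row = [sum(r) for r in obs]            # row totals
col = [sum(c) for c in zip(*obs)]      # column totals
n = sum(row)                           # grand total

# Test of independence: expected = row total * column total / grand total.
exp = [[row[i] * col[j] / n for j in range(2)] for i in range(2)]
x2_indep = sum((obs[i][j] - exp[i][j]) ** 2 / exp[i][j]
               for i in range(2) for j in range(2))

# Goodness of fit: treat the control row as giving the expected probabilities.
probs = [obs[0][j] / row[0] for j in range(2)]
exp_gof = [p * row[1] for p in probs]
x2_gof = sum((obs[1][j] - exp_gof[j]) ** 2 / exp_gof[j] for j in range(2))

print(round(x2_indep, 4), round(x2_gof, 4))  # 0.1712 0.3398
```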
35,767 | Star Coordinates vs. principal component analysis | PCA and "star coordinates" do different things. Because star coordinates standardize all the values, a fair comparison would apply PCA to a correlation matrix (rather than the covariance matrix), which is another way of standardizing the values.
PCA identifies a coordinate system adapted to the shape of the data, while star coordinates are based on the given coordinates originally in the data.
This makes PCA far more flexible for uncovering relationships among the data. "Star coordinates" are, in contrast, not a whole lot more than a 2D graphic of univariate information.
PCA (when performed on a correlation matrix) uses the data means for the origin and their standard deviations for scales. Star coordinates use the data minima for the origin and their ranges for scales.
The minima and ranges are far more sensitive to outlying data than standard deviations are, making star coordinates less suitable for general-purpose data exploration.
As such, each has its strengths--although the particular strengths of star coordinates relative to PCA are difficult to fathom.
As an example, consider these two 3D datasets. Each consists of 300 points and in each one the point cloud has a very flat elliptical "pancake" shape. (The singular values of each correlation matrix are close to $\{2, 1, .01\}$.) The top row of the figure presents the correlation matrices, the second row shows a view of the point clouds in pseudo 3D (oriented approximately to capture the two largest principal components), and the bottom row is the "star coordinates" picture of the same points.
Due to the different orientations of these point clouds relative to the original coordinate axes, the star coordinates plots are entirely different. This is characteristic: star coordinates give (very limited) information about the original coordinates while PCA reveals relationships among the coordinates.
You can also see that star coordinates are a kind of "accidental" projection: sometimes they will capture large principal components of the data, as in the left hand version, and sometimes they will capture large and small components (as in the right hand), and at other times (not illustrated) they capture only small components (and all the points are clustered densely near the origin, revealing almost nothing).
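A sketch of the star-coordinates mapping as described above (min/range scaling, evenly spaced axis angles); real implementations may differ in detail:

```python
import math

def star_coordinates(data):
    """Map d-dimensional points to 2D star coordinates: each feature is
    min-max scaled to [0, 1] and contributes along its own axis at angle
    2*pi*j/d; a point's 2D position is the sum of those contributions."""
    d = len(data[0])
    mins = [min(row[j] for row in data) for j in range(d)]
    rngs = [max(row[j] for row in data) - mins[j] or 1.0 for j in range(d)]
    axes = [(math.cos(2 * math.pi * j / d), math.sin(2 * math.pi * j / d))
            for j in range(d)]
    out = []
    for row in data:
        u = [(row[j] - mins[j]) / rngs[j] for j in range(d)]  # in [0, 1]
        out.append((sum(u[j] * axes[j][0] for j in range(d)),
                    sum(u[j] * axes[j][1] for j in range(d))))
    return out

pts = star_coordinates([[0, 0, 0], [1, 0, 2], [2, 4, 6]])
print(pts[0])  # the row at the per-feature minimum maps to the origin
```

Note that with three evenly spaced axes the all-maximum row also lands near the origin, which illustrates how badly this "accidental" projection can collapse distinct points.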
35,768 | Standard error of proportion that takes into account population size | If you're randomly sampling without replacement from a finite population, you're not in a binomial sampling situation, but a hypergeometric one.
When you're in a binomial situation with population proportion $\pi$, the variance of the count, $X$ is $n\pi(1-\pi)$ and so the variance of the sample proportion $p=X/n$ is $n\pi(1-\pi)/n^2=\pi(1-\pi)/n$. This variance of the proportion is then estimated as $p(1-p)/n$.
In the case of sampling $n$ without replacement from a finite population of size $N$, the count has variance $n{K\over N}{\frac{N-K}{N}}{N-n\over N-1}$.
Since $\pi=K/N$ is the population proportion, we might write that variance of the count, $X$ as $n\pi(1-\pi){N-n\over N-1}$.
So the variance of the sample proportion can be written as $\frac{\pi(1-\pi)}{n}\cdot f$ where $f={N-n\over N-1}$.
Since $f<1$, this variance is smaller than in the binomial case (as you suggested).
$f$ is referred to as "the finite population correction" (since you can use it to 'correct' the variance you get from the binomial), but as you see, it's simply the variance from using the correct (i.e. hypergeometric) probability model.
Of course, to correct the standard error rather than the variance, you must take the square root of that factor (i.e., $\sqrt{{N-n\over N-1}}$).
I read in a blog that the formula above should be used when the population is at least 10 times bigger than the sample
I'd say 'should be used' is far too strong. While the binomial formula could be used, the finite population correction factor is always right - but when the sample is a small fraction of the population, the correction factor will be close to 1, so if you leave it out, little harm is done.
what happens if a survey is close to that 1/10 sample
Let's see what happens when the sample is one tenth of the population.
$f=\frac{N-n}{N-1} = \frac{0.9N}{ N-1} \approx 0.9$
Hence the correction to the standard error is about $\sqrt{0.9}$ which is about $0.95.$ If you ignore it, your standard error will be about $5.4\%$ too large.
It's up to you to figure out if that amount of inaccuracy in the standard error is acceptable or not.
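A small computation of the numbers above, assuming a sample that is one tenth of the population (the values of $p$, $n$ and $N$ are illustrative):

```python
import math

def se_proportion(p, n, N=None):
    """Standard error of a sample proportion; if the population size N is
    given, apply the finite population correction sqrt((N - n)/(N - 1))."""
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))
    return se

p, n, N = 0.3, 100, 1000           # sample is 1/10 of the population
plain = se_proportion(p, n)
corrected = se_proportion(p, n, N)
inflation = plain / corrected       # ~1.054: about 5.4% too large if ignored
print(round(plain, 4), round(corrected, 4), round(inflation, 3))
```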
35,769 | How to compare dbscan clusters / choose epsilon parameter | Inertia is only a sensible measure for spherical clusters. I.e. not for DBSCAN.
Similar reasoning applies to most internal measures: most are designed around centroid-based cluster models, not arbitrarily shaped clusters.
For DBSCAN, a sensible measure would be density-connectedness. But that needs the same parameters as DBSCAN already uses.
A recommended approach for DBSCAN is to first fix minPts according to domain knowledge, then plot a $k$-distance graph (with $k=minPts$) and look for an elbow in this graph. Alternatively, if you have domain knowledge to choose epsilon (e.g. 1 meter, when you have geo-spatial data and know this is a reasonable radius), you can do a density plot for this radius and look for an elbow there.
Or you just use OPTICS, where epsilon only serves as an upper limit to boost performance.
35,770 | Unbalanced distribution of sample size between groups in logistic regression: should one worry?
You are right that logistic regression does not make any assumptions about the distribution of your independent variable. What will occur as a result of your situation is that you will have less power than if you had equal $n$s. However, reducing the $n$ in the Rich group will only lessen your power further. Rather, the idea is that if you had the same total $N$, but equally divided, you would have more power. Although written in a different context (viz, t-tests), you can get the general idea from my answer here: How should one interpret the comparison of means from different sample sizes?
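The power point is easy to see by simulation. Here is a rough sketch (mine, not from the answer) using a two-proportion z-test, which is the single-binary-predictor logistic-regression setting; the group sizes and rates are made up:

```python
import numpy as np

def rejection_rate(n1, n2, p1, p2, n_sim=4000, seed=1):
    """Monte Carlo power of the two-proportion z-test for group sizes n1, n2."""
    rng = np.random.default_rng(seed)
    ph1 = rng.binomial(n1, p1, n_sim) / n1
    ph2 = rng.binomial(n2, p2, n_sim) / n2
    pooled = (ph1 * n1 + ph2 * n2) / (n1 + n2)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = np.abs(ph1 - ph2) / np.where(se > 0, se, np.inf)
    return float(np.mean(z > 1.96))

# Same total N = 400 and the same true rates, split differently:
balanced = rejection_rate(200, 200, 0.30, 0.45)
unbalanced = rejection_rate(360, 40, 0.30, 0.45)
```

The balanced split detects the same true difference far more often than the 360/40 split, even though the total sample size is identical.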
35,771 | Interpreting coefficients of first differences of logarithms | For $small$ changes, you can interpret logged differences as percentage changes after multiplying by 100.
For example, $y_t=9$ and $y_{t-1}=8$. Then $\ln 9 - \ln 8=.118$ or 11.8%, which is the logarithmic approximation to the actual 12.5% increase. Note that I had to multiply by 100 here. For $y_t=9$ and $y_{t-1}=8.5$ the approximation will be much better ($5.9\% \approx 5.7\%$).
Usually, a coefficient tells you the effect on $y$ of a one unit change in that explanatory variable, holding other variables constant. A one unit change in $\Delta \ln x$ corresponds to a 100% change (using the approximation above, which is terrible since this is not a small change). This means that $b_1$ tells you the percentage change in $y$ associated with a 1% increase in x.
But your $x$ is not logged, so the coefficient needs to be interpreted differently. When $x$ grows by one unit, you get $100 \cdot b_1\%$ more $y$.
Moreover, $100 \cdot b_2$ tells you the percentage change in $y$ associated with a 1 unit increase in $z$.
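The quality of the log-difference approximation above is easy to check numerically (a small illustration; helper names are mine):

```python
import math

def log_pct(prev, cur):
    """100 x (ln y_t - ln y_{t-1}): the log-difference 'percentage' change."""
    return 100 * (math.log(cur) - math.log(prev))

def exact_pct(prev, cur):
    """The exact percentage change."""
    return 100 * (cur / prev - 1)

# 8   -> 9:   log-diff 11.8%, exact 12.5%  (poor for a large change)
# 8.5 -> 9:   log-diff  5.7%, exact  5.9%  (better for a smaller one)
# 100 -> 101: log-diff  1.0%, exact  1.0%  (excellent for a small one)
for prev, cur in [(8, 9), (8.5, 9), (100, 101)]:
    print(f"{prev} -> {cur}: {log_pct(prev, cur):.1f}% vs {exact_pct(prev, cur):.1f}%")
```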
35,772 | Differences between Dwass-Steel-Critchlow-Fligner and Mann-Whitney U-test for a post-hoc pairwise analysis | There are absolutely differences.
The Mann-Whitney U test (i.e. the rank sum test) is not appropriate as a post hoc test following the omnibus Kruskal-Wallis nonparametric analog to the one-way ANOVA for two reasons:
The rank sum test uses different ranks than those employed in the Kruskal-Wallis test (i.e. in both tests you mush the observations together, then rank them, then separate the ranks by group—the rank sum ignores the ranks you got with the omnibus test).
The rank sum test does not use the pooled variance implied by the null hypothesis in the Kruskal-Wallis test (e.g., just as in one-way ANOVA where the post hoc t tests use an estimate of the pooled variance).
Dunn's test was (as far as I know) the first post hoc test for Kruskal-Wallis. It is based on a z approximation to the distribution of a rank sum-like test statistic that addresses both points (1) and (2). The Conover-Iman test is similar to Dunn's test, but is based upon a t distribution, and is strictly more powerful than Dunn's test when rejecting the Kruskal-Wallis null hypothesis.
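To make point (1) concrete, here is a small numpy sketch (my own simplification, omitting the tie correction) of Dunn's z statistic computed on the pooled Kruskal-Wallis ranks:

```python
import numpy as np

def pooled_ranks(samples):
    """Rank all observations together (no ties assumed), then split the
    ranks back into their groups: the ranks the Kruskal-Wallis test uses."""
    pooled = np.concatenate(samples)
    ranks = np.empty(len(pooled))
    ranks[np.argsort(pooled)] = np.arange(1, len(pooled) + 1)
    return np.split(ranks, np.cumsum([len(s) for s in samples])[:-1])

def dunn_z(samples, i, j):
    """Dunn's post hoc z: mean difference of the *pooled* ranks of groups
    i and j, scaled by the variance implied by the KW null (no tie term)."""
    gr = pooled_ranks(samples)
    n = sum(len(s) for s in samples)
    se = np.sqrt(n * (n + 1) / 12 * (1 / len(samples[i]) + 1 / len(samples[j])))
    return (gr[i].mean() - gr[j].mean()) / se

rng = np.random.default_rng(1)
groups = [rng.normal(0, 1, 30), rng.normal(0, 1, 30), rng.normal(1.5, 1, 30)]

# Omnibus Kruskal-Wallis H on the same pooled ranks (reject if > 5.99, df = 2)
gr = pooled_ranks(groups)
N = sum(len(g) for g in groups)
H = 12 / (N * (N + 1)) * sum(len(g) * (g.mean() - (N + 1) / 2) ** 2 for g in gr)
```

A rank sum test on groups 0 and 2 alone would re-rank just those 60 observations and use a different variance; the statistic above reuses the omnibus test's ranks and pooled variance instead.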
The Dwass-Steel-Critchlow-Fligner test also addresses (1) and (2), but has a specific approach to controlling the familywise error rate (FWER) built in. Critchlow and Fligner interpret Dunn's test as necessarily and exclusively incorporating the Bonferroni adjustment (an incorrect interpretation in my opinion—indeed, I have implemented Dunn's test for Stata and for R to include a wide range of false discovery rate (FDR) and FWER adjustments for multiple comparisons), and have implemented the Conover-Iman test for Stata and for R with the same selection of methods to control the FDR and FWER.
References
Conover, W. J. and Iman, R. L. (1979). On multiple-comparisons procedures. Technical Report LA-7677-MS, Los Alamos Scientific Laboratory.
Critchlow, D. E. and Fligner, M. A. (1991). On distribution-free multiple comparisons in the one-way analysis of variance. Communications in Statistics—Theory and Methods, 20(1):127.
Dunn, O. J. (1964). Multiple comparisons using rank sums. Technometrics, 6(3):241–252.
35,773 | How should I explain the lack of ERROR in a repeated measures ANOVA table using REML? | You solve a "classical ANOVA" (linear model) by means of least squares, that is, by partitioning the sums of squares (among explanatory factors) and minimizing the residual sum of squares (unexplained variation). An ANOVA table summarizes that partition across all components in the model.
In a linear mixed model you are solving the problem through maximum likelihood (or possibly REML), very loosely speaking by finding the parameters that maximize the probability of observing the data (its likelihood) if several assumptions hold.
The "ANOVA" table that you are getting is, then, not that partition of the sums of squares, but a list of parameters in the model (for the fixed factors) followed by Wald tests of the null hypotheses that they are equal to zero. See ?anova.lme (if you used the lme function of the nlme package, as I suspect) for some details of what it reports when applied to a single model
[BTW, it will report something different if applied to two or more models, see the same help page].
If you run summary(model) you get more information, including the estimated random/residual variations (both ~Errors in your model), which may well make sense to report when describing your fitted model.
[BTW2: be sure to understand what the parameters in the model (and lines in that table) represent compared to what factors represent in a classic ANOVA table; e.g., dummy or treatment coding is default in R]
35,774 | How should I explain the lack of ERROR in a repeated measures ANOVA table using REML? | ANOVA is a strange word, because it means many different things. When people fit a general linear model with categorical predictors, they often call it ANOVA, and they get sums of squares (including error sums of squares).
From ?anova
When given a single argument it produces a table which tests whether the model terms are significant.
So the editor is expecting the table of sums of squares such as you get from anova, something like:
> x <- runif(100)
> y <- runif(100)
> anova(lm(y ~ x))
Analysis of Variance Table
Response: y
Df Sum Sq Mean Sq F value Pr(>F)
x 1 0.0023 0.002314 0.0303 0.8623
Residuals 98 7.4958 0.076487
And these sums of squares should sum to the total sums of squares:
> var(y) * (length(y)-1)
[1] 7.498077
You don't have this, because you didn't do a general linear model (or what the editor is thinking of as anova) and so you don't have sums of squares.
You could try explaining this. But I like to take the path of least resistance when it comes to dealing with statistical issues with editors and I would just rename the table. You could call it Type III tests of fixed effects (I think that's what SAS and SPSS call it), or something like 'significance tests of each predictor'. I'd also remove the intercept from it (unless you're really interested in that) and I'd be tempted to remove mmolO2.L as well, if (as I'd assume) you have that in the parameter estimates already.
35,775 | What is a "good fit" Brier score and Harrell's C Index | Prior CV postings on the matter of GOF measures in generalized linear models:
Find out pseudo R square value for a Logistic Regression analysis
Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
Addressing model uncertainty
Compare classifiers based on AUROC or accuracy?
"Goodness of fit" is an elusive notion. Any set of data can be perfectly fit with a complex, saturated model, but such a model will generally be useless despite being perfect. Application of such tests often completely ignores what the model is being fit to. I find it rather strange that Anderson-Darling and Kolmogorov-Smirnov tests are being called "goodness of fit tests" when they are really being used as "tests of normality".
Models need to be both validated and calibrated and the GOF measures generally tell you very little about those aspects. (It should be noted in passing that the 'rms' function print.cph also reports the Brier score along with a pseudo-R^2 and Somers-D as "discrimination indexes". And it does not report the c-index, perhaps because the Somers-D is equivalent and preceded it historically and Harrell is tired of people misusing it.)
You will note that Frank told you, in an earlier rhelp posting, that your proposed strategy of taking a "best" glmnet model and then applying stepwise forward and backward reduction was bad statistical practice. Part of the problem is that you were taking a result from a method which is optimized for prediction (penalized glmnet) and then applying a procedure that was in all probability lowering its predictive capacity.
Your low Brier score is something I see all the time in my research. I work with large datasets where the outcomes of interest are rather rare (mortality over 5-12 years for basically healthy people). Even a good model will only be predicting a mortality rate of 4-5% for most of the people who die and the "error rate" remains high despite many variables being highly significant. Model comparison measures (especially the deviance) are much better guides for decision making than any of the GOF or discrimination measures.
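That dependence of the Brier score on the outcome's base rate is easy to demonstrate with a toy sketch (mine, not from the answer): a constant, no-skill forecast of the base rate already scores p(1 - p), which looks "good" whenever events are rare.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Rare outcome (~5% events): the no-information forecast of 0.05 scores
# about 0.05 * 0.95 = 0.0475, a "low" Brier score from a useless model.
y_rare = rng.binomial(1, 0.05, n)
brier_rare = float(np.mean((0.05 - y_rare) ** 2))

# 50/50 outcome: the same kind of no-information forecast scores 0.25.
y_even = rng.binomial(1, 0.50, n)
brier_even = float(np.mean((0.50 - y_even) ** 2))
```

So a Brier score of 0.05 is not evidence of a good model on its own; it has to be compared against the score of a trivial baseline on the same data.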
35,776 | What is a "good fit" Brier score and Harrell's C Index | It is easier (but still not very easy) to find an acceptable Brier score. In general it's all relative. A model is useful insofar as it is better than alternatives. Now with pure calibration accuracy you can sometimes judge a model to be inadequate no matter the comparator.
35,777 | Two different formulas for AICc | I wish I understood the paper better, but if you look at Hurvich & Tsai (1989) equation 3, they are defining the AIC itself as:
$$
\textrm{AIC} = n(\log\hat{\sigma}^2 + 1) + 2\left(m + 1\right)
$$
This naïvely implies $k = m+1$, and then the Hurvich & Tsai and post-1999 Anderson et al. formulas are actually one and the same, as
$$
(m+1 = k) \implies \frac{2(m+1)(m+2)}{n-m-2} \equiv \frac{2(k)(k+1)}{n-k-1}
$$
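That algebraic identity can also be spot-checked numerically (a trivial sketch; the function names are mine):

```python
from fractions import Fraction

def corr_ht(n, m):
    """Hurvich & Tsai AICc correction, in terms of m regression coefficients."""
    return Fraction(2 * (m + 1) * (m + 2), n - m - 2)

def corr_post1999(n, k):
    """Post-1999 form, with k = m + 1 (coefficients plus the error variance)."""
    return Fraction(2 * k * (k + 1), n - k - 1)

# Exact-rational comparison over a few (n, m) combinations
pairs = [(corr_ht(n, m), corr_post1999(n, m + 1))
         for n in (20, 50, 100) for m in (1, 2, 5)]
```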
Edit - (Cavanaugh 1997)
See (Cavanaugh 1997) (pdf), specifically page 203, where in the derivation he is setting $k = p+1$, for, as @Glen_b said, $k$ includes the error variance and $p$ does not.
Reference:
Cavanaugh, J. E. Unifying the derivations for the Akaike and corrected Akaike information criteria. Statistics & Probability Letters, 1997, 33, 201-208.
35,778 | What is a better way to construct a confidence interval for the probability of success in binomial distributions? | The wikipedia page on binomial distributions has several measures of confidence intervals. In R, they, and others, are implemented in the binom.confint command in the binom package. There are costs and benefits to them all. You should look into them further and select the one you like the best.
Now that I've given the standard advice...I tend to believe that the extensive work on binomial CI's clearly demonstrate that trying to get an exact one is pointless. While they often can vary considerably in the proportion of coverage that's only because the tails can change dramatically for the distribution with p values that deviate by small amounts and the distribution of real values is discrete (i.e. the actual p-values really aren't that different reported by them).
When N is small you can usually just pick any CI and round it to values supported by your actual distribution and you get the same result. If you have an N of 10 and p = 0.2 there is no way you will ever replicate that experiment and get p = 0.04588727 (Agresti-Coull interval lower bound) because the number can't possibly appear. It's as impossible as the -0.04791801 from the CLT based interval you want to avoid because it's negative. Just enter 0 for the lower bound and 0.5 for the upper. The true proportion for your experiment can't be a value that can't be produced by the experiment, and the 95% CI is about what the results are of the experiment when repeated, not what mu is. If n is large then the CLT works pretty well anyway. It may not be the best but just round away from the mean one point and you'll usually be fine with a lot less effort than working out the other values (and it's often recommended to be conservative).
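For concreteness, here is a sketch (mine; in R, binom.confint reports these methods among others) of how the Wald and Agresti-Coull bounds for x = 2 successes out of n = 10 are computed:

```python
import math

z = 1.959964  # ~97.5th percentile of the standard normal

def wald(x, n):
    """CLT-based (Wald) interval: p-hat +/- z * sqrt(p-hat(1-p-hat)/n)."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def agresti_coull(x, n):
    """Agresti-Coull interval: Wald formula applied to the adjusted counts."""
    n_adj = n + z ** 2
    p_adj = (x + z ** 2 / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - half, p_adj + half

lo_wald, hi_wald = wald(2, 10)          # lower bound ~ -0.0479, negative
lo_ac, hi_ac = agresti_coull(2, 10)     # lower bound ~ 0.0459
```

Both lower bounds are values that no replication of the n = 10 experiment could ever produce, which is the point being made above.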
35,779 | What is a better way to construct a confidence interval for the probability of success in binomial distributions? | There's a neat little article published decades ago in JAMA entitled "If nothing goes wrong, is everything all right?". The authors considered the possibilities of a binomial parameter being in a variety of "locations" and derived the probabilities of zero outcomes out of N integer instances under varying sample sizes, N. They first did it by hand (or calculator since this was 1983), but they also pointed out that the expression:
$$1 - \text{maximum risk} = 0.05^{1/N}$$
has asymptotic expansion $$1+\ln(0.05)/N + O(1/N^2)$$
So the upper (and only confidence limit other than $0$) CL is $-\ln(0.05)/N$ or very close to $3/N$. Take a look at the fancy intervals and look at the upper row where observed values of 0 are tabulated. You will find that $3/N$ is a very good approximation to the exact limits.
Searching for earlier instances of citations to this article I find that I already posted such an answer.
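The quality of the 3/N shortcut is easy to tabulate (a quick sketch of the formulas above):

```python
import math

# Exact upper 95% bound for p when 0 of N trials are events: solve (1-p)^N = 0.05,
# giving p = 1 - 0.05**(1/N); compare with the 3/N ~ -ln(0.05)/N shortcut.
for N in (10, 30, 100, 300):
    exact = 1 - 0.05 ** (1 / N)
    print(f"N={N:4d}: exact {exact:.4f}  vs  3/N = {3 / N:.4f}")
```

Already at N = 30 the two agree to about half a percentage point, and the gap shrinks like 1/N² from there.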
35,780 | Why is dependence a problem? | The p-value for the t-test is computed under the assumption that all observations are independent. Computing probabilities (such as the p-value) is much more difficult when you're dealing with dependent variables, and it is not always easy to see mathematically where things go wrong with the test in the presence of dependence. We can however easily illustrate the problem with a simulation.
Consider for instance the case where there are 5 classrooms in each of the two schools, with 10 students in each classroom. Under the assumption of normality, the p-value of the test should be uniformly distributed on the interval $(0,1)$ if there is no difference in mean test scores between all the classrooms. That is, if we performed a lot of studies like this and plotted a histogram of all the p-values, it should resemble the box-shaped uniform distribution.
However, if there is some within-classroom correlation between students' results, the p-values no longer behave as they should. A positive correlation (as one might expect here) will often lead to p-values that are too small, so that the null hypothesis will be rejected too often when it in fact is true. An R simulation illustrating this can be found below. 1000 studies of two schools are simulated for different within-classroom correlations. The p-values of the corresponding t-tests are shown in the histograms in the figure. They are uniformly distributed when there is no correlation, but not otherwise. In the simulation, it is assumed that there are no mean differences between classrooms, and that all classrooms have the same within-classroom correlation.
The consequence of this phenomenon is that the type I error rate of the t-test will be way off if there are within-classroom correlations present. As an example, a t-test at the 5 % level is in fact approximately at the 25 % level if the within-classroom correlation is 0.1! In other words, the risk of falsely rejecting the null hypothesis increases dramatically when the observations are dependent.
Note that the axes differ somewhat between the histograms.
R code:
library(MASS)
B1<-1000
par(mfrow=c(3,2))
for(correlation in c(0,0.1,0.25,0.5,0.75,0.95))
{
# Create correlation/covariance matrix and mean vector
Sigma<-matrix(correlation,10,10)
diag(Sigma)<-1
mu<-rep(5,10)
# Simulate B1 studies of two schools A and B
p.value<-rep(NA,B1)
for(i in 1:B1)
{
# Generate observations of 50 students from school A
A<-as.vector(mvrnorm(n=5,mu=mu,Sigma=Sigma))
# Generate observations of 50 students from school B
B<-as.vector(mvrnorm(n=5,mu=mu,Sigma=Sigma))
p.value[i]<-t.test(A,B)$p.value
}
# Plot histogram
hist(p.value,main=paste("Within-classroom correlation:",correlation),xlab="p-value",cex.main=2,cex.lab=2,cex.axis=2)
}
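For anyone who prefers Python, here is a condensed sketch of the same idea. The data-generating recipe (a shared classroom effect inducing within-classroom correlation rho) and the critical value are my own illustrative choices, so the exact rejection rates will differ somewhat from the R histograms above:

```python
import random
import statistics

def school(rho, classes=5, per_class=10):
    # Students in the same classroom share an effect c, so the correlation
    # between two classmates' scores is exactly rho under this construction.
    scores = []
    for _ in range(classes):
        c = random.gauss(0, 1)
        scores += [rho ** 0.5 * c + (1 - rho) ** 0.5 * random.gauss(0, 1)
                   for _ in range(per_class)]
    return scores

def rejection_rate(rho, n_sim=1000, crit=1.984):  # crit ~ two-sided 5% for df = 98
    hits = 0
    for _ in range(n_sim):
        a, b = school(rho), school(rho)
        se = ((statistics.variance(a) + statistics.variance(b)) / 50) ** 0.5
        hits += abs(statistics.mean(a) - statistics.mean(b)) / se > crit
    return hits / n_sim

random.seed(1)
print(rejection_rate(0.0))  # close to the nominal 0.05
print(rejection_rate(0.1))  # well above 0.05: inflated type I error
```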
35,781 | Why is dependence a problem? | The problem would be that comparing the two schools this way mixes university level effects with classroom level effects. A mixed model would let you disentangle these. If you aren't interested in disentangling them, you should still take account of the clustered sampling (although many people fail to do this).
@Nico 's comment above gets to one problem here: Suppose one teacher in one school is really good, and he/she happens to be one of the teachers chosen?
But another problem is that the students in each class will be more similar to each other than they will be to other students in the same university in all sorts of ways: different subjects draw different types of students by age, gender, experience, academic strengths and weaknesses, etc.
35,782 | Why is dependence a problem? | There is nothing wrong with the test you described, because you took a sample from both schools in a fair way. Dependent observations come into play when there is another variable on which the samples depend. For example, suppose that in one of the schools only one class has shown up and you decide to take results from 50 people within this one class, thinking it will be OK. But within the school the result depends on the class, so you can't do it like this: it will give a wrong result, which you can't detect by any statistical test... it is just a wrong experimental design.
But I think people usually talk about dependent observations from a different point of view. It is when you think that you can derive distributions and errors from your samples based on assumptions of independence (most standard formulas assume that), while when your outcomes depend on each other those rules are not exact at all...
35,783 | Estimate multinomial probit model with mlogit (R package) | The run with probit=TRUE has not converged to a good answer. See the line in the output that starts with 'last step could not find higher value' and compare the same section in the logit model output. The other reason it takes so long to fit the probit model is that the software is approximating a high dimensional integral using simulation (See the vignette for mlogit, pg 54). Sometimes rescaling covariates can help with numerical difficulties but that's not the case here, I tried
HS$ic <- scale(HS$ic)
HS$oc <- scale(HS$oc)
m2.probit = mlogit(depvar~ic+oc, HS, probit=TRUE)
and had the same difficulty. I would treat the results of the probit model with a degree of skepticism. In particular, there is something funny going on with the outcome 'gr' in the probit model (see intercepts, and variance parameter estimates).
The coefficients in the summary that are labeled er.gc, er.gr etc. are the parameters of the variance-covariance matrix that is being estimated as part of the probit model.
35,784 | Estimate multinomial probit model with mlogit (R package) | Probit models often take longer to fit because the likelihood function is calculated by simulation or quadrature. The logit likelihood has a closed form solution that makes it fast.
Also, the Probit likelihood function is not globally convex, so the algorithm can converge to local maxima. You need to try different starting values.
Finally, the coefficients should not be the same, because they use a different scale parameter.
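To see where that scale factor comes from, note that the logistic CDF is approximately a normal CDF stretched by a factor of about 1.6-1.8, which is why logit coefficients are typically that much larger than probit ones. A quick illustration (the value 1.702 below is the classical best uniform approximation; this is an illustrative aside, nothing specific to mlogit):

```python
import math
from statistics import NormalDist

phi = NormalDist().cdf          # standard normal CDF (inverse of the probit link)

def logistic(x):                # inverse of the logit link
    return 1 / (1 + math.exp(-x))

# logistic(x) ~ phi(x / 1.702): the two links differ mainly by scale,
# so logit coefficients come out roughly 1.6-1.8 times the probit ones.
xs = [i / 10 for i in range(-60, 61)]
max_gap = max(abs(logistic(x) - phi(x / 1.702)) for x in xs)
print(round(max_gap, 4))  # about 0.01 at worst
```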
In general, if there is not a really compelling reason to use Probit, it's best to just stay away.
35,785 | ML with fastest classification speed | Support Vector Machines classify new vectors by comparing them against the set of support vectors. Depending on what parameters you used and the cost function, this set of support vectors might be large. For more than two classes, the number of SVMs needed increases as well, further reducing performance. For better runtime performance, you'll want something that does all of the training upfront.
One such classifier is the neural network. It does all training upfront, leaving classifications as simple calculations. Another is a Bayesian classifier, which requires pdfs of the classes of your expected data. Only probabilities are calculated during classification, so its performance isn't affected by training set size.
If you need your classifier to further minimize the number of false positives at the risk of increasing the number of false negatives, then consider implementing a loss function. With it, you can assign a cost to each type of error. In your example, that means classifying fewer negatives as positives while allowing more positives as negatives. A clear example of loss functions is a test for cancer, where it's assumed to be better to falsely diagnose someone who doesn't have cancer and they live than it is to not diagnose someone who does and they die.
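For a classifier that outputs probabilities, the simplest way to implement such a loss function is to shift the decision threshold: with false-positive cost c_fp and false-negative cost c_fn, predicting positive exactly when p > c_fp / (c_fp + c_fn) minimizes the expected cost. A sketch with made-up costs:

```python
def decide(p, c_fp=5.0, c_fn=1.0):
    # Predict positive iff the expected cost of predicting positive,
    # (1 - p) * c_fp, is below that of predicting negative, p * c_fn.
    # This reduces to thresholding p at c_fp / (c_fp + c_fn).
    return p > c_fp / (c_fp + c_fn)

# With false positives 5x as costly, the threshold rises from 0.5 to 5/6,
# so fewer (but more confident) positive predictions are made.
print(decide(0.7))  # False
print(decide(0.9))  # True
```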
EDIT: Clarified SVM and Bayesian sections. Performance issue with SVMs is that there might be a large amount of SVs to check against new vectors. Generally, more SVs are used to increase fit to the training set (this is okay, but avoid overfitting). The Bayesian classifier simply requires that you know the distribution of your data.
Also, I forgot that SVMs are built to only distinguish between 2 classes. To support more classes, multiple SVMs using the one-vs-all approach are merged. This would also impact runtime performance.
35,786 | ML with fastest classification speed | I would recommend trying Random Ferns -- they are easy to implement, fast to train and even faster to predict, and due to ensemble structure you can easily control their speed/quality balance. Oh, and they are trivially parallel.
They may have problems with accuracy and memory consumption, though; but this depends on the problem and the way you make splits.
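For the flavor of it, here is a toy pure-Python sketch of the idea (binary tests bit-packed into leaf indices, Laplace-smoothed class counts per fern, log-posteriors summed across ferns). It is only an illustration, not the original formulation:

```python
import math
import random

class RandomFerns:
    def __init__(self, n_ferns=10, depth=4, n_features=2, seed=0):
        rng = random.Random(seed)
        # Each fern: `depth` random binary tests (feature index, threshold)
        self.ferns = [[(rng.randrange(n_features), rng.uniform(-2, 2))
                       for _ in range(depth)] for _ in range(n_ferns)]

    def _leaf(self, fern, x):
        idx = 0
        for f, t in fern:               # bit-pack test outcomes into a leaf index
            idx = (idx << 1) | (x[f] > t)
        return idx

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.counts = [{} for _ in self.ferns]   # Laplace-smoothed counts per leaf
        for x, label in zip(X, y):
            for i, fern in enumerate(self.ferns):
                leaf = self._leaf(fern, x)
                c = self.counts[i].setdefault(leaf, {k: 1 for k in self.classes})
                c[label] += 1
        return self

    def predict(self, x):
        score = {k: 0.0 for k in self.classes}   # sum of log-posteriors over ferns
        for i, fern in enumerate(self.ferns):
            c = self.counts[i].get(self._leaf(fern, x), {k: 1 for k in self.classes})
            total = sum(c.values())
            for k in self.classes:
                score[k] += math.log(c[k] / total)
        return max(score, key=score.get)

# Two well-separated blobs as toy training data
rng = random.Random(1)
X = ([(rng.gauss(-1, 0.3), rng.gauss(-1, 0.3)) for _ in range(100)] +
     [(rng.gauss(1, 0.3), rng.gauss(1, 0.3)) for _ in range(100)])
y = [0] * 100 + [1] * 100
model = RandomFerns().fit(X, y)
acc = sum(model.predict(x) == t for x, t in zip(X, y)) / len(y)
print(acc)
```

Training is a single counting pass, and prediction is a handful of table lookups per fern, which is where the speed comes from.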
35,787 | Profile likelihood confidence intervals | Very good question.
In this link it is explained that SAS uses a numerical approximation (which basically consists of a modification of the Newton-Raphson algorithm)
Setting this option to both produces two sets of CL, based on the Wald test and on the profile-likelihood approach. (Venzon, D. J. and Moolgavkar, S. H. (1988), “A Method for Computing Profile-Likelihood Based Confidence Intervals,” Applied Statistics, 37, 87–94.)
The link to the paper in JSTOR is here, and the abstract is shown below
The method of constructing confidence regions based on the generalised likelihood ratio statistic is well known for parameter vectors. A similar construction of a confidence interval for a single entry of a vector can be implemented by repeatedly maximising over the other parameters. We present an algorithm for finding these confidence interval endpoints that requires less computation. It employs a modified Newton-Raphson iteration to solve a system of equations that defines the endpoints.
According to this abstract, it seems like this is the secret of the speedy calculation.
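The idea is easy to demonstrate in a one-parameter toy case, where profiling is trivial: the 95% endpoints are where the log-likelihood drops by $\chi^2_{1,0.95}/2 \approx 1.92$ from its maximum. Here I use plain bisection instead of the paper's modified Newton-Raphson (illustrative only):

```python
import math

def loglik(p, x=8, n=10):
    # Binomial log-likelihood for x successes in n trials (constant dropped)
    return x * math.log(p) + (n - x) * math.log(1 - p)

def bisect(f, lo, hi, tol=1e-10):
    # Simple bisection root-finder; f must change sign on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

p_hat = 8 / 10
target = loglik(p_hat) - 1.92   # drop of chi2(1, 0.95)/2 = 3.84/2 from the max
lower = bisect(lambda p: loglik(p) - target, 1e-9, p_hat)
upper = bisect(lambda p: loglik(p) - target, p_hat, 1 - 1e-9)
print(round(lower, 3), round(upper, 3))
```

With several parameters, each endpoint additionally requires re-maximizing over the nuisance parameters, which is exactly the computation the Venzon-Moolgavkar algorithm speeds up.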
35,788 | Pocket algorithm for training perceptrons | It's discussed a little more fully in the neural networks book of Rojas, which is available from his website. I believe the book also contains a reference to the original paper which introduced the algorithm.
http://www.inf.fu-berlin.de/inst/ag-ki/rojas_home/pmwiki/pmwiki.php?n=Books.NeuralNetworksBook
Edit: yes, here is Gallant's original paper with pseudocode:
https://www.ling.upenn.edu/courses/Fall_2007/cogs501/Gallant1990.pdf
35,789 | Pocket algorithm for training perceptrons | Basically the pocket algorithm is a perceptron learning algorithm with a memory which keeps the result of the iteration. You can consider the pocket algorithm something similar to:
def pocket(training_list, max_iteration):
    w = randomVector()
    best_w = w                                      # without this, best_w may be undefined
    best_error = error(w)
    for i in range(max_iteration):
        x = misclassified_sample(w, training_list)  # pick a misclassified example
        w = vector_sum(w, scale(y(x), x))           # perceptron update: w <- w + y(x)*x
        if error(w) < best_error:                   # pocket the best weights seen so far
            best_w = w
            best_error = error(w)
    return best_w
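Filling in the helper functions, a self-contained runnable version might look like this (the toy dataset and all names are invented for illustration; the update is the usual perceptron rule w <- w + y*x):

```python
import random

def pocket(data, max_iteration, seed=0):
    # data: list of (x, y) with x a tuple (first coordinate = bias), y in {-1, +1}
    rng = random.Random(seed)
    dot = lambda w, x: sum(wi * xi for wi, xi in zip(w, x))
    errors = lambda w: sum((dot(w, x) >= 0) != (y > 0) for x, y in data)
    w = [0.0] * len(data[0][0])
    best_w, best_error = w[:], errors(w)
    for _ in range(max_iteration):
        wrong = [(x, y) for x, y in data if (dot(w, x) >= 0) != (y > 0)]
        if not wrong:
            return w                                  # linearly separated: done
        x, y = rng.choice(wrong)                      # a misclassified example
        w = [wi + y * xi for wi, xi in zip(w, x)]     # perceptron update
        if errors(w) < best_error:                    # pocket the best w so far
            best_w, best_error = w[:], errors(w)
    return best_w

# Toy separable data: label is +1 iff a + b > 1 (bias coordinate prepended)
pts = [(-0.5, 0.2), (0.1, 0.3), (0.9, 0.8), (1.2, 0.4), (0.2, 1.3), (1.5, 1.5)]
data = [((1.0, a, b), 1 if a + b > 1 else -1) for a, b in pts]
w = pocket(data, max_iteration=200)
acc = sum((sum(wi * xi for wi, xi in zip(w, x)) >= 0) == (y > 0)
          for x, y in data) / len(data)
print(acc)  # 1.0 on this separable toy set
```

On non-separable data the plain perceptron never settles down, and the pocketed weights are what you would actually return.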
35,790 | Pocket algorithm for training perceptrons | I have found the blog very helpful to understand Pocket Algorithm. I am giving excerpt from that blog.
Pocket Learning Algorithm
The idea is straightforward: this algorithm keeps the best result seen so far in its pocket (that is why it is called Pocket Learning Algorithm). The best result means the number of misclassification is minimum. If the new weights produce a smaller number of misclassification than the weights in the pocket, then replace the weights in the pocket to the new weights; if the new weights are not better than the one in the pocket, keep the one in the pocket and discard the new weights. At the end of the training iteration, the algorithm returns the solution in the pocket, rather than the last solution.
Pseudocode
35,791 | how to calculate partial dependence when I have 4 predictors? | Suppose that we have a data set $X = [x_s \, x_c] \in \mathbb R^{n \times p}$ where $x_s$ is a matrix of variables we want to know the partial dependencies for and $x_c$ is a matrix of the remaining predictors. Let $y \in \mathbb R^n$ be a vector of responses (i.e. a regression problem). Suppose that $y = f(x) + \epsilon$ and we estimate some fit $\hat f$.
Then $\hat f_s (x)$, the partial dependence of $\hat f$ at $x$ (here $x$ lives in the same space as $x_s$), is defined as:
$$\hat f_s(x) = {1 \over n} \sum_{i=1}^n \hat f(x, x_{c_i})$$
This says: hold $x$ constant for the variables of interest and take the average prediction over all other combinations of other variables in the training set. So we need to pick variables of interest, and also to pick a region of the space that $x_s$ lives in that we are interested in. Note: be careful extrapolating the marginal mean of $f(x)$ outside of this region.
Here's an example implementation in R. We start by creating an example dataset:
library(tidyverse)
library(ranger)
library(broom)
mt2 <- mtcars %>%
as_tibble() %>%
select(hp, mpg, disp, wt, qsec)
Then we estimate $f$ using a random forest:
fit <- ranger(hp ~ ., mt2)
Next we pick the feature we're interested in estimating partial dependencies for:
var <- quo(disp)
Now we can split the dataset into this predictor and other predictors:
x_s <- select(mt2, !!var) # grid where we want partial dependencies
x_c <- select(mt2, -!!var) # other predictors
Then we create a dataframe of all combinations of these datasets:
# if the training dataset is large, use a subsample of x_c instead
grid <- crossing(x_s, x_c)
We want to know the predictions of $\hat f$ at each point on this grid. I define a helper in the spirit of broom::augment() for this:
augment.ranger <- function(x, newdata) {
newdata <- as_tibble(newdata)
mutate(newdata, .fitted = predict(x, newdata)$predictions)
}
au <- augment(fit, grid)
Now we have the predictions and we marginalize by taking the average for each point in $x_s$:
pd <- au %>%
group_by(!!var) %>%
summarize(yhat = mean(.fitted))
We can visualize this as well:
pd %>%
ggplot(aes(!!var, yhat)) +
geom_line(size = 1) +
labs(title = "Partial dependence plot for displacement",
y = "Average prediction across all other predictors",
x = "Engine displacement") +
theme_bw()
Finally, we can check this implementation against the pdp package to make sure it's correct:
pd2 <- pdp::partial(
fit,
pred.var = quo_name(var),
pred.grid = distinct(mtcars, !!var),
train = mt2
)
testthat::expect_equivalent(pd, pd2) # silent, so we're good
For a classification problem, you can repeat a similar procedure, except predicting the class probability for a single class instead.
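The averaging in the definition is easy to verify by hand on a model whose partial dependence is known in closed form: for $f(x_1, x_2) = x_1^2 + 3x_2$, the partial dependence on $x_1$ is $x_1^2 + 3\bar{x}_2$. A toy check (independent of the R example above):

```python
def f(x1, x2):
    # Toy model with known partial dependence: PD on x1 is x1**2 + 3 * mean(x2)
    return x1 ** 2 + 3 * x2

x2_train = [0.0, 1.0, 2.0, 3.0]   # training values of the 'other' predictor

def partial_dependence(x1, x2_values):
    # Hold x1 fixed; average predictions over the training values of x2
    return sum(f(x1, v) for v in x2_values) / len(x2_values)

x2_bar = sum(x2_train) / len(x2_train)             # 1.5
for x1 in (0.0, 1.0, 2.0):
    print(partial_dependence(x1, x2_train), x1 ** 2 + 3 * x2_bar)  # equal pairs
```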
Sampling with or without replacement?
From a finite population perspective, the difference in variances of the sample means or totals obtained via sampling with replacement (SRSWR) and sampling without replacement (SRSWOR) is captured by the finite population correction (FPC):
$$
\mathbb{V}_{\rm SRSWOR}[\bar y] = \Bigl( 1 - \frac{n}{N}\Bigr) \mathbb{V}_{\rm SRSWR}[\bar y]
$$
where $n$ is the sample size, $N$ is the population size, and the FPC is the factor in parentheses. For your problem, the FPC = 1 - 10,000/2,000,000 = 1 - 1/200 = 0.995, and frankly I would not bother chasing that factor down, and would treat it as being equal to 1. I typically tell my students to start keeping track of the FPC when the sampling fraction $n/N \ge 0.1$.
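The arithmetic is one line in any language; a quick Python check of the numbers quoted above:

```python
# Poll of n = 10,000 drawn from a population of N = 2,000,000.
n, N = 10_000, 2_000_000

# Finite population correction: the ratio of the SRSWOR variance
# to the SRSWR variance of the sample mean.
fpc = 1 - n / N
print(fpc)  # 0.995

# Rule of thumb from the text: only track the FPC once n/N >= 0.1.
print(n / N >= 0.1)  # False
```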
Sometimes, the decision between SRSWOR and SRSWR is that of logistics, i.e., depends on how easy it is to organize one or the other. A simple method to draw an SRSWOR is to assign a random number $U_i \sim \mbox{i.i.d. } U[0,1]$ to every record $i=1,\ldots,N$, sort by $U_i$ and take the first $n$ entries. A simple method to draw an SRSWR is to produce $n$ random numbers $V_j \sim \mbox{i.i.d. } U[0,1]$ and take units with indices $\{ [N V_j+1], j=1, \ldots, n \}$ (the brackets stand for the integer part). Depending on how your population (referred to as the frame in sampling terminology) is organized, one may be easier than the other, or none may be feasible at all.
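Both recipes are a few lines in any language. Here is a Python sketch using a small frame for illustration (the frame size is made up, and 0-based indexing replaces the $[NV_j+1]$ of the 1-based description):

```python
import random

random.seed(0)
N, n = 1000, 50            # small frame for illustration
frame = list(range(N))     # record i is identified by its index

# SRSWOR: attach U_i ~ U[0,1] to every record, sort by U_i,
# and take the first n entries.
keyed = sorted(frame, key=lambda i: random.random())
srswor = keyed[:n]

# SRSWR: draw n values V_j ~ U[0,1] and take the records with
# (0-based) indices floor(N * V_j).
srswr = [frame[int(N * random.random())] for _ in range(n)]

print(len(set(srswor)), len(srswr))  # n distinct records vs. possible repeats
```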
The standard sampling reference I give is Lohr (2009).
Sampling with or without replacement?
The answer is very simple: it is the difference between replace=TRUE and replace=FALSE when we draw random records or rows from a sample. Suppose we draw 5 records randomly. With replace=TRUE, after drawing those records we put them back into the sample for the next draw; with replace=FALSE, we do not put them back. This way the chance of each record being picked in the random draw increases, and thus the validity of the sampling increases.
Please correct me if I am wrong!
Does Pearson correlation require removal of bivariate or univariate outliers?
You will have to remove both. But that will not be enough. You will also have to remove those observations that are outlying on any projection of your data along any direction in $\mathbb{R}^2$. This is because of so-called multivariate outliers.
It is possible for outliers to depart significantly from the pattern of the majority of the data without necessarily standing out on any of your variables taken individually (As an example, consider the cluster of red dots in the plot attached to this answer).
Because of this, multi (in this case bi)-variate outliers can in general not be reliably detected using coordinate-wise approaches. The only reliable way of finding them is in fact to use a multivariate trimming/winsorizing approach (i.e. one that considers univariate projections of your data along all directions). Many such methods exist and you will find good implementations in most modern statistical packages (R, MATLAB, STATA, SAS, ...).
It is important to recognize that multivariate outliers are just as effective at wrecking the Pearson correlation as their better-known coordinate-wise cousins. The example below illustrates this.
Geometrically, coordinate-wise trimming/winsorizing amounts to drawing a rectangle around the majority of your data and considering any observations outside that rectangle as outlying. In contrast, multivariate trimming amounts to drawing an ellipse around the majority of your data and considering any point outside of that ellipse as an outlier.
The former approach can only detect outliers if they are outlying on at least one of the coordinates (or in other words along a direction parallel to an axis of a scatter-plot of your data). The second approach, in contrast, does not suffer from this limitation. In other words, multivariate trimming approaches can detect outliers regardless of the multivariate direction in which they are outlying (and this includes directions parallel to an axis of a scatter-plot of your data).
Consider this example (the code to reproduce it is below this post):
The Pearson correlation computed on the full data (that is for the black+red points considered together) is 0.5. In this case, none of the observations will be down-weighted by a coordinate-wise winsorizing approach since for all observations $1\leq i\leq n$, the distance
$$\max\left(\frac{|x_{i1}-\text{median}(x_1)|}{\text{mad}(x_1)},\frac{|x_{i2}-\text{median}(x_2)|}{\text{mad}(x_2)}\right).$$
is smaller than 3. Therefore, in this case, the Pearson correlation computed on the winsorized observations would be identical to the Pearson correlation computed on the original data.
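The failure mode of the coordinate-wise rule is easy to reproduce numerically. Below is a small Python sketch (the 0.9 correlation and the two test points are made up for illustration): two points with identical coordinates, one lying along the correlation of the bulk of the data and one lying against it, look the same to any per-coordinate cutoff but have very different Mahalanobis distances.

```python
# Assumed covariance of the bulk of the data: unit variances, corr 0.9.
rho = 0.9

def mahalanobis_sq(x, y, rho):
    # Squared Mahalanobis distance from the origin for unit variances:
    # (x^2 - 2*rho*x*y + y^2) / (1 - rho^2)
    return (x * x - 2 * rho * x * y + y * y) / (1 - rho * rho)

along = mahalanobis_sq(2.0, 2.0, rho)     # lies along the correlation
against = mahalanobis_sq(2.0, -2.0, rho)  # same coordinates, against it

print(along, against)  # roughly 4.2 vs 80: only the second is an outlier
```

A per-coordinate rule sees both points as $(\pm 2, \pm 2)$ and treats them identically; the elliptical (Mahalanobis) view is what separates them.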
Now, 0.5 is very far from the correlation of the good part of the data (e.g. considering only the black dots) which in this case is 84%.
Contrast this with the results obtained by carrying out a bivariate trimming and estimating the correlation of the remaining (untrimmed) observations. In this case, the multivariate trimming was done using the FastMCD(1) algorithm, perhaps the most popular multivariate trimming algorithm. FastMCD correctly identifies the red dots as being too far from the ellipse enclosing the majority of the data and flags them as outliers. Then, the correlation estimated on the remaining observations is now 85%, which is close enough to the correct result.
(1) Rousseeuw, P. J. and Van Driessen, K. (1999). A Fast Algorithm for the Minimum Covariance Determinant Estimator. Technometrics, 41, 212-223.
library(MASS)
library(rrcov)

n <- 100
p <- 2
set.seed(123)

# Build two covariance matrices sharing the same eigenvectors
A <- matrix(rnorm((p + 1) * p), p + 1, p)
A <- eigen(var(A))$vector
B <- A %*% diag(c(16, 1)) %*% t(A)
C <- t(A) %*% diag(c(4, 1)) %*% A

# Good data (black dots) and a tight cluster of outliers (red dots)
x <- mvrnorm(n, rep(0, p), B)
y <- mvrnorm(n, rep(0, p), C)
d <- which.max(mahalanobis(y, rep(0, p), B))
y <- mvrnorm(floor(n / 5), y[d, ], diag(2) / 100)
z <- rbind(x, y)

plot(z, asp = 1, type = "n")
points(x, col = "black", pch = 16)
points(y, col = "red", pch = 16)
cor(z)
[,1] [,2]
[1,] 1.0000000 0.5018708
[2,] 0.5018708 1.0000000
cov2cor(CovMcd(z)@cov)
[,1] [,2]
[1,] 1.0000000 0.8592597
[2,] 0.8592597 1.0000000
cor(x)
[,1] [,2]
[1,] 1.0000000 0.8485822
[2,] 0.8485822 1.0000000
d1<-which((abs(z[,1]-median(z[,1]))/mad(z[,1])<3) & (abs(z[,2]-median(z[,2]))/mad(z[,2])<3))
length(d1)
[1] 120
How to validate & diagnose a gamma GLM in R?
- Look at Chapter 6 or Section 6.3.4 in the book "Statistical Models in S" by Chambers and Hastie. Also you may want to check the package boot and the function "glm.diag.plots" (diagnostic plots for generalized linear models). Here is some code with the gamma family and the plots from the help file.
library(boot)
data(leuk, package = "MASS")
leuk.mod <- glm(time ~ ag-1+log10(wbc), family = Gamma(log), data = leuk)
leuk.diag <- glm.diag(leuk.mod)
glm.diag.plots(leuk.mod, leuk.diag)
The four plots are: upper left, residuals vs. linear predictor; upper right, normal scores plot of the standardized deviance residuals; lower left, approximate Cook statistics against leverage; lower right, case plot of the Cook statistic.
- See Introduction to Generalized Linear Models
- Have a look at the above reference, pages 42 and 44, to see the difference between deviance and $R^2$.
- The following code shows how to find SSE (but normally you don't need it!)
#Create a data set
counts <- c(18,17,15,20,10,20,25,13,12)
outcome <- gl(3,1,9)
treatment <- gl(3,3)
print(d.AD <- data.frame(treatment, outcome, counts))
#Fitting poisson GLM
glm.D93 <- glm(counts ~ outcome + treatment, family=poisson())
summary(glm.D93)
# resid(glm.D93) extracts the (deviance) residuals of the fitted model;
# note that the square belongs inside the sum
SSE <- sum(resid(glm.D93)^2)
SSE
Algebra for data confidence
[I note that there's some lack of clarity in the question; confidence intervals apply to things like parameters, as well as means or other functions of parameters; if we're talking about intervals for data, those would be other kinds of intervals (prediction intervals, tolerance intervals and so on). I'll proceed as if we're discussing something like means.]
If we're sticking with typical-sized polls, so that the CLT kicks in, then we're just dealing with the variances of normally distributed quantities. It depends on the dependence (specifically, the covariance) between the quantities.
$\rm{Var}(X + Y) = \rm{Var}(X) + \rm{Var}(Y) + 2 \rm{Cov}(X,Y)$
$\rm{Var}(X - Y) = \rm{Var}(X) + \rm{Var}(Y) - 2 \rm{Cov}(X,Y)$
(that doesn't rely on normality, it's general; the meaningfulness of the resulting confidence intervals depends on normality)
The width of the confidence intervals for the proportions $X$ and $Y$ and for their sum or difference are based off their respective standard errors (the square root of the variance).
If $X$ and $Y$ are independent (based on different polls for example) then the variances add because the covariances are $0$.
So square the widths of the CIs for $X$ and $Y$, add them, and take the square root. That's the width of the CI for the sum or difference.
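As a numeric sketch of the independent case (the CI widths of 3 and 4 points are hypothetical):

```python
import math

# Hypothetical 95% CI widths for two *independent* poll estimates.
w_x, w_y = 3.0, 4.0

# Width of the CI for X + Y or X - Y: add variances, i.e. add
# squared widths and take the square root.
w_combined = math.sqrt(w_x ** 2 + w_y ** 2)
print(w_combined)  # 5.0
```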
If $X$ and $Y$ are two proportions from the same poll, that is wrong, since their covariance is negative. If they add to 100% or nearly so, directly add the widths of their CIs to get the width of the CI for the difference. (For the sum, the variance will be 0, or nearly so if they don't quite add to 100%, and the width will be a multiple of the square root of that.) Estimates for the covariances can actually be calculated in general, using results for the multinomial distribution.
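The multinomial result makes the same-poll case concrete. A hedged sketch (the poll numbers are made up; $\mathrm{Cov}(\hat p_x, \hat p_y) = -p_x p_y / n$ is the standard multinomial covariance):

```python
import math

# Hypothetical two-candidate poll: n respondents, proportions summing to 1.
n = 1000
p_x, p_y = 0.52, 0.48

var_x = p_x * (1 - p_x) / n
var_y = p_y * (1 - p_y) / n
cov_xy = -p_x * p_y / n            # multinomial covariance is negative

var_sum = var_x + var_y + 2 * cov_xy   # variance of X + Y
var_diff = var_x + var_y - 2 * cov_xy  # variance of X - Y

# X + Y is pinned at 100%, so its variance vanishes, while the
# standard errors (hence CI widths) of X and Y add for X - Y.
print(var_sum)
print(math.sqrt(var_diff), math.sqrt(var_x) + math.sqrt(var_y))
```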
Algebra for data confidence
I don't know if I would describe it as a special algebra per se, but the essential idea you are getting at is the Central Limit Theorem. The CLT is, in fact, one of the cornerstones of statistics. Although we usually discuss the CLT in terms of the mean, there is an obvious connection between the mean of a set of numbers and their sum. You can explore this important topic by reading the linked Wikipedia page, or by reading threads related to the topic on CV by searching on the central-limit-theorem tag. Here are a couple of good threads to get you started:
What intuitive explanation is there for the central limit theorem?
Understanding central limit theorem
Clear description of PCA using SVD of covariance matrix
What are the dimensions of $U$, $S$ and $V^T$?
Since $\Sigma$ is an M by M matrix, the three matrices $U$, $S$, and $V^T$ will all be M by M: applying SVD to an N by M matrix gives $U_{N{\times}N}$, $S_{N{\times}M}$, and $V^T_{M{\times}M}$, and here N = M. You can verify that in MATLAB. When you truncate the singular values in $S$, you should also remove the corresponding parts of $U$ and $V^T$.
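The shapes are easy to confirm with NumPy's full SVD (the random matrix is just for illustration; note NumPy returns the singular values as a vector rather than the full $N \times M$ diagonal matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 7, 3
A = rng.standard_normal((N, M))

# Full SVD of an N x M matrix: U is N x N and V^T is M x M.
U, s, Vt = np.linalg.svd(A, full_matrices=True)
print(U.shape, s.shape, Vt.shape)  # (7, 7) (3,) (3, 3)

# A covariance matrix is M x M, so there all three factors are M x M.
Sigma = np.cov(A, rowvar=False)
U2, s2, Vt2 = np.linalg.svd(Sigma)
print(U2.shape, s2.shape, Vt2.shape)  # (3, 3) (3,) (3, 3)
```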
In $USV^T$ what exactly is considered as eigenvalues and which of them should I use as principal components?
PCA can be done either by eigenvalue decomposition of the covariance matrix $\Sigma$, or by applying SVD to $A$. The left singular vectors of $SVD(A)$ are the eigenvectors of $AA^T$, and the right singular vectors of $SVD(A)$ are the eigenvectors of $A^TA$. But you need to order them according to the eigenvalues, from large to small, and make them orthonormal. $A^TA$ is called the Gram matrix and is related to the covariance matrix $\Sigma$: if the M variable vectors in $A$ are all centered already, Gram matrix = N * covariance matrix. Check Wikipedia and some tutorials on SVD and PCA.
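The centered-data identity is a one-liner to verify, assuming the 1/N covariance convention (i.e. `bias=True`; with the usual 1/(N-1) convention the factor is N-1 instead):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 50, 4
A = rng.standard_normal((N, M))
A = A - A.mean(axis=0)        # center every column (variable)

gram = A.T @ A                # M x M Gram matrix
cov = np.cov(A, rowvar=False, bias=True)   # 1/N covariance

print(np.allclose(gram, N * cov))  # True
```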
How can I project original observations $x_i$ onto new reduced space and vice versa?
If applying SVD on $A$ for PCA, the projection is $u_i S$; if applying eigendecomposition to the covariance matrix $\Sigma$, with $V$ the eigenvectors of $\Sigma$, it is $x_i V$.
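The two routes give identical scores, which NumPy confirms directly (random centered data for illustration; `Vt.T` plays the role of $V$):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 3))
X = X - X.mean(axis=0)                 # centered data matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)

scores_svd = U * s        # row i is u_i scaled by the singular values (u_i S)
scores_proj = X @ Vt.T    # row i is x_i projected on the eigenvectors V (x_i V)

print(np.allclose(scores_svd, scores_proj))  # True
```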
p-value as a distance?
A specific case, where p-values generated from $\chi^2$ tests over frequency tables were used as similarities and multidimensional scaling was applied, is in this paper:
http://www.biomedcentral.com/content/pdf/1748-7188-1-10.pdf
p-value as a distance?
If all the "true distances" are 0, then the p-values will follow a uniform distribution and would just be random, incorrect distances.
If the true distances are not 0, then you still have scaling issues where a test statistic may be more meaningful. P-values of 0.9 and 0.6 are not very different in interpretation, while p-values of 0.06 and 0.01 are quite different in interpretation, but the MDS algorithms would put more distance between the former pair than the latter.
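The null case is easy to simulate. A hedged Python sketch, assuming each "distance test" under the null reduces to a standard normal statistic $Z$ with two-sided p-value $2(1-\Phi(|Z|))$:

```python
import random
from statistics import NormalDist, mean

random.seed(42)
nd = NormalDist()

# If every true distance is 0, the test statistic is pure noise and
# the two-sided p-value is Uniform(0, 1): random, meaningless "distances".
pvals = [2 * (1 - nd.cdf(abs(random.gauss(0.0, 1.0))))
         for _ in range(10_000)]

print(round(mean(pvals), 2))  # roughly 0.5, as a uniform should average
```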
You should also consider power: you may have 2 groups that have a very small distance between them, but large sample sizes, so you get a small p-value; then another pair with a large difference between them, but due to small sample size (low power) you get a larger p-value.