idx | question | answer
|---|---|---|
27,801 | How to deal with quasi-complete separation in a logistic GLMM? | I am afraid there's a typo in your title: you should not attempt to fit mixed models, let alone nonlinear mixed models, with just 30 clusters. Not unless you believe you can fit a normal distribution to 30 points obstructed by measurement error, nonlinearities, and nearly complete separation (aka perfect prediction).
What I would do here is to run this as a regular logistic regression with Firth's correction:
library(logistf)
mf <- logistf(response ~ type * p.validity * counterexamples + as.factor(code),
              data = d.binom)
Firth's correction consists of adding a penalty to the likelihood, and is a form of shrinkage. In Bayesian terms, the resulting estimates are the posterior modes of the model with a Jeffreys prior. In frequentist terms, the penalty is the determinant of the information matrix corresponding to a single observation, and hence disappears asymptotically.
27,802 | How to deal with quasi-complete separation in a logistic GLMM? | You can use a Bayesian maximum a posteriori approach with a weak prior on the fixed effects to get approximately the same effect. In particular, the blme package for R (which is a thin wrapper around the lme4 package) does this, if you specify priors for the fixed effects as in the example here (search for "complete separation"):
cmod_blme_L2 <- bglmer(predation ~ ttt + (1|block), data = newdat,
                       family = binomial,
                       fixef.prior = normal(cov = diag(9,4)))
This example is from an experiment where ttt is a categorical fixed effect with 4 levels, so the $\beta$ vector will have length 4. The specified prior variance-covariance matrix is $\Sigma = 9 I$, i.e. the fixed effect parameters have independent $N(\mu=0,\sigma^2=9)$ (or $\sigma$, i.e. standard deviation, $=3$) priors. This works pretty well, although it's not identical to Firth correction (since Firth corresponds to a Jeffreys prior, which is not quite the same).
The linked example shows you can also do it with the MCMCglmm package, if you want to go full-Bayesian ...
27,803 | What model can be used when the constant variance assumption is violated? | There are a number of modelling options to account for a non-constant variance, for example ARCH (and GARCH, and their many extensions) or stochastic volatility models.
An ARCH model extends ARMA models with an additional time series equation for the squared error term. These models tend to be pretty easy to estimate (the fGarch R package, for example).
SV models extend ARMA models with an additional time series equation (usually an AR(1)) for the log of the time-dependent variance. I have found these models are best estimated using Bayesian methods (OpenBUGS has worked well for me in the past).
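To make the ARCH idea concrete, here is a minimal, hypothetical Python sketch (not the fGarch estimator): the conditional variance follows $\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2$, so large shocks beget large subsequent variance.

```python
import random

def simulate_arch1(n, omega=0.2, alpha=0.7, seed=0):
    """Simulate an ARCH(1) series: sigma_t^2 = omega + alpha * eps_{t-1}^2."""
    rng = random.Random(seed)
    eps = []
    prev = 0.0
    for _ in range(n):
        sigma2 = omega + alpha * prev ** 2        # time-varying conditional variance
        prev = (sigma2 ** 0.5) * rng.gauss(0, 1)  # shock scaled by sigma_t
        eps.append(prev)
    return eps

series = simulate_arch1(1000)
```

The simulated series is white noise in the mean but shows volatility clustering; in practice the parameters $\omega$ and $\alpha$ would be estimated from data, e.g. with fGarch's garchFit in R.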
27,804 | What model can be used when the constant variance assumption is violated? | You can fit an ARIMA model, but first you need to stabilize the variance by applying a suitable transformation, such as the Box-Cox transformation. This is done in the book Time Series Analysis: With Applications in R (page 99), where a Box-Cox transformation is applied before modelling. Check this link: Box-Jenkins modelling
Another reference is page 169, Introduction to Time Series and Forecasting, Brockwell and Davis, “Once the data have been transformed (e.g., by some combination of Box–Cox and differencing transformations or by removal of trend and seasonal components) to the point where the transformed series X_t can potentially be fitted by a zero-mean ARMA model, we are faced with the problem of selecting appropriate values for the orders p and q.”
Therefore, you need to stabilize the variance prior to fitting the ARIMA model.
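For reference, here is a small Python sketch of the Box-Cox family itself (toy values, not taken from either book): $y^{(\lambda)} = (y^\lambda - 1)/\lambda$ for $\lambda \neq 0$, and $\log y$ at $\lambda = 0$, so the family contains the log transform as a continuous limit.

```python
import math

def box_cox(y, lam):
    """Box-Cox transform of a positive series; lam = 0 gives the log transform."""
    if lam == 0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1) / lam for v in y]

y = [1.0, 2.0, 4.0]
shifted = box_cox(y, 1.0)   # lam = 1 is just a shift: y - 1
logs = box_cox(y, 0.0)      # lam = 0 is the log transform
```

As lam approaches 0 the general formula converges to the log, which is why the family is treated as continuous in lambda.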
27,805 | What model can be used when the constant variance assumption is violated? | I would first ask why the residuals from an ARIMA model don't have constant variance before I would abandon the approach. Do the residuals themselve exhibit no correlation structure? If they do maybe some moving average terms need to be incorporated into the model.
But now let us suppose that the residuals do not appear to have any autocorrelation structure. Then in what way is the variance changing with time (increasing, decreasing, or fluctuating up and down)? The way the variance is changing may be a clue to what is wrong with the existing model. Perhaps there are covariates that are cross-correlated with this time series. In that case the covariates could be added to the model. The residuals may then no longer exhibit nonconstant variance.
You may say that if the series is cross-correlated with a covariate, that should show up in the autocorrelation of the residuals. But that would not be the case if the correlation is mostly at lag 0.
If neither the addition of moving average terms nor the introduction of covariates helps solve the problem, you could perhaps consider identifying a time-varying function for the residual variance based on a few parameters. Then that relationship could be incorporated in the likelihood function in order to modify the model estimates.
27,806 | How can I represent R squared in matrix form? | We have
$$\begin{align*} R^2 = 1 - \frac{\sum{e_i^2}}{\sum{(y_i - \bar{y})^2}} = 1 - \frac{e^\prime e}{\tilde{y}^\prime\tilde{y}}, \end{align*}$$
where $\tilde{y}$ is a vector $y$ demeaned.
Recall that $\hat{\beta} = (X^\prime X)^{-1} X^\prime y$, implying that $e= y - X\hat{\beta} = y - X(X^\prime X)^{-1}X^\prime y$. Regression on a vector of 1s, written as $l$, gives the mean of $y$ as the predicted value and residuals from that model produce demeaned $y$ values; $\tilde{y} = y - \bar{y} = y - l(l^\prime l)^{-1}l^\prime y$.
Let $H = X(X^\prime X)^{-1}X^\prime$ and let $M = l(l^\prime l)^{-1}l^\prime$, where $l$ is a vector of 1's. Also, let $I$ be an identity matrix of the requisite size. Then we have
$$\begin{align*} R^2 &= 1- \frac{e^\prime e}{\tilde{y}^\prime\tilde{y}} \\
&= 1 - \frac{y^\prime(I - H)^\prime(I-H)y}{y^\prime (I - M)^\prime(I-M)y} \\
&= 1 - \frac{y^\prime(I-H)y}{y^\prime (I-M)y},
\end{align*}$$
where the second line comes from the fact that $H$ and $M$ (and $I$) are idempotent.
In the weighted case, let $\Omega$ be the weighting matrix used in the OLS objective function, $e^\prime \Omega e$. Additionally, let $H_w = \Omega^{1/2} X (X^\prime \Omega X)^{-1} X^\prime \Omega^{1/2}$ and $M_w = \Omega^{1/2} l (l^\prime \Omega l)^{-1} l^\prime \Omega^{1/2}$. Then,
$$\begin{align*} R^2 &= 1 - \frac{y^\prime \Omega^{1/2} (I-H_w) \Omega^{1/2} y}{y^\prime \Omega^{1/2}(I-M_w) \Omega^{1/2}y}.
\end{align*}$$
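As a numeric sanity check, here is a small pure-Python sketch (with made-up data) confirming that the quadratic-form expression $1 - y^\prime(I-H)y / y^\prime(I-M)y$ reproduces the sum-of-squares definition of $R^2$:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(M):
    (a, b), (c, d) = M                       # inverse of a 2x2 matrix
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

x = [1.0, 2.0, 3.0, 4.0, 5.0]                # made-up, roughly linear data
y = [1.2, 1.9, 3.2, 3.8, 5.1]
n = len(x)

X = [[1.0, xi] for xi in x]                  # design matrix with intercept
Xt = [list(r) for r in zip(*X)]
H = matmul(matmul(X, inv2(matmul(Xt, X))), Xt)   # hat matrix X(X'X)^{-1}X'

fitted = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
e = [y[i] - fitted[i] for i in range(n)]     # residuals (I - H)y

ybar = sum(y) / n
r2_sums = 1 - sum(v * v for v in e) / sum((v - ybar) ** 2 for v in y)

# quadratic forms: y'(I-H)y = y'e by idempotence, y'(I-M)y = y'(y - ybar)
r2_quad = 1 - sum(y[i] * e[i] for i in range(n)) / sum(y[i] * (y[i] - ybar) for i in range(n))
```

The two quantities agree to floating-point precision, which is exactly the idempotence argument in the derivation above.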
27,807 | How can I represent R squared in matrix form? | You can write the coefficient-of-determination as a simple quadratic form of the correlation values between the individual variables (see this answer for details). Consider a
multiple linear regression with $m$ explanatory vectors and an intercept term. Let $r_i = \mathbb{Corr}(\mathbf{y},\mathbf{x}_i)$ and $r_{i,j} = \mathbb{Corr}(\mathbf{x}_i,\mathbf{x}_j)$ and define:
$$\boldsymbol{r}_{\mathbf{y},\mathbf{x}} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_m \end{bmatrix} \quad \quad \quad \boldsymbol{r}_{\mathbf{x},\mathbf{x}} = \begin{bmatrix}
r_{1,1} & r_{1,2} & \cdots & r_{1,m} \\
r_{2,1} & r_{2,2} & \cdots & r_{2,m} \\
\vdots & \vdots & \ddots & \vdots \\
r_{m,1} & r_{m,2} & \cdots & r_{m,m} \\ \end{bmatrix}.$$
With a bit of linear algebra it can be shown that:
$$R^2 = \boldsymbol{r}_{\mathbf{y},\mathbf{x}}^\text{T} \boldsymbol{r}_{\mathbf{x},\mathbf{x}}^{-1} \boldsymbol{r}_{\mathbf{y},\mathbf{x}}.$$
The square-root of the coefficient-of-determination gives the multiple correlation coefficient, which is a multivariate extension of the absolute correlation.
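A quick pure-Python check of this identity with hypothetical data and $m = 2$ explanatory vectors, writing out the $2\times 2$ inverse of $\boldsymbol{r}_{\mathbf{x},\mathbf{x}}$ by hand:

```python
def mean(v):
    return sum(v) / len(v)

def sxy(a, b):
    ma, mb = mean(a), mean(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b))

def corr(a, b):
    return sxy(a, b) / (sxy(a, a) * sxy(b, b)) ** 0.5

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical data
x2 = [2.0, 1.0, 4.0, 3.0, 6.0]
y  = [2.1, 2.9, 5.2, 5.8, 8.9]

# R^2 = r' R_xx^{-1} r, with the 2x2 inverse written out explicitly
r1, r2, r12 = corr(y, x1), corr(y, x2), corr(x1, x2)
R2_quad = (r1 ** 2 - 2 * r1 * r2 * r12 + r2 ** 2) / (1 - r12 ** 2)

# direct R^2 from OLS on centered data: beta = S_xx^{-1} S_xy, R^2 = beta' S_xy / S_yy
S11, S22, S12 = sxy(x1, x1), sxy(x2, x2), sxy(x1, x2)
S1y, S2y, Syy = sxy(x1, y), sxy(x2, y), sxy(y, y)
d = S11 * S22 - S12 ** 2
b1 = (S22 * S1y - S12 * S2y) / d
b2 = (S11 * S2y - S12 * S1y) / d
R2_direct = (b1 * S1y + b2 * S2y) / Syy      # = 1 - SSE/SST
```

The identity holds for any non-degenerate data, so the two computations agree to floating-point precision.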
27,808 | Approaches when learning from huge datasets? | Stream Mining is one answer. It is also called:
Data Stream Mining
Online Learning
Massive Online Learning
Instead of putting the whole data set in memory and training on it, these methods put chunks of it in memory and train classifiers/clusterers on the stream of chunks. See the following links.
Data stream mining on Wikipedia.
MOA: Massive Online Analysis
Article
A tool, written in Java, able to use Weka algorithms
Book
Mining of Massive Datasets, a book from Stanford University. It uses MapReduce as a tool.
Videos on videolectures.net; similar videos exist on that site.
State of the Art in Data Stream Mining
Mining Massive Data Sets
27,809 | Approaches when learning from huge datasets? | Instead of using just one subset, you could use multiple subsets as in mini-batch learning (e.g. stochastic gradient descent). This way you would still make use of all your data.
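A toy Python sketch of the idea (hypothetical data, plain lists): each update touches only a small batch, yet over many epochs every observation contributes.

```python
import random

def sgd_linear(xs, ys, batch=2, lr=0.05, epochs=500, seed=0):
    """Fit y ~ w*x + b by mini-batch stochastic gradient descent."""
    rng = random.Random(seed)
    w = b = 0.0
    idx = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(idx)                       # fresh pass over all the data
        for start in range(0, len(idx), batch):
            chunk = idx[start:start + batch]   # one mini-batch
            gw = gb = 0.0
            for i in chunk:
                err = (w * xs[i] + b) - ys[i]
                gw += err * xs[i]
                gb += err
            w -= lr * gw / len(chunk)          # average gradient over the batch
            b -= lr * gb / len(chunk)
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]                 # exactly y = 2x + 1
w, b = sgd_linear(xs, ys)
```

For huge data sets the inner loop would read each mini-batch from disk rather than hold everything in memory, which is the point of the approach.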
27,810 | Approaches when learning from huge datasets? | Ensembles like bagging or blending -- no data is wasted, the problem automagically becomes trivially parallel and there might be significant accuracy/robustness gains.
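A toy Python illustration of bagging with weak "stump" base learners (hypothetical step-shaped data): each model trains on its own bootstrap resample, the fits are embarrassingly parallel, and predictions are averaged.

```python
import random

def bagged_predict(data, x_new, n_models=25, seed=0):
    """Average the predictions of stumps, each fit on a bootstrap resample."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]        # bootstrap resample
        split = sum(xv for xv, _ in sample) / len(sample)  # stump splits at mean x
        side = [yv for xv, yv in sample if (xv >= split) == (x_new >= split)]
        preds.append(sum(side) / len(side) if side else 0.0)
    return sum(preds) / len(preds)

# step-shaped data: y = 0 below x = 5, y = 10 from x = 5 on
data = [(float(i), 0.0) for i in range(5)] + [(float(i), 10.0) for i in range(5, 10)]
```

With big data each bootstrap sample (or disjoint subset) can be fit on a different machine, and only the predictions need to be combined.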
27,811 | Detecting steps in time series | It appears you are looking for spikes within intervals of relative quiet. "Relative" means compared to typical nearby values, which suggests smoothing the series. A robust smooth is desirable precisely because it should not be influenced by a few local spikes. "Quiet" means variation around that smooth is small. Again, a robust estimate of local variation is desirable. Finally, a "spike" would be a large residual as a multiple of the local variation.
To implement this recipe, we need to choose (a) how close "nearby" means, (b) a recipe for smoothing, and (c) a recipe for finding local variation. You may have to experiment with (a), so let's make it an easily controllable parameter. Good, readily available choices for (b) and (c) are Lowess and the IQR, respectively. Here is an R implementation:
library(zoo) # For the local (moving window) IQR
f <- function(x, width=7) { # width = size of moving window in time steps
w <- width / length(x)
y <- lowess(x, f=w) # The smooth
r <- zoo(x - y$y) # Its residuals, structured for the next step
z <- rollapply(r, width, IQR) # The running estimate of variability
r/z # The diagnostic series: residuals scaled by IQRs
}
As an example of its use, consider these simulated data where two successive spikes are added to a quiet period (two in a row should be harder to detect than one isolated spike):
> x <- c(rnorm(192, mean=0, sd=1), rnorm(96, mean=0, sd=0.1), rnorm(192, mean=0, sd=1))
> x[240:241] <- c(1,-1) # Add two successive spikes
> plot(x)
Here is the diagnostic plot:
> u <- f(x)
> plot(u)
Despite all the noise in the original data, this plot beautifully detects the (relatively small) spikes in the center. Automate the detection by scanning f(x) for largish values (larger than about 5 in absolute value: experiment to see what works best with sample data).
> spikes <- u[abs(u) >= 5]
240 241 273
9.274959 -9.586756 6.319956
The spurious detection at time 273 was a random local outlier. You can refine the test to exclude (most) such spurious values by modifying f to look for simultaneously high values of the diagnostic r/z and low values of the running IQR, z. However, although the diagnostic has a universal (unitless) scale and interpretation, the meaning of a "low" IQR depends on the units of the data and has to be determined from experience.
27,812 | Detecting steps in time series | Here is a two cents suggestion.
Denote $X_t$ the differenced series. Given $\Delta > 0$ and a point $t$, define the local mean absolute value over a window of width $2\Delta+1$:
$$a(\Delta,t) = {1\over 2\Delta + 1} \sum_{s=t-\Delta}^{t+\Delta} |X_s|.$$
For, say, $\Delta = 50$, the value of $a(\Delta,t)$ characterizes the off/on zones by low/high values.
An anomalous step is a point $t$ where $|X_t| > \alpha a(\Delta,t)$ – you’ll need to do some tuning on $\alpha, \Delta$ to detect what you want, and avoid false positive when the machine turns on. I’d try first with $\Delta = 50$ and $\alpha = 4$.
Alternatively, you can look at points $t$ where $a(\delta,t) > \alpha a(\Delta,t)$ for a $\delta\ll\Delta$ (e.g. $\delta = 10$, $\Delta = 100$), which may help the fine tuning (in that case, you would take a smaller value for $\alpha$).
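A minimal Python sketch of the first rule (with the window truncated at the series edges, and hypothetical toy data):

```python
def local_mean_abs(x, delta, t):
    """a(delta, t): average of |x_s| over the window s = t-delta .. t+delta."""
    lo, hi = max(0, t - delta), min(len(x) - 1, t + delta)
    return sum(abs(v) for v in x[lo:hi + 1]) / (hi - lo + 1)

def flag_spikes(x, delta=50, alpha=4.0):
    """Flag points t where |x_t| > alpha * a(delta, t)."""
    return [t for t in range(len(x)) if abs(x[t]) > alpha * local_mean_abs(x, delta, t)]

x = [0.1] * 200      # a quiet differenced series ...
x[100] = 5.0         # ... with one anomalous step
```

Here `flag_spikes(x)` returns `[100]`: the anomalous step dominates its own window average, while ordinary points do not.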
Denote $X_t$ the differenced series. Given $\Delta > 0$ and a point $t$, define
$$a(\Delta,t) = {1\over 2\Delta + 1} |X_t|.$$
For let’s says $\Delta = 50$, the value of | Detecting steps in time series
Here is a two cents suggestion.
Denote $X_t$ the differenced series. Given $\Delta > 0$ and a point $t$, define
$$a(\Delta,t) = {1\over 2\Delta + 1} |X_t|.$$
For let’s says $\Delta = 50$, the value of $a(\Delta,t)$ characterizes the off/on zones by low/high values.
An anomalous step is a point $t$ where $|X_t| > \alpha a(\Delta,t)$ – you’ll need to do some tuning on $\alpha, \Delta$ to detect what you want, and avoid false positive when the machine turns on. I’d try first with $\Delta = 50$ and $\alpha = 4$.
Alternatively, you can look at points $t$ where $a(\delta,t) > \alpha a(\Delta,t)$ for a $\delta\ll\Delta$ (eg $\delta = 10$, $\Delta = 100$), that may help the fine tuning (in that case, you would take a smaller value for $\alpha$). | Detecting steps in time series
Here is a two cents suggestion.
Denote $X_t$ the differenced series. Given $\Delta > 0$ and a point $t$, define
$$a(\Delta,t) = {1\over 2\Delta + 1} |X_t|.$$
For let’s says $\Delta = 50$, the value of |
27,813 | Detecting steps in time series | If you actually know the machine state - on or off, this is an important input and can be solved as a regression model, or more specifically a control model.
I don't know much about strain models, but the curves remind me of some of the physical models for circuits, like the Hodgkin-Huxley equations. More generally, you can estimate a first-order difference equation by regressing $Y_i$ on $X_i$, $X_{i-1}$, and $Y_{i-1}$, where $X_i$ is the machine state at time $i$ (on, off) and $Y_i$ is the amplitude (or whatever is on your y-axis) in the graph of the time series.
With a known physical model, you can calculate the residuals, and easily use clustering or other methods to identify these abnormal periods. For instance, a simple boxcar filter can be used to identify segments of time where the average residual exceeds a certain threshold, identified using classifier development techniques.
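The boxcar step could look like this in Python (hypothetical residuals and threshold): slide a fixed-width window over the residuals and flag windows whose mean absolute residual exceeds the threshold.

```python
def boxcar_flags(residuals, width=5, threshold=1.0):
    """Return start indices of windows whose mean |residual| exceeds threshold."""
    flags = []
    for i in range(len(residuals) - width + 1):
        window = residuals[i:i + width]
        if sum(abs(r) for r in window) / width > threshold:
            flags.append(i)
    return flags

# small residuals except for an abnormal stretch at indices 10-14
res = [0.1] * 10 + [2.0] * 5 + [0.1] * 10
```

In practice the threshold would be chosen from training data (e.g. via a classifier or an ROC-style trade-off) rather than fixed by hand.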
27,814 | Measure of spread of a multivariate normal distribution | What about the determinant of the sample variance-covariance matrix: a measure of the
squared volume enclosed by the matrix within the space of dimension of the measurement vector. Also, an often used scale invariant version of that measure is the determinant of the sample correlation matrix: the volume of the space occupied within the dimensions of the measurement vector.
27,815 | Measure of spread of a multivariate normal distribution | I would go with either trace or determinant with a preference towards trace depending on the application. They're both good in that they're invariant to representation and have clear geometric meanings.
I think there is a good argument to be made for Trace over Determinant.
The determinant effectively measures the volume of the uncertainty ellipsoid. If there is any redundancy in your system however then the covariance will be near-singular (the ellipsoid is very thin in one direction) and then the determinant/volume will be near-zero even if there is a lot of uncertainty/spread in the other directions. In a moderate to high-dimensional setting this occurs very frequently
The trace is geometrically the sum of the lengths of the axes and is more robust to this sort of situation. It will have a non-zero value even if some of the directions are certain.
Additionally, the trace is generally much easier to compute.
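The near-singular case described above is easy to see numerically; the covariance matrix below is a made-up example.

```python
import numpy as np

# A 3-D covariance that is wide in two directions but almost flat in the third.
# The redundancy makes the uncertainty ellipsoid thin, so its volume (determinant)
# collapses even though the total spread along the axes (trace) stays large.
cov = np.diag([4.0, 4.0, 1e-8])

det = np.linalg.det(cov)   # 4 * 4 * 1e-8 = 1.6e-7: "no spread" by volume
tr = np.trace(cov)         # ~8: still reports the spread in the wide directions

print(det, tr)
```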
27,816 | Measure of spread of a multivariate normal distribution | Another (closely related) quantity is the entropy of the distribution: for a multivariate Gaussian this is the log of the determinant of the covariance matrix, or
$\frac{1}{2} \log |(2\pi e)\Lambda|$
where $\Lambda$ is the covariance matrix. The advantage of this choice is that it can be compared to the "spread" of points under other (e.g., non-Gaussian) distributions.
(If we want to get technical, this is the differential entropy of a Gaussian).
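The formula is easy to check numerically. The helper below is a small sketch (the function name is mine, not from the answer); for a one-dimensional Gaussian it reduces to the familiar $\tfrac{1}{2}\log(2\pi e\sigma^2)$, and for independent components the entropies add.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy 0.5 * log det((2*pi*e) * cov) of a Gaussian, in nats."""
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

# 1-D sanity check against the textbook formula 0.5 * log(2*pi*e*sigma^2):
sigma2 = 2.5
h1 = gaussian_entropy([[sigma2]])
print(h1, 0.5 * np.log(2 * np.pi * np.e * sigma2))

# For independent components, entropies add:
h2 = gaussian_entropy(np.diag([1.0, 4.0]))
print(h2, gaussian_entropy([[1.0]]) + gaussian_entropy([[4.0]]))
```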
27,817 | Capitalization of n for sample size | There is actually a difference in some textbooks: $N$ generally means population size and $n$ sample size.
However, this is not always the case. You should check in your textbook.
:)
27,818 | Capitalization of n for sample size | In terms of ANOVA small n (usually subscripted) could mean the sample size of a particular group while capital N might mean the total sample size. It depends on context.
27,819 | Capitalization of n for sample size | small letter n refers to the sample size whilst the capital letter N refers to the population size of the test
27,820 | For a classification problem if class variable has unequal distribution which technique we should use? | Your class sample sizes do not seem so unbalanced since you have 30% of observations in your minority class. Logistic regression should be well performing in your case. Depending on the number of predictors that enter your model, you may consider some kind of penalization for parameters estimation, like ridge (L2) or lasso (L1). For an overview of problems with very unbalanced class, see Cramer (1999), The Statistician, 48: 85-94 (PDF).
I am not familiar with credit scoring techniques, but I found some papers that suggest that you could use SVM with weighted classes, e.g. Support Vector Machines for Credit Scoring: Extension to Non Standard Cases. As an alternative, you can look at boosting methods with CART, or Random Forests (in the latter case, it is possible to adapt the sampling strategy so that each class is represented when constructing the classification trees). The paper by Novak and LaDue discusses the pros and cons of GLM vs Recursive partitioning. I also found this article, Scorecard construction with unbalanced class sizes by Hand and Vinciotti.
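As a minimal numpy sketch of the L2 (ridge) penalty mentioned above, fitted by plain gradient descent; the synthetic data, learning rate, and penalty values are illustrative assumptions, not a production implementation.

```python
import numpy as np

def ridge_logistic(X, y, lam=1.0, lr=0.1, iters=2000):
    """L2-penalised logistic regression fitted by plain gradient descent.
    The penalty lam * ||w||^2 / 2 is applied to the non-intercept weights."""
    n, p = X.shape
    Xb = np.column_stack([np.ones(n), X])
    w = np.zeros(p + 1)
    for _ in range(iters):
        phat = 1.0 / (1.0 + np.exp(-Xb @ w))
        grad = Xb.T @ (phat - y) / n
        grad[1:] += lam * w[1:] / n       # shrinkage on the slopes only
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = (rng.random(400) < 1 / (1 + np.exp(-(X @ [1.0, -2.0, 0.5])))).astype(float)

w_light = ridge_logistic(X, y, lam=0.01)
w_heavy = ridge_logistic(X, y, lam=100.0)
# A heavier penalty shrinks the coefficient vector towards zero.
print(np.linalg.norm(w_light[1:]), np.linalg.norm(w_heavy[1:]))
```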
27,821 | For a classification problem if class variable has unequal distribution which technique we should use? | A popular approach towards solving class imbalance problems is to bias the classifier so that it pays more attention to the positive instances. This can be done, for instance, by increasing the penalty associated with misclassifying the positive class relative to the negative class. Another approach is to preprocess the data by oversampling the majority class or undersampling the minority class in order to create a balanced dataset.
However, in your case, class imbalance doesn't seem to be a problem. Perhaps it is a matter of parameter tuning, since finding the optimal parameters for an SVM classifier can be a rather tedious process. There are, for example, two parameters in an RBF kernel: $C$ and $\gamma$. It is not known beforehand which $C$ and $\gamma$ are best for a given problem; consequently some kind of model selection (parameter search) must be done.
In the data preprocessing phase, remember that SVM requires that each data instance is represented as a vector of real numbers. Hence, if there are categorical attributes, it's recommended to convert them into numeric data, using m numbers to represent an m-category attribute (or replacing it with m new binary variables).
Also, scaling the variables before applying SVM is crucial, in order to avoid attributes in greater numeric ranges dominating those in smaller numeric ranges.
Check out this paper.
If you're working in R, check out the tune function (package e1071) to tune hyperparameters using a grid search over supplied parameter ranges. Then, using plot.tune, you can see visually which set of values gives the smaller error rate.
There is a shortcut around the time-consuming parameter search. There is an R package called "svmpath" which computes the entire regularization path for a 2-class SVM classifier in one go. Here is a link to the paper that describes what it's doing.
P.S. You may also find this paper interesting: Obtaining calibrated probability estimates
27,822 | For a classification problem if class variable has unequal distribution which technique we should use? | I would advise using a different value of the regularisation parameter C for examples of the positive class and examples of the negative class (many SVM packages support this, and in any case it is easily implemented). Then use e.g. cross-validation to find good values of the two regularisation parameters.
It can be shown that this is asymptotically equivalent to re-sampling the data in a ratio determined by C+ and C- (so there is no advantage in re-sampling rather than re-weighting; they come to the same thing in the end, and weights can be continuous, rather than discrete, so it gives finer control).
Don't simply choose C+ and C- to give a 50-50 weighting to positive and negative patterns though, as the strength of the effect of the "imbalanced classes" problem will vary from dataset to dataset, so the strength of the optimal re-weighting cannot be determined a-priori.
Also remember that false-positive and false-negative costs may be different, and the problem may resolve itself if these are included in determining C+ and C-.
It is also worth bearing in mind that for some problems the Bayes optimal decision rule will assign all patterns to a single class and ignore the other, so it isn't necessarily a bad thing - it may just mean that the density of patterns of one class is everywhere below the density of patterns of the other class.
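The equivalence between re-weighting and re-sampling can be checked directly for an integer weight. The sketch below is not from the answer and uses logistic regression rather than an SVM for brevity: up-weighting the positive class by 3 gives exactly the same objective as duplicating each positive observation 3 times.

```python
import numpy as np

def fit_logistic(X, y, w=None, lr=0.5, iters=3000):
    """Unpenalised logistic regression by gradient descent, with optional
    per-observation weights (so per-class costs C+ / C- are just weights)."""
    n, p = X.shape
    if w is None:
        w = np.ones(n)
    Xb = np.column_stack([np.ones(n), X])
    beta = np.zeros(p + 1)
    for _ in range(iters):
        phat = 1.0 / (1.0 + np.exp(-Xb @ beta))
        beta -= lr * Xb.T @ (w * (phat - y)) / w.sum()
    return beta

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (rng.random(300) < 1 / (1 + np.exp(-(X @ [1.5, -1.0] - 1.0)))).astype(float)

# Up-weight the positive class by an integer factor of 3 ...
weights = np.where(y == 1, 3.0, 1.0)
b_weighted = fit_logistic(X, y, w=weights)

# ... which has exactly the same likelihood as duplicating each positive 3 times.
idx = np.concatenate([np.arange(300), np.flatnonzero(y == 1), np.flatnonzero(y == 1)])
b_resampled = fit_logistic(X[idx], y[idx])

print(np.max(np.abs(b_weighted - b_resampled)))  # ~ 0
```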
27,823 | How can I proceed when causal directions are not that clear? An example is provided | First, I think it is good that you are using a DAG because it requires careful thought about causality, and this is often at the heart of modelling.
adjusting for everything, age and sex, and even if they may partially act as mediators (unknown direction or bidirectional arrows)?
One approach to this is to estimate the net effect for each variable that could either be a confounder or a mediator, and then adjust as appropriate. How you estimate the net effect is another question of course. You could also just make an assumption (and state the assumption in the paper). Another idea is to fit several models where the variables are treated as either mediators or confounders and report the results of all. Since you only have 2 variables, Sex and Age, this seems like a reasonable approach; it would mean fitting 4 models.
should I show a causal graph with undirected arrows (how should I name it then)?
I would not do this, as it makes the diagram ambiguous.
should I show a causal graph with bidirectional arrows (still named DAG?)
I would not do this either, if you are fitting 4 models, as it would be inconsistent with the modelling. Also, you can't call it a DAG if it has bidirectional arcs (a DAG is directed by definition).
I would include 4 DAGs.
am I right that undirected and bidirected arrows both make sex and age confounders due to opening a back-door path?
Not really, if you are following DAG theory, because the presence of an arc with no direction means that the graph is not directed and therefore is not a DAG.
27,824 | How can I proceed when causal directions are not that clear? An example is provided | If you are not sure about the direction of the arrow, this is likely because you suspect (implicitly or explicitly) some potential confounding between the two variables. Hence, you should draw all plausible graphs and derive identifying assumptions for each. For some you might reach the conclusion that your causal quantity of interest is not identifiable vis a vis available data, for others it might be. With the DAGs you make clear under which causal assumptions a causal interpretation of your empirical estimand is internally consistent.
Generally speaking, the causal interpretation of an empirical estimand is based on the underlying causal model. That is, based on likely untestable assumptions. The DAGs are a tool to make this clear.
Bi-directed arrows are used in DAGs to indicate that there are unobserved back-door paths between two variables. You could also include this unobserved confounder explicitly, labelling it for instance $U$. This is just notational convention. However, assuming a bi-directed arc (or an unobserved confounder) changes, of course, the implications for identification.
27,825 | Poisson Gamma Mixture = Negative Binomially Distributed? | There are various ways a negative binomial distribution can come about. One of them, as Robert Long comments, is as a Poisson distribution whose parameter is itself Gamma distributed. The Wikipedia page gives the derivation of this result. So this covers parts (i) and (ii) of your model.
This is an example of compound distributions, which are often also called "mixtures" (e.g. a "Poisson-Gamma mixture" in the present case). This can be confusing, since a "mixture" has at least one related but distinct meaning in statistics.
27,826 | Poisson Gamma Mixture = Negative Binomially Distributed? | The negative binomial distribution is the Poisson-gamma mixture. Specifically, it can be established that:
$$\text{NegBin} \bigg( t \bigg| n, \frac{1}{\theta+1} \bigg)
= \int \limits_0^\infty \text{Pois}(t|\lambda) \ \text{Gamma}(\lambda|n, \theta) \ d \lambda.$$
(In this statement the parameter $\theta$ is the rate parameter of the gamma distribution.) This is a useful algebraic exercise to work through if you are new to these distributions.
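A quick Monte-Carlo check of the identity. Note two parameterisation details: numpy's gamma sampler takes the scale $1/\theta$ rather than the rate $\theta$, and its negative binomial sampler takes the success probability, which here is $\theta/(\theta+1)$ (the complement of the $1/(\theta+1)$ in the formula above).

```python
import numpy as np

rng = np.random.default_rng(3)
n, theta = 3.0, 0.5        # Gamma shape n and *rate* theta, as in the answer
N = 200_000

# Hierarchical draw: lambda ~ Gamma(n, rate=theta), then t | lambda ~ Poisson(lambda).
lam = rng.gamma(shape=n, scale=1 / theta, size=N)
t = rng.poisson(lam)

# Direct negative binomial draws for comparison.
t_nb = rng.negative_binomial(n, theta / (theta + 1), size=N)

print(t.mean(), t_nb.mean())   # both ~ n/theta = 6
print(t.var(), t_nb.var())     # both ~ (n/theta) * (1 + 1/theta) = 18
```

The mean $n/\theta$ and variance $(n/\theta)(1 + 1/\theta)$ of both samples agree, illustrating that the marginal of the hierarchical model is the negative binomial.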
27,827 | Review linear regression | This is a comprehensive classic:
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).
27,828 | Review linear regression | I’m not sure if this answers your needs, but the best book to “get into” regression modeling that I can recommend is
Data Analysis Using Regression and Multilevel/Hierarchical Models
by Andrew Gelman and Jennifer Hill
It’s great to develop understanding of regression models and for learning how to apply them. It may not go that deep into the technical details that may come up in interviews, but I’d definitely recommend it as a first read before going into more technical texts.
27,829 | Review linear regression | An Introduction to Statistical Learning has an excellent review of simple and multiple linear regression, with about 60 pages dedicated to those topics.
27,830 | Review linear regression | I like Exegeses on Linear Models by W N Venables, old but I still find it provocative.
27,831 | Do all bounded probability distributions have a definite mean? | Note that the definition of bounded you're using in your question is non-standard. I would say that your distributions have compact support. In any case...
Here's a proof that the integral defining the mean exists.
Suppose that $X$ is a random variable with chopped off tails, like you specify. Take $f$ to be the density function of $X$ (we could work with the CDF instead if we wished to, which would give a slightly more general proof). Then by your assumption, there is some interval $[-A, A]$ outside of which, the function $f$ is identically zero. Within this interval, the density function is non-negative, by its usual properties.
The integral $\int_{-A}^{A} f(x) dx$ exists and is finite; indeed, it is equal to one. Therefore, we can bound:
$$ \int_{-A}^{A} \left| x f(x) \right| dx \leq \int_{-A}^{A} A f(x) dx = A \int_{-A}^{A} f(x) dx \leq A $$
So, the function $x f(x)$ is dominated by the integrable function $A f(x)$ on the interval $[-A, A]$. By the comparison test for integrability, it follows immediately that $x f(x)$ is integrable on $[-A, A]$, and the integral is finite (being bounded in absolute value by the integral of $A f(x)$, which is at most $A$).
Finally, since $f$ is zero outside of the specified interval, it's enough for us to observe that:
$$ \int_{-A}^A x f(x) dx = \int_{-\infty}^{\infty} x f(x) dx $$
to finish things off.
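A quick numeric sanity check of the argument (a Python sketch; the triangular density on $[-A, A]$ is just an arbitrary compactly supported example, and the midpoint-rule grid size is illustrative):

```python
A = 2.0

def f(x):
    # Triangular density on [-A, A]: non-negative, integrates to 1, zero outside.
    return max(0.0, (A - abs(x)) / A**2)

# Midpoint-rule integration over the support.
n = 200_000
dx = 2 * A / n
xs = [-A + (i + 0.5) * dx for i in range(n)]

total_mass = sum(f(x) for x in xs) * dx           # should be ~1
abs_moment = sum(abs(x) * f(x) for x in xs) * dx  # E|X|; the bound says <= A
mean = sum(x * f(x) for x in xs) * dx             # E[X]; 0 here by symmetry

print(round(total_mass, 4), round(abs_moment, 4), round(mean, 6))
```

The mean integral is finite (here $\mathbb{E}|X| = A/3$), exactly as the domination argument guarantees.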
27,832 | Do all bounded probability distributions have a definite mean? | It is true that all bounded random variables have a well-defined expectation. See https://kurser.math.su.se/pluginfile.php/9291/mod_resource/content/1/lecture-5e.pdf (pages $4$ and $5$).
With regard to unbounded random variables, the problem is a matter of improper integrals. It is indeed the behaviour of the density as the argument goes to $\pm \infty$ that causes the non-integrability of certain distributions (e.g. Cauchy distribution), as explained in the Explanation of undefined moments section of this article: https://en.wikipedia.org/wiki/Cauchy_distribution.
27,833 | What exactly is model instability due to multicollinearity? | What Is It?
Here is an example of this behavior. I'm going to write a function to simulate regressions and output their coefficients. We'll look at the coordinate pair of coefficients $(a_1,a_2)$ in the case of no collinearity and high collinearity. Here is some code:
library(tidyverse)
sim <- function(rho){
#Number of samples to draw
N = 50
#Make a covariance matrix
covar = matrix(c(1,rho, rho, 1), byrow = T, nrow = 2)
# Append a column of 1s to N draws from a 2-dimensional
# Gaussian
# With covariance matrix covar
X = cbind(rep(1,N),MASS::mvrnorm(N, mu = c(0,0),
Sigma = covar))
# True betas for our regression
betas = c(1,2,4)
# Make the outcome
y = X%*%betas + rnorm(N,0,1)
# Fit a linear model
model = lm(y ~ X[,2] + X[,3])
# Return a dataframe of the coefficients
return(tibble(a1 = coef(model)[2], a2 = coef(model)[3]))
}
#Run the function 1000 times and stack the results
zero_covar = rerun(1000, sim(0)) %>%
bind_rows
#Same as above, but the covariance in covar matrix
#is now non-zero
high_covar = rerun(1000, sim(0.95)) %>% bind_rows
#plot
zero_covar %>%
ggplot(aes(a1,a2)) +
geom_point(data = high_covar, color = 'red') +
geom_point()
Run that and you get a scatterplot of the 1000 simulated coefficient pairs for each scenario (black for no collinearity, red for high collinearity).
This simulation is supposed to simulate the sampling distribution of the coefficients. As we can see, in the case of no collinearity (black dots) the sampling distribution for the coefficients is very tight around the true value of (2,4). The blob is symmetric about this point.
In the case of high collinearity (red dots), the coefficients of the linear model can vary quite a lot! Instability in this case manifests as wildly different coefficient values given the same data generating process.
Why Is This Happening
Let's take a statistical perspective. The sampling distribution for the coefficients of a linear regression (with enough data) looks like
$$ \hat{\beta} \sim \mathcal{N}(\beta, \Sigma)
$$
The covariance matrix for the above is
$$ \Sigma = \sigma^{2}\left(X^{\prime} X\right)^{-1}
$$
Let's focus for a minute on $\left(X^{\prime} X\right)$. If $X$ has full rank, then $\left(X^{\prime} X\right)$ is a Gram Matrix, which has some special properties. One of those properties is that it has positive eigenvalues. That means we can decompose this matrix product according to eigenvalue decomposition.
$$\left(X^{\prime} X\right) = Q\Lambda Q^{-1}
$$
Suppose now one of the columns of $X$ is highly correlated with another column. Then the columns are nearly linearly dependent, so one of the eigenvalues of $X^{\prime}X$ will be close to 0. Inverting this product gives us
$$\left(X^{\prime} X\right)^{-1} = Q\Lambda^{-1} Q^{-1}
$$
Since $\Lambda$ is a diagonal matrix, $\Lambda^{-1}_{jj} = \frac{1}{\Lambda_{jj}}$. If one of the eigenvalues is really small, then one of the elements of $\Lambda^{-1}$ is really big, and so too is the covariance, leading to this instability in the coefficients.
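To see this concretely, here is a small numeric sketch (Python; the correlation values are made up for illustration). For two standardized predictors with correlation $\rho$, the Gram matrix is proportional to the correlation matrix, with eigenvalues $1+\rho$ and $1-\rho$, and the diagonal of its inverse is $1/(1-\rho^2)$ — the familiar variance inflation factor:

```python
# As rho -> 1, the smallest eigenvalue of the Gram matrix goes to 0 and the
# diagonal of its inverse (the variance inflation factor) blows up.
for rho in (0.0, 0.5, 0.95, 0.999):
    small_eig = 1 - rho          # smallest eigenvalue of [[1, rho], [rho, 1]]
    vif = 1.0 / (1.0 - rho**2)   # diagonal entry of the inverse
    print(f"rho={rho}: smallest eigenvalue={small_eig:.3f}, variance factor={vif:.1f}")
```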
I think I got that right, it has been a long time since I've done linear algebra.
27,834 | What exactly is model instability due to multicollinearity? | Consider an extreme case where the true process is $y(i)=ax_1(i)+b+e(i)$, and you plugged a clone variable $x_2(i)=x_1(i)$:
$$y(i)=a_1x_1(i)+a_2x_2(i)+b+e(i)$$
In this case you see that any pair of coefficients $a_1,a_2$ that satisfies $a_1+a_2=a$ would be a solution to the problem:
$$y(i)=(a_1+a_2)x_1(i)+b+e(i)\Longleftrightarrow y(i)=ax_1(i)+b+e(i)$$
Hence, your regression solver may return you any pair $(a_1,a-a_1)$ as a solution. There's your instability. Uncertainty about coefficients is infinite, you can't do any inference about $a_1$ or $a_2$.
So, in the case of perfect multicollinearity in OLS, instead of having a single unique solution, you get a continuum of solutions.
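A minimal Python sketch of that degeneracy (the data-generating numbers are arbitrary): with $x_2$ an exact clone of $x_1$, the sum of squared errors depends on the coefficients only through $a_1 + a_2$, so every split of $a = 3$ fits equally well:

```python
import random

random.seed(0)
a, b = 3.0, 1.0
# Simulated data from y = a*x + b + noise; x2 will be an exact clone of x1.
data = [(x, a * x + b + random.gauss(0, 1)) for x in [i / 10 for i in range(50)]]

def sse(a1, a2):
    # Sum of squared errors for the model a1*x1 + a2*x2 + b with x2 = x1.
    return sum((y - (a1 * x + a2 * x + b)) ** 2 for x, y in data)

# Any pair with a1 + a2 = 3 gives (numerically) the same fit:
print(sse(3.0, 0.0), sse(1.0, 2.0), sse(-10.0, 13.0))
```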
27,835 | If $X$ is a normally distributed random variable, then what is the distribution of $X^3$? Does it follow a well-known distribution? | The general case of the cube of a normal random variable with any mean is quite complicated, but the case of a centered normal distribution (with zero mean) is quite simple. In this answer I will show the exact density for the simple case where the mean is zero, and I will show you how to obtain a simulated estimate of the density for the more general case.
Distribution for a normal random variable with zero mean: Consider a centred normal random variable $X \sim \text{N}(0,\sigma^2)$ and let $Y=X^3$. Then for all $y \geqslant 0$ we have:
$$\begin{equation} \begin{aligned}
\mathbb{P}(-y \leqslant Y \leqslant y)
&= \mathbb{P}(-y \leqslant X^3 \leqslant y) \\[6pt]
&= \mathbb{P}(-y^{1/3} \leqslant X \leqslant y^{1/3}) \\[6pt]
&= \Phi(y^{1/3} / \sigma) - \Phi(-y^{1/3} / \sigma). \\[6pt]
\end{aligned} \end{equation}$$
Since $Y$ is a symmetric random variable, for all $y > 0$ we then have:
$$\begin{equation} \begin{aligned}
f_Y(y)
&= \frac{1}{2} \cdot \frac{d}{dy} \mathbb{P}(-y \leqslant Y \leqslant y) \\[6pt]
&= \frac{1}{2} \cdot \frac{d}{dy} \Big[ \Phi(y^{1/3} / \sigma) - \Phi(-y^{1/3} / \sigma) \Big] \\[6pt]
&= \frac{1}{2} \cdot \Big[ \frac{1}{3} \cdot \frac{\phi(y^{1/3} / \sigma)}{\sigma y^{2/3}} + \frac{1}{3} \cdot \frac{\phi(-y^{1/3} / \sigma)}{\sigma y^{2/3}} \Big] \\[6pt]
&= \frac{1}{3} \cdot \frac{\phi(y^{1/3} / \sigma)}{\sigma y^{2/3}} \\[6pt]
&= \frac{1}{\sqrt{2 \pi \sigma^2}} \cdot \frac{1}{3 y^{2/3}} \cdot \exp \Big( -\frac{1}{2 \sigma^2} \cdot y^{2/3} \Big). \\[6pt]
\end{aligned} \end{equation}$$
Since $Y$ is a symmetric random variable, we then have the full density:
$$f_Y(y) = \frac{1}{\sqrt{2 \pi \sigma^2}} \cdot \frac{1}{3 |y|^{2/3}} \cdot \exp \Big( -\frac{1}{2 \sigma^2} \cdot |y|^{2/3} \Big)
\quad \quad \quad \quad \quad
\text{for all } y \in \mathbb{R}.$$
This is a slight generalisation of the density shown in Berg (1988)$^\dagger$ (p. 911), which applies for an underlying standard normal distribution. (Interestingly, this paper shows that this distribution is "indeterminate", in the sense that it is not fully defined by its moments; i.e., there are other distributions with the exact same moments.)
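As a quick check of the zero-mean result, it is easier to verify the CDF than the density: for $Y = X^3$ with $X \sim \text{N}(0, \sigma^2)$ we should have $\mathbb{P}(Y \leqslant y) = \Phi(y^{1/3}/\sigma)$ for $y > 0$. A Monte Carlo sketch (in Python, standard library only; the sample size and test point are arbitrary):

```python
import random
from statistics import NormalDist

random.seed(1)
sigma, n, y = 1.0, 200_000, 1.0

# Cube draws from N(0, sigma^2) and compare the empirical CDF at y with
# Phi(y^(1/3) / sigma).
samples = (random.gauss(0, sigma) ** 3 for _ in range(n))
empirical = sum(s <= y for s in samples) / n
analytic = NormalDist().cdf(y ** (1 / 3) / sigma)  # Phi(1) ~ 0.8413

print(round(empirical, 3), round(analytic, 3))
```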
Distribution for an arbitrary normal random variable: Generalisation to the case where $X \sim \text{N}(\mu, \sigma^2)$ for arbitrary $\mu \in \mathbb{R}$ is quite complicated, due to the fact that non-zero mean values lead to a polynomial expression when expanded as a cube. In this latter case, the distribution can be obtained via simulation. Here is some R code to obtain a kernel density estimator (KDE) for the distribution.
#Create function to simulate density
SIMULATE_DENSITY <- function(n, mu = 0, sigma = 1) {
X <- rnorm(n, mean = mu, sd = sigma);
density(X^3); }
#General simulation
mu <- 3;
sigma <- 1;
DENSITY <- SIMULATE_DENSITY(10^7, mu, sigma);
plot(DENSITY, main = 'Density of cube of normal random variable',
xlab = 'Value', ylab = 'Density');
This plot shows the simulated density of the cube of an underlying random variable $X \sim \text{N}(3, 1)$. The large number of values in the simulation gives a smooth density plot, and you can also make reference to the density object DENSITY that has been generated by the code.
$^\dagger$ This paper has a terrible name, which should never have made it through the journal reviewers. Its title is "The Cube of a Normal Distribution is Indeterminate", but the paper relates to the cube of a standard normal random variable, not the cube of its "distribution".
27,836 | How can we explain the fact that "Bagging reduces the variance while retaining the bias" mathematically? | Quite surprising that the experts couldn't help you out, the chapter on random forests in "The Elements of Statistical Learning" explains it very well.
Basically, given $n$ i.i.d. random variables, each with variance $\sigma^2$, the variance of the mean of these variables will be $\sigma^2/n$.
Since the random forest is built on bootstrap samples of the data, the outputs of the individual trees can be viewed as identically distributed (but correlated) random variables.
Thus, by averaging the outputs of $B$ trees, the variance of the final prediction is given by $\rho\sigma^2 + (1 - \rho)\sigma^2 / B$, where $\rho$ is the pairwise correlation between trees.
For large $B$ the right term vanishes and the variance is reduced to $\rho\sigma^2$.
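The formula can be checked exactly (a Python sketch with arbitrary illustrative values of sigma², rho and B), since the variance of an average is just the sum of all covariance-matrix entries divided by B²:

```python
sigma2, rho, B = 2.0, 0.3, 25   # illustrative values only

# Covariance matrix of the B tree outputs: B diagonal entries of sigma2 and
# B*(B-1) off-diagonal entries of rho*sigma2.  Var(average) = (sum) / B^2.
var_avg = (B * sigma2 + B * (B - 1) * rho * sigma2) / B**2
formula = rho * sigma2 + (1 - rho) * sigma2 / B

print(var_avg, formula)  # identical
```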
This works not only for decision trees but for every model that's baggable.
The reason why it works particularly well for decision trees is that they inherently have a low bias (no assumptions are made, such as e.g. a linear relation between features and response) but a very high variance.
Since only the variance can be reduced, decision trees are built to node purity in the context of random forests and tree bagging. (Building to node purity maximizes the variance of the individual trees, i.e. they fit the data perfectly, while minimizing the bias.)
27,837 | How to make the randomforest trees vote decimals but not binary | This is a subtle point that varies from software to software. There are two main methods that I'm aware of:
Binary leafs - Each leaf votes as the majority. This is how randomForest works in R, even when using predict(..., type="prob")
Proportion leafs - Each leaf returns the proportion of the training samples belonging to each class.
This is how sklearn.ensemble.RandomForestClassifier.predict_proba works. In another answer, @usεr11852 points out that R's ranger package also provides this functionality. Happily, I can attest that from my limited usage, ranger is also much, much faster than randomForest.
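A toy Python illustration of how the two rules can differ (the three leaf proportions are made up): with leaves at 0.6, 0.55 and 0.1, majority voting says class 1 while averaging the proportions says class 0:

```python
# Hypothetical P(class 1) in the leaf that each of three trees lands in.
leaf_proportions = [0.6, 0.55, 0.1]

# Binary leaves: each tree casts a 0/1 majority vote; the forest averages votes.
votes = [1 if p > 0.5 else 0 for p in leaf_proportions]
binary_score = sum(votes) / len(votes)                            # 2/3

# Proportion leaves: the forest averages the leaf proportions directly.
proportion_score = sum(leaf_proportions) / len(leaf_proportions)  # ~0.417

print(round(binary_score, 3), round(proportion_score, 3))
```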
I don't think that there's an easy way to get randomForest to use the proportional leaf method, since the R software is actually just a hook into a C & FORTRAN program. Unless you enjoy modifying someone else's code, you'll either have to write your own, or find another software implementation.
27,838 | How to make the randomforest trees vote decimals but not binary | It is perfectly possible to grow a "probability forest". The methodology in Malley et al. (2012), "Probability machines: consistent probability estimation using nonparametric learning machines", outlines how this is done and how it compares to the standard random forest implementation. In addition, the excellent R package ranger implements this functionality already; just set probability = TRUE when making the function call to ranger.
27,839 | How to make the randomforest trees vote decimals but not binary | Simply use predict.randomForest(..., type="prob"). You are doing a Good Thing.
27,840 | AUPRC vs. AUC-ROC? [duplicate] | ROC AUC is the area under the curve where x is false positive rate (FPR) and y is true positive rate (TPR).
PR AUC is the area under the curve where x is recall and y is precision.
recall = TPR = sensitivity. However, precision = PPV $\neq$ FPR.
FPR = P(T+|D-)
TPR = P(T+|D+)
PPV = P(D+|T+)
So these are very different curves.
Are they talking about the same things?
Not really. Both are technically evaluating "discrimination" as opposed to "calibration".
If not, do they share similar values for all possible datasets?
No
If still not, an example of dataset where ROC AUC and AUPRC strongly
disagrees would be great.
An example would be most imbalanced datasets. PPV depends on the prevalence, so it would disagree with the TPR/FPR of the ROC curve in instances, for example, with low prevalence.
This might help (I think the numbers add up, but I'm not certain; it should show the difference between PPV and FPR):
Consider FPR = 1-specificity = $1 - \dfrac{TN}{TN+FP}$
The false positive rate might be low. In other words, relative to the TN, there are few FP. Consider a dataset with 1000 TN and 50 TP. Even if the algorithm misclassifies 50 FP, the FPR is just $1 - 1000/(1000+50) \approx 0.048$. So the area under the ROC will be high, assuming good sensitivity.
However, now consider
PPV = $\dfrac{TP}{TP+FP}$
Assume we get every positive case correct, but also have the FP from above. In the case above, we have PPV = 50/ (50+50) = 0.5!
Hence the area under the precision-recall curve would be very low. So in a sense PPV is affected by the absolute number of FP, whereas FPR is affected just by the number of FP relative to the number of TN.
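The worked example above in code (Python; the counts 50 TP, 50 FP, 1000 TN, 0 FN are the ones used in this answer):

```python
TP, FP, TN, FN = 50, 50, 1000, 0

fpr = FP / (FP + TN)   # ~0.048: barely dents the ROC curve
tpr = TP / (TP + FN)   # 1.0
ppv = TP / (TP + FP)   # 0.5: the precision-recall curve exposes the false positives

print(round(fpr, 3), tpr, ppv)
```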
27,841 | "Pragmatic" trials: what are they? | The critical distinction between explanatory and pragmatic trials is not regarding whether a trial produces useful information. Rather, it is what use that information is for specifically: pragmatic trials are those aiming squarely at therapeutic usefulness in the clinic.
The Pragmatic-Explanatory continuum was first proposed by Schwartz and Lellouch in a 1967 paper titled "Explanatory and Pragmatic Attitudes in Therapeutic Trials" in the Journal of Clinical Epidemiology, which was cited by the PRECIS-2 developers. In this article the authors present two randomized controlled trial scenarios, in an anti-cancer context, testing a drug preparatory to radiotherapy vs. radiotherapy alone. The drug is administered 30 days prior to radiotherapy in order to sensitize the patient to the effects of radiation.
The drug for 30 days followed by radiotherapy is tested against a 30-day wait plus radiation
The drug for 30 days followed by radiotherapy is tested against radiation alone beginning immediately
The first scenario, which they describe as explanatory, provides "information on the effects of the key component," while the second scenario, described as pragmatic, "compares two complex treatments as a whole under practical conditions".
Schwartz and Lellouch give another example distinguishing explanatory and pragmatic trials: a randomized trial where two analgesics of very similar molecular structure are compared on an "equimolecular" basis is explanatory because it is interested in the relative effect of these drugs based on the same dose; by contrast, two analgesics with radically different structures having different "optimal levels of administration" are best studied using a practical design, intended to compare the optimal effectiveness of each treatment.
The authors summarize:
The “comparison between two treatments” is a problem which is inadequately specified even in its over-all characteristics. It may imply one of at least two types of problem which are basically different.
The first type corresponds to an explanatory approach, aimed at understanding. It seeks to discover whether a difference exists between two treatments which are specified by strict and usually simple definitions. Their effects are assessed by biologically meaningful criteria, and they are applied to a class of patients which is rather arbitrarily defined, but which is as likely as possible to reveal any difference that may exist. Statistical procedures used in determining the number of subjects and in assessing the results are aimed at reducing the probabilities of errors of the first and second kind.
The second type corresponds to a pragmatic approach, aimed at decision. It seeks to answer the question-which of the two treatments should we prefer? The definition of the treatments is flexible and usually complex; it takes account of auxiliary treatments and of the possibility of withdrawals. The criteria by which the effects are assessed take into account the interests of the patients and the costs in the widest sense. The class of patients is predetermined as that to which the results of the trial are to be extrapolated. The statistical procedures are aimed at reducing the probability of errors of the third kind (that of preferring the inferior treatment); the probability of errors of the first kind is 1.0.
Schwartz, D. and Lellouch, J. (1967). Explanatory and pragmatic attitudes in therapeutic trials. Journal of Clinical Epidemiology, 20:637–648.
27,842 | "Pragmatic" trials: what are they? | The Schwartz & Lellouch paper mentioned by Alexis, originally published (1967) in J Chron Dis, was reprinted in 2009 in a J Clin Epi issue that took up this theme in a number of papers [1–8].
Of these papers, I found Karanicolas et al [5] particularly helpful for introducing a new distinction that illuminates (and helps to restore) the original sense of Schwartz & Lellouch. (See also the ensuing exchange [6–8] with Oxman et al.) In brief, [5] argues that Schwartz & Lellouch's original focus on contrasting the purposes of trials has been lost in subsequent usage. To restore that focus, [5] articulates a more refined mechanistic-practical contrast, advancing 'practical' trials as those useful for individual-level decision making (doctor-patient) as against 'pragmatic' trials that may appeal to policy-makers wishing to influence the clinical encounter from behind their desks at insurance companies or government agencies.
The intrinsically political aspects of this matter have, no doubt, contributed to muddying the concepts. There is an ongoing tension within medicine, between efforts to centrally plan and control the doctor-patient encounter and efforts to preserve (and increasingly, to restore) the traditional character and independence of the doctor-patient relationship. Probably the phenomenon of pragmatic trials cannot be fully understood without appreciating arguments against industrialized medicine such as Victor Montori (a coauthor of [5]) now prominently advances in his book, Why We Revolt: A patient revolution for careful and kind care.
Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Clin Epidemiol. 2009;62(5):499-505. doi:10.1016/j.jclinepi.2009.01.012.
Zwarenstein M, Treweek S. What kind of randomized trials do we need? J Clin Epidemiol. 2009;62(5):461-463. doi:10.1016/j.jclinepi.2009.01.011.
Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic–explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol. 2009;62(5):464-475. doi:10.1016/j.jclinepi.2008.12.011.
Maclure M. Explaining pragmatic trials to pragmatic policymakers. J Clin Epidemiol. 2009;62(5):476-478. doi:10.1016/j.jclinepi.2008.06.021.
Karanicolas PJ, Montori VM, Devereaux PJ, Schünemann H, Guyatt GH. A new "Mechanistic-Practical" Framework for designing and interpreting randomized trials. J Clin Epidemiol. 2009;62(5):479-484. doi:10.1016/j.jclinepi.2008.02.009.
Oxman AD, Lombard C, Treweek S, Gagnier JJ, Maclure M, Zwarenstein M. Why we will remain pragmatists: four problems with the impractical mechanistic framework and a better solution. J Clin Epidemiol. 2009;62(5):485-488. doi:10.1016/j.jclinepi.2008.08.015.
Karanicolas PJ, Montori VM, Devereaux PJ, Schünemann H, Guyatt GH. The practicalists’ response. J Clin Epidemiol. 2009;62(5):489-494. doi:10.1016/j.jclinepi.2008.08.013.
Oxman AD, Lombard C, Treweek S, Gagnier JJ, Maclure M, Zwarenstein M. A pragmatic resolution. J Clin Epidemiol. 2009;62(5):495-498. doi:10.1016/j.jclinepi.2008.08.014.
27,843 | "Pragmatic" trials: what are they? | An efficacy trial is more likely to determine the relative benefit of A vs B, but only in a setting so artificially constructed that its applicability to the real world is questionable. For example, patients in an efficacy trial may have repeated clinic visits and adverse event capture tools not present in the real world. However, because of the visits we can be assured that A and B were given throughout the trial and outcomes are more likely to be accurately measured. An efficacy trial attempts to determine the true, cosmic difference between A and B.
A pragmatic or effectiveness trial obtains external validity by treating a broad group of patients with realistic regimens, but may suffer in its ability to isolate the A/B difference due to the inherent heterogeneity of the real world. For example, an effectiveness trial may compare A to B in patients using usual clinical follow-up as recorded in unstructured visits or using administrative data to gather outcome status. Because patients had the follow-up they would in the real world, we may believe the treatment regimen is more repeatable outside the research setting. However, important events may be missed due to the lack of structure. An effectiveness trial asks whether providers should write a prescription for A or B here on Earth.
27,844 | Where to find pre-trained models for transfer learning [closed] | Keras itself provides some of the successful image processing neural networks pretrained on ImageNet: https://keras.io/applications/
Other deep learning libraries also offer some pretrained models, notably:
TensorFlow: https://github.com/tensorflow/models
caffe: https://github.com/BVLC/caffe/wiki/Model-Zoo
caffe2: https://github.com/caffe2/caffe2/wiki/Model-Zoo
pytorch: https://github.com/Cadene/pretrained-models.pytorch
Lasagne: https://github.com/Lasagne/Recipes
Many pretrained models for various platforms can also be found at https://www.gradientzoo.com.
Moreover, if you are interested in some particular network architecture, authors sometimes provide pretrained models themselves, e.g. ResNeXt.
27,845 | Where to find pre-trained models for transfer learning [closed] | Since the question title is generic (and not specific to computer vision), I will give the NLP-related answer as well, in case it helps someone who stumbles upon looking for pretrained vector embeddings:
The two most popular pre-trained vector embeddings can be found on these links:
GloVe - https://nlp.stanford.edu/projects/glove/
Tensorflow Embeddings - https://code.google.com/archive/p/word2vec/
There are also a couple of less popular and/or more recent ones:
LexVec - https://github.com/alexandres/lexvec
FastText - https://github.com/icoxfog417/fastTextJapaneseTutorial
Meta-Embeddings - http://cistern.cis.lmu.de/meta-emb/
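GloVe downloads are plain-text files with one token per line followed by the components of its vector, so they can be loaded without any special library. A minimal sketch (the tiny in-memory sample stands in for a real download such as glove.6B.50d.txt, a hypothetical choice here):

```python
import io

def load_vectors(lines):
    """Parse 'token v1 v2 ...' lines into {token: [float, ...]}."""
    vecs = {}
    for line in lines:
        parts = line.rstrip().split(" ")
        if len(parts) < 2:
            continue  # skip blank or malformed lines
        vecs[parts[0]] = [float(v) for v in parts[1:]]
    return vecs

# Tiny in-memory stand-in for a real embedding file:
sample = io.StringIO("the 0.1 0.2 0.3\ncat 0.4 0.5 0.6\n")
vectors = load_vectors(sample)
print(vectors["cat"])   # [0.4, 0.5, 0.6]
```

Note that some other formats (e.g. fastText .vec files) prepend a header line with the vocabulary size and dimension, which would need to be skipped first.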
27,846 | Ordinary kriging example step by step? | Apart from this answer, there are also some nice additional answers to a similar question on gis.stackexchange.com
First I'll describe ordinary kriging with three points mathematically. Assume we have an intrinsically stationary random field.
Ordinary Kriging
We're trying to predict the value $Z(x_0)$ using the known values $Z=(Z(x_1),Z(x_2),Z(x_3))$. The prediction we want is of the form
$$\hat Z(x_0) = \lambda^T Z$$
where $\lambda = (\lambda_1,\lambda_2,\lambda_3)$ are the interpolation weights. We assume a constant mean value $\mu$. In order to obtain an unbiased result, we fix $\lambda_1 + \lambda_2 + \lambda_3 = 1$. We then obtain the following problem:
$$\min_{\lambda} \; E(Z(x_0) - \lambda^T Z)^2 \quad \text{s.t.}\;\; \lambda^T \mathbf{1} = 1.$$
Using the Lagrange multiplier method, we obtain the equations:
$$\sum^3_{j=1} \lambda_j \gamma(x_i - x_j) + m = \gamma(x_i - x_0),\;\; i=1,2,3,$$
$$\sum^3_{j=1} \lambda_j =1 ,$$
where $m$ is the Lagrange multiplier and $\gamma$ is the (semi)variogram. From this, we can observe a couple of things:
The weights do not depend on the mean value $\mu$.
The weights do not depend on the values of $Z$ at all, only on the coordinates (in the isotropic case, only on the distances).
Each weight depends on the locations of all the other points.
The precise behaviour of the weights is difficult to see just from the equation, but one can very roughly say:
The further the point is from $x_0$, the lower its weight is ("further" with respect to other points).
However, being close to other points also lowers the weight.
The result is very dependent on the shape, range, and, in particular, the nugget effect of the variogram. It would be quite illuminating to consider kriging on $\mathbb R$ with only two points and see how the result changes with different variogram settings.
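That two-point case on $\mathbb R$ can be worked out with a small self-contained sketch (plain Python rather than geoR; the exponential semivariogram and its sill and range values are arbitrary illustrative choices) that assembles and solves the ordinary kriging system written above:

```python
import math

def gamma_exp(h, sill=1.0, rng=0.5, nugget=0.0):
    # Exponential semivariogram; parameters are illustrative.
    return 0.0 if h == 0 else nugget + sill * (1.0 - math.exp(-abs(h) / rng))

def solve(A, b):
    # Plain Gaussian elimination with partial pivoting (stdlib only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ok_weights(xs, x0, gamma=gamma_exp):
    # Ordinary kriging system: n weight equations plus the sum-to-one constraint.
    n = len(xs)
    A = [[gamma(xi - xj) for xj in xs] + [1.0] for xi in xs]
    A.append([1.0] * n + [0.0])
    b = [gamma(xi - x0) for xi in xs] + [1.0]
    sol = solve(A, b)
    return sol[:n], sol[n]          # weights, Lagrange multiplier m

w, m = ok_weights([0.0, 1.0], 0.5)  # two points symmetric about x0
print(w)                            # by symmetry both weights are 0.5
```

Moving $x_0$ off-center, or adding a nugget, changes the weights in exactly the ways described above.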
I will however focus on the location of points in a plane. I wrote this little R function that takes in points from $[0,1]^2$ and plots the kriging weights (for exponential covariance function with zero nugget).
library(geoR)
# Plots prediction weights for kriging in the window [0,1]x[0,1] with the prediction point (0.5,0.5)
drawWeights <- function(x,y){
df <- data.frame(x=x,y=y, values = rep(1,length(x)))
data <- as.geodata(df, coords.col = 1:2, data.col = 3)
kc <- krige.control(type.krige="ok", cov.model="exponential", cov.pars=c(1,0.5), nugget=0) # sill 1, range 0.5, no nugget
weights <- round(as.numeric(krweights(data$coords, c(0.5,0.5), kc)), 3)
plot(data$coords, xlim=c(0,1), ylim=c(0,1))
segments(rep(0.5,length(x)), rep(0.5,length(x)),x, y, lty=3 )
text((x+0.5)/2,(y+0.5)/2,labels=weights)
}
You can play with it using spatstat's clickppp function:
library(spatstat)
points <- clickppp()
drawWeights(points$x,points$y)
Here are a couple of examples
Points equidistant from $x_0$ and from each other
deg <- seq(0,2*pi,length.out=4)
deg <- head(deg,length(deg)-1)
x <- 0.5*as.numeric(lapply(deg, cos)) + 0.5
y <- 0.5*as.numeric(lapply(deg, sin)) + 0.5
drawWeights(x,y)
Points close to each other will share the weights
deg <- c(0,0.1,pi)
x <- 0.5*as.numeric(lapply(deg, cos)) + 0.5
y <- 0.5*as.numeric(lapply(deg, sin)) + 0.5
drawWeights(x,y)
Nearby point "stealing" the weights
deg <- seq(0,2*pi,length.out=4)
deg <- head(deg,length(deg)-1)
x <- c(0.6,0.5*as.numeric(lapply(deg, cos)) + 0.5)
y <- c(0.6,0.5*as.numeric(lapply(deg, sin)) + 0.5)
drawWeights(x,y)
It is possible to get negative weights
Hope this gives you a feel for how the weights work.
27,847 | Proof that the expected MSE is smaller in training than in test | This is an interesting problem which makes you think about what is random in your computations. Here is my take.
The least squares estimate $\hat{\beta}$ is the solution of
$$
\arg \min_{\beta\in\mathbb{R}^{p+1}} \sum_{k=1}^N (y_k-\beta^T x_k)^2.
$$
Hence, if you consider the random training data $(X_1,Y_1),\dots,(X_N,Y_N)$ as IID pairs from some unknown distribution function $F_{X,Y}$, we can imagine the random vector $\hat{\beta}$ (the least squares estimator) as some functional $\hat{\Psi}[(X_1,Y_1),\dots,(X_N,Y_N)]$, with suitable measurability conditions, which satisfies
$$
\sum_{k=1}^N (Y_k-\hat{\beta}^T X_k)^2 \leq \sum_{k=1}^N (Y_k-\beta^T X_k)^2, \qquad (*)
$$
almost surely, for every random vector $\beta$.
The symmetry of the IID assumption yields that
$$
\frac{1}{N}\sum_{k=1}^N \mathrm{E}[(Y_k-\beta^T X_k)^2] = \mathrm{E}[(Y_i-\beta^T X_i)^2],
$$
for $i=1,\dots,N$, and every random vector $\beta$.
Therefore, dividing by $N$ and taking expectations in $(*)$, we have that
$$
\mathrm{E}[(Y_i-\hat{\beta}^T X_i)^2] \leq \mathrm{E}[(Y_i-\beta^T X_i)^2], \qquad (*')
$$
for $i=1,\dots,N$, and every random vector $\beta$.
The key point here is that, since $(*')$ holds for every random vector $\beta$, it must hold for the random vector
$$
\beta = \frac{1}{||X_i||^2}\left(Y_iX_i - \tilde{Y}_jX_i + X_i\tilde{X}_j^T\hat{\beta} \right),
$$
for any choice of $j=1,\dots,M$. Using this $\beta$ in $(*')$ we have that
$$
\mathrm{E}[(Y_i-\hat{\beta}^T X_i)^2] \leq \mathrm{E}[(\tilde{Y}_j-\hat{\beta}^T \tilde{X_j})^2],
$$
for every $i=1,\dots,N$, and every $j=1,\dots,M$.
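To see why this choice of $\beta$ does the job (a step the argument leaves implicit), note that $X_i\tilde{X}_j^T\hat{\beta} = (\tilde{X}_j^T\hat{\beta})\,X_i$, so
$$
\beta^T X_i = \frac{1}{||X_i||^2}\left(Y_i - \tilde{Y}_j + \tilde{X}_j^T\hat{\beta}\right) X_i^T X_i = Y_i - \tilde{Y}_j + \hat{\beta}^T \tilde{X}_j,
$$
and therefore $Y_i - \beta^T X_i = \tilde{Y}_j - \hat{\beta}^T \tilde{X}_j$ identically; substituting this into the right-hand side of $(*')$ yields exactly the displayed inequality.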
The last inequality and the IID assumption (in the same way we used it before) imply that
$$
\frac{1}{N} \sum_{k=1}^N \mathrm{E}[(Y_k-\hat{\beta}^T X_k)^2] \leq \frac{1}{M} \sum_{k=1}^M \mathrm{E}[(\tilde{Y}_k-\hat{\beta}^T \tilde{X}_k)^2],
$$
so
$$
\mathrm{E}[R_{tr}(\hat{\beta})] \leq \mathrm{E}[R_{te}(\hat{\beta})].
$$
27,848 | Proof that the expected MSE is smaller in training than in test | I think the above answer is correct, but let me just explain some of the intuition for this problem, which helps us generalize it to a broader scope of models. First we'll assume that $N = M$, for notational convenience; this lets us treat the training set and the test set as exchangeable samples of the same size from the same distribution.
Let $L(m, y)$ be the loss of the model $m$ on a data set $y$. Let $\mathcal{M}$ be the set of all models (not necessarily linear).
Now define $m_x$ to be a model which optimizes the loss on data set $x$, that is, $L(m_x, x) \le L(m, x)$ for all models $m \in \mathcal{M}$.
Let the data set $x$ and the data set $y$ be drawn independently from a distribution $\mathcal{D}$. The LHS in the original problem represents the average value of $L(m_x, x)$ whereas the RHS represents the average value of $L(m_x, y)$. This is hard to compare; it's possible for $L(m_x, x)$ to be larger than $L(m_x, y)$ because $y$ might just be an "easier" dataset than $x$. However, we can see that the average value of $L(m_x, y)$ is the same as the average value of $L(m_y, x)$ because $x$ and $y$ are independent and identically distributed. Now, the comparison is easy. The average value of $L(m_x, x)$ is less than or equal to the average value of $L(m_y, x)$ because $L(m_x, x)$ is always at most $L(m_y, x)$ (by definition of $m_x$ as a loss minimizer on $x$), and we may conclude.
27,849 | Proof that the expected MSE is smaller in training than in test | The short answer is this:
\begin{equation}
\text{E}[R_{tr}(\hat{\beta})] \le \text{E} [R_{tr} (\text{E} \hat{\beta})] = \text{E} [R_{te} (\text{E} \hat{\beta})] \le \text{E} [R_{te}(\hat{\beta})]
\end{equation}
Now I explain this in more detail.
Proving the left inequality. $\hat{\beta}$ comes from the following:
\begin{equation}
\hat{\beta} = \text{arg} \min_{\beta'} R_{tr}(\beta')
\end{equation}
This implies that for any fixed $\beta$:
\begin{equation}
R_{tr} (\hat{\beta}) \le R_{tr} (\beta)
\end{equation}
Taking the expectation of both sides:
\begin{equation}
\text{E} [R_{tr} (\hat{\beta})] \le \text{E} [R_{tr} (\beta)]
\end{equation}
Since $\hat{\beta}$ is a random vector (which depends on the training data), we can take its expectation to get $\text{E}\hat{\beta}$, which is a fixed, non-random vector. Substituting it into the above inequality we get what we wanted to prove:
\begin{equation}
\text{E} [R_{tr} (\hat{\beta})] \le \text{E} [R_{tr} (\text{E}\hat{\beta})]
\end{equation}
Proving the equality in the middle. For any fixed $\beta$:
\begin{equation}
\text{E} [R_{tr} (\beta)] = \frac 1N \sum_{i=1}^{N} \text{E} [(y_i - \beta^{T} x_i) ^{2}] = \text{E} [(Y - \beta^{T} X)^{2}]
\end{equation}
\begin{equation}
\text{E} [R_{te} (\beta)] = \frac 1M \sum_{i=1}^{M} \text{E} [(\widetilde{y_i} - \beta^{T} \widetilde{x_i}) ^{2}] = \text{E} [(Y - \beta^{T} X)^{2}]
\end{equation}
This is because both the train and the test data come from the same distribution. So for any fixed $\beta$, $\text{E} [R_{tr} (\beta)] = \text{E} [R_{te} (\beta)]$. Since $\text{E}\hat{\beta}$ is a fixed vector, we're done with this part.
Proving the right inequality. For this we use the fact that the training data and the test data are independent. Thus $\hat{\beta}$ and the test data are also independent. For this part, just forget about the training data. Think of $\hat{\beta}$ as a random vector independent from the (test) data.
\begin{equation}
\text{E} [R_{te}(\hat{\beta})] = \text{E} (Y - \hat{\beta}^{T} X) ^{2} = \text{E} \text{E} \left( (Y - \hat{\beta}^{T} X) ^{2} | X,Y\right)
\end{equation}
\begin{aligned}
\text{E} \left( (Y - \hat{\beta}^{T} X)^{2} | X, Y \right) =& \text{E} \left( Y^2 - 2Y\hat{\beta}^{T} X + (\hat{\beta}^T X)^{2} | X, Y \right)\\
=& Y^2 - 2Y \text{E}(\hat{\beta}^{T}) X + X^{T} \text{E}(\hat{\beta} \hat{\beta}^T) X\\
=& Y^2 - 2Y \text{E}(\hat{\beta}^{T}) X + X^{T} [ \text{E}\hat{\beta} \cdot \text{E}\hat{\beta}^T + \text{Cov}(\hat{\beta}) ] X\\
=& Y^2 - 2Y \text{E}(\hat{\beta}^{T}) X + (\text{E}\hat{\beta}^T) X X^{T} (\text{E} \hat{\beta}) + X^{T} \text{Cov}(\hat{\beta}) X
\end{aligned}
Since the covariance matrix is positive semi-definite, $X^{T} \text{Cov}(\hat{\beta}) X \ge 0$
\begin{aligned}
\text{E} \left( (Y - \hat{\beta}^{T} X)^{2} | X, Y \right) \ge& Y^2 - 2Y \text{E}(\hat{\beta}^{T}) X + (\text{E}\hat{\beta}^T) X X^{T} (\text{E} \hat{\beta})\\
\text{E} \left( (Y - \hat{\beta}^{T} X)^{2} | X, Y \right) \ge& (Y - \text{E}(\hat{\beta}^{T}) X) ^{2}\\
\text{E} \text{E} \left( (Y - \hat{\beta}^{T} X)^{2} | X, Y \right) \ge& \text{E} (Y - \text{E}(\hat{\beta}^{T}) X)^{2}\\
\text{E} (Y - \hat{\beta}^{T} X) ^{2} \ge& \text{E} (Y - \text{E}(\hat{\beta}^{T}) X)^{2}\\
\text{E} R_{te}(\hat{\beta}) \ge& \text{E} R_{te}(\text{E} \hat{\beta})
\end{aligned}
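The chain of inequalities can also be checked numerically. Below is a small Monte Carlo sketch (plain Python with made-up simulation settings, not part of the original answer) that fits one-predictor least squares on a training sample and evaluates the same fit on an independent test sample of the same size; the averaged training MSE comes out below the averaged test MSE:

```python
import random

random.seed(0)

def draw(n):
    # training/test pairs from the same (hypothetical) distribution:
    # y = 1 + 2x + standard normal noise
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [1.0 + 2.0 * x + random.gauss(0, 1) for x in xs]
    return xs, ys

def fit_ols(xs, ys):
    # one-predictor least squares: y ~ a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def mse(xs, ys, a, b):
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)

reps, n = 2000, 10
tr_total = te_total = 0.0
for _ in range(reps):
    xtr, ytr = draw(n)   # training sample used for fitting
    xte, yte = draw(n)   # independent test sample, same size
    a, b = fit_ols(xtr, ytr)
    tr_total += mse(xtr, ytr, a, b)
    te_total += mse(xte, yte, a, b)

print(tr_total / reps, te_total / reps)   # training average is the smaller one
```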
27,850 | Poisson deviance (xgboost vs gbm vs regression) | It is not well-documented, but I have examined the source code for xgboost and I have determined the following for the count:poisson objective:
It uses the Poisson likelihood with a log link.
The base_margin parameter is on the linear scale, not the response scale. As the boosting rounds proceed, new trees are also added on the linear scale.
The xgboost Poisson negative log likelihood formula is correct, but it's a little different from the Poisson deviance. However, the deviance is just twice the negative log likelihood minus a term that depends only on y, so minimizing one is equivalent to minimizing the other.
Setting base_margin to log(exposure) is equivalent to including a log(exposure) offset term.
A bit more detail on these points:
LogGamma is the logarithm of the gamma function, which is a continuous extension of the factorial. Specifically, $\Gamma(n) = (n-1)!$ for integer $n$. Thus, LogGamma(y + 1) = log(y!). The LogGamma term represents the $\log(y!)$ term in the full Poisson log-likelihood. (This term is normally omitted from the log-likelihood expression, since it does not affect the optimization.)
According to Stirling's approximation, $\log(y!) \approx y\log(y) - y$. Replacing LogGamma with this approximation, and substituting py=exp(p) (i.e. replacing the linear predictor with the mean via log link) yields y * log(y / py) - (y - py). This is almost the standard Poisson deviance, except it is missing the factor of 2.
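The algebra here can be verified directly. A quick stand-alone check (plain Python with illustrative numbers; this is not xgboost's code) confirms that the exact negative log likelihood and the half-deviance expression agree once the y-only saturated-model term is subtracted:

```python
import math

# Exact Poisson negative log likelihood per observation (mean py, count y),
# including the log(y!) term via lgamma
def poisson_nll(y, py):
    return py - y * math.log(py) + math.lgamma(y + 1)

# Half the standard Poisson deviance: y*log(y/py) - (y - py)
def half_deviance(y, py):
    return y * math.log(y / py) - (y - py)

y, py = 7.0, 5.0   # illustrative values, not from any real dataset
# NLL of the model minus NLL of the saturated model (py = y)
exact_gap = poisson_nll(y, py) - poisson_nll(y, y)
print(exact_gap, half_deviance(y, py))   # identical up to rounding
```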
The formula you found for GBM is not the standard Poisson deviance, although it is the same up to an additive (y-dependent) constant. Confusingly, the py in your GBM formula is actually the linear scale prediction, not the response scale, whereas in the other formulas py is the response, the predicted mean of y.
You do not need to add log(exposure) to the objective formula. All you need to do is set base_margin=log(exposure). This ensures that the first sum term in the boosting series is log(exposure). Subsequent boosting rounds add more terms but the initial offset is never removed or changed.
It uses the Poisson likelihood with a log link.
The ba | Poisson deviance (xgboost vs gbm vs regression)
It is not well-documented, but I have examined the source code for xgboost and I have determined the following for the count:poisson objective:
It uses the Poisson likelihood with a log link.
The base_margin parameter is on the linear scale, not the response scale. As the boosting rounds proceed, new trees are also added on the linear scale.
The xgboost Poisson negative log likelihood formula is correct, but it's a little different from the Poisson deviance. However the negative log likelihood and deviance are very close and asymptotically equivalent up to a factor of 2.
Setting base_margin to log(exposure) is equivalent to including a log(exposure) offset term.
A bit more detail on these points:
LogGamma is the logarithm of the gamma function, which is a continuous extension of the factorial. Specifically, $\Gamma(n) = (n-1)!$ for integer $n$. Thus, LogGamma(y + 1) = factorial(y). The LogGamma term represents the $\log(y!)$ term in the full Poisson log-likelihood. (This term is normally omitted from the log-likelihood expression, since it does not affect the optimization.)
According to Stirling's approximation, $\log(y!) \approx y\log(y) - y$. Replacing LogGamma with this approximation, and substituting py=exp(p) (i.e. replacing the linear predictor with the mean via log link) yields y * log(y / py) - (y - py). This is almost the standard Poisson deviance, except it is missing the factor of 2.
The formula you found for GBM is not the standard Poisson deviance, although it is the same up to an additive (y-dependent) constant. Confusingly, the py in your GBM formula is actually the linear scale prediction, not the response scale, whereas in the other formulas py is the response, the predicted mean of y.
You do not need to add log(exposure) to the objective formula. All you need to do is set base_margin=log(exposure). This ensures that the first sum term in the boosting series is log(exposure). Subsequent boosting rounds add more terms but the initial offset is never removed or changed. | Poisson deviance (xgboost vs gbm vs regression)
It is not well-documented, but I have examined the source code for xgboost and I have determined the following for the count:poisson objective:
It uses the Poisson likelihood with a log link.
The ba |
27,851 | Forecast time series data with external variables | If you fit a model using external variables and want to forecast from this model, you will need (forecasted) future values of the external variables, plain and simple. There is no way around this.
There are of course different ways of forecasting your explanatory variables. You can use the last observed value (the "naive random walk" forecast) or the overall mean. You can simply set them to zero if this is a useful value for them (e.g., special events that happened in the past like an earthquake, which you don't anticipate to recur). Or you could fit and forecast a time series model to these explanatory variables themselves, e.g., using auto.arima.
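As a toy illustration of the first option (plain Python with invented numbers): fit a regression of $y$ on the explanatory variable over the history, then plug in a naive last-value forecast of the regressor to obtain the forecast of $y$:

```python
# Toy illustration with invented numbers: y depends on one external variable x.
# Fit y = a + b*x on the history, then forecast x by its last observed value
# (the "naive random walk" option) and plug that into the fitted model.
x = [3.0, 4.0, 5.0, 4.5, 6.0]
y = [7.1, 9.0, 11.2, 10.0, 13.1]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

x_future = x[-1]               # naive forecast of the regressor
y_forecast = a + b * x_future  # forecast of y for the next period
print(round(y_forecast, 2))
```

Any of the other options (mean of x, zero, or a fitted time series model for x) slots into the same place as `x_future`.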
The alternative is to fit a model to your $y$ values without explanatory variables, by removing the xreg parameter, then to forecast $y$ using this model. One advantage is that this may even capture regularities in your explanatory variables. For instance, your ice cream sales may be driven by temperature, and you don't have good forecasts for temperature a few months ahead... but temperature is seasonal, so simply fitting a model without temperature yields a seasonal model, and your seasonal forecasts may actually be pretty good even if you don't include the actual driver of sales.
I recommend this free online forecasting textbook, especially this section on multiple regression (unfortunately, there is nothing about ARIMAX there), as well as Rob Hyndman's blog post "The ARIMAX model muddle".
27,852 | Forecast time series data with external variables | As Yogi Berra said, "It's tough to make predictions, especially about the future."
Many stat software modules will generate forecasts based on the univariate stream of time series in the absence of any future information, e.g., Proc Forecast in SAS or any number of ARIMA modules available. These forecasts are projections based on the historic behavior of your data.
You tell us that your data is monthly but don't tell us how many periods you have available. Another approach is to set your three IVs back 24 months relative to the DV so that the period they are predicting is t+24. This assumes that you have a sufficient amount of data both to initialize the model and calibrate any relevant seasonality, as appropriate.
27,853 | Forecast time series data with external variables | As I see it, you have three options:
Use a published forecast for your independent variables or find a model to forecast them. For example, the Census will have forecasted population data.
Using the dataset that you have, regress each of your independent variables against time & then use these results your forecast model for the independent variables
Drop the independent variables and just model your dependent variable as a function of time and lagged values of y.
Each approach has its own strengths and weaknesses, so the best choice depends on the specific context.
27,854 | Test of association for a normally-distributed DV by directional independent variables? | In general, I think it's more fruitful scientifically and statistically to start by asking a broader and different question, which is how far can a response be predicted from a circular predictor. I say circular here rather than directional, partly because the latter includes spherical and even more fabulous spaces, which can't all be covered in a single answer; and partly because your examples, time of day and time of year, are both circular. A further major example is compass direction (relevant to winds, animal or human movements, alignments, etc.), which features in many circular problems: indeed, for some scientists it is a more obvious starting point.
Whenever you can get away with it, using sine and cosine functions of time in some kind of regression model is a simple and easy to implement modelling method. It is the first port of call for many biological and/or environmental examples. (The two kinds are often mushed together, because biotic phenomena showing seasonality are usually responding directly or indirectly to climate, or to weather.)
For concreteness, imagine time measurements over 24 hours or 12 months, so that e.g.
$\sin [2\pi (\text{hour}/24)],\ \ \cos [2\pi (\text{hour}/24)]$
$\sin [2\pi (\text{month}/12)],\ \ \cos [2\pi (\text{month}/12)]$
each describe one cycle over the entire day or year. A formal test of no relationship between a measured or counted response and some circular time would then be a standard test of whether the coefficients of sine and cosine are jointly zero in a generalized linear model with sine and cosine as predictors, an appropriate link and family being chosen according to the nature of the response.
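For concreteness, the predictor pair for hourly data can be built as follows (a minimal sketch in Python, with invented variable names); the test of no relationship is then a standard joint test that both coefficients are zero, e.g. a likelihood-ratio or F-test:

```python
import math

# Hourly data: one sine-cosine pair describing a single cycle per day.
hours = list(range(24))                                  # hypothetical hourly index
sin_h = [math.sin(2 * math.pi * h / 24) for h in hours]
cos_h = [math.cos(2 * math.pi * h / 24) for h in hours]

# The predictors wrap around: hour 24 reproduces hour 0 exactly, so there is
# no boundary between the end of one day and the start of the next.
assert abs(math.sin(2 * math.pi * 24 / 24) - sin_h[0]) < 1e-12
assert abs(math.cos(2 * math.pi * 24 / 24) - cos_h[0]) < 1e-12
```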
The question of the marginal distribution of the response (normal or other) is in this approach secondary and/or to be handled by family choice.
The merit of sines and cosines is naturally that they are periodic and wrap around automatically, so the values at the beginning and end of each day or year are necessarily one and the same. There is no problem with boundary conditions, because there is no boundary.
This approach has been called circular, periodic, trigonometric and Fourier regression. For one introductory tutorial review, see here
In practice,
1. Such tests usually show overwhelmingly significant results at conventional levels whenever we expect seasonality. The more interesting question is then the precise seasonal curve estimated, and whether we need a more complicated model with other sinusoidal terms too.
2. Nothing rules out other predictors too, in which case we simply need more comprehensive models with other predictors included, say sines and cosines for seasonality and other predictors for everything else.
3. At some point, depending jointly on the data, the problem and the tastes and experience of the researcher, it may become more natural to emphasise the time series aspect of the problem and build a model with explicit time dependence. Indeed, some statistically minded people would deny that there is any other way to approach it.
What is easily named as trend (but not always so easily identifiable) comes under either #2 or #3, or even both.
Many economists and other social scientists concerned with seasonality in markets, national and international economies, or other human phenomena are usually more impressed with the possibilities for more complicated variability within each day or (more commonly) year. Often, although not always, seasonality is a nuisance to be removed or adjusted for, in contrast to biological and environmental scientists who frequently regard seasonality as interesting and important, even the main focus of a project. That said, economists and others also often adopt a regression-type approach too, but with ammunition a bundle of indicator (dummy) variables, most simply $0, 1$ variables for each month or each quarter of a year. This can be a practical way of trying to catch the effects of named holidays, vacation periods, side-effects of school years, etc., as well as influences or shocks of climatic or weather origin. With those differences noted, most of the comments above also apply in economics and social sciences.
Attitudes of, and approaches by, epidemiologists and medical statisticians concerned with variations in morbidity, mortality, hospital admissions, clinic visits, and the like, tend to fall in between these two extremes.
In my view splitting days or years into halves to compare is usually arbitrary, artificial and at best awkward. It is also ignoring the kind of smooth structure typically present in the data.
EDIT The account so far does not address the difference between discrete and continuous time, but I don't from my experience regard it as a big deal in practice.
But precise choices depend on how the data arrive and on the pattern of change.
If data were quarterly and human, I would tend to use indicator variables (e.g. quarters 3 and 4 are often different). If monthly and human, the choice isn't clear, but you would have to work hard to sell sines and cosines to most economists. If monthly or finer and biological or environmental, definitely sines and cosines.
EDIT 2 Further details on trigonometric regression
A distinctive detail of trigonometric regression (named in any other way
if you prefer) is that almost always sine and cosine terms are best
presented to a model in pairs. We first scale time of day, time of year
or compass direction so that it is represented as an angle on the circle
$\theta$ in radians, hence on the interval $[0, 2\pi]$. Then we use as
many of the pairs $ \sin k\theta, \cos k\theta, k = 1, 2, 3, \dots$ as
are needed in a model. (In circular statistics, trigonometric
conventions tend to trump statistical conventions, so that Greek
symbols such as $\theta, \phi, \psi$ are used for variables as well as
parameters.)
If we offer a pair of predictors such as $\sin \theta, \cos \theta$ to a
regression-like model, then we have coefficient estimates, say $b_1,
b_2$, for terms in the model, namely $b_1 \sin \theta, b_2 \cos \theta$.
This is a way of fitting phase as well as amplitude of a periodic
signal. Otherwise put, a function such as $\sin (\theta + \phi)$ can be
rewritten as
$$ \sin \theta \cos \phi + \cos \theta \sin \phi,$$
but $\cos \phi$ and $\sin \phi$ representing phase are estimated in the
model fitting. That way we avoid a non-linear estimation problem.
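A quick numerical check of this (a Python sketch with a made-up amplitude and phase): fitting $b_1 \sin \theta + b_2 \cos \theta$ by least squares recovers both quantities, since $b_1 = A\cos\phi$ and $b_2 = A\sin\phi$ for a signal $A \sin(\theta + \phi)$.

```python
import math

A, phi = 2.5, 0.8                      # hypothetical amplitude and phase
thetas = [2 * math.pi * k / 100 for k in range(100)]
y = [A * math.sin(t + phi) for t in thetas]

# Least squares for y ~ b1*sin(theta) + b2*cos(theta); on an equally
# spaced full cycle the two predictors are orthogonal, so each
# coefficient is just the projection of y onto that predictor.
b1 = sum(yi * math.sin(t) for yi, t in zip(y, thetas)) / \
     sum(math.sin(t) ** 2 for t in thetas)
b2 = sum(yi * math.cos(t) for yi, t in zip(y, thetas)) / \
     sum(math.cos(t) ** 2 for t in thetas)

amplitude = math.hypot(b1, b2)         # recovers A
phase = math.atan2(b2, b1)             # recovers phi
```

So the linear fit hands back the amplitude and phase without any non-linear optimisation.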
If we use $b_1 \sin \theta + b_2 \cos \theta$ to model circular
variation, then automatically the maximum and minimum of that curve are
half a circle apart. That is often a very good approximation for
biological or environmental variations, but conversely we may well need
several more terms to capture economic seasonality in particular. That could be a very good reason to use indicator variables instead, which lead immediately to simple interpretations of the coefficients.
27,855 | Test of association for a normally-distributed DV by directional independent variables?

Here is a distribution-free option, since it seems that's what you're looking for anyway. It is not particular to the field of circular statistics, of which I am fairly ignorant, but it is applicable here and in many other settings.
Let your directional variable be $X$.
Let the other variable be $Y$, which can lie in $\mathbb R^d$ for any $d \ge 1$ (or, indeed, any type of object on which a useful kernel can be defined: graphs, strings, images, probability distributions, samples from probability distributions, ...).
Define $Z := (X, Y)$, and suppose you have $m$ observations $z_i = (x_i, y_i)$.
Now, conduct a test using the Hilbert-Schmidt Independence Criterion (HSIC), as in the following paper:
Gretton, Fukumizu, Teo, Song, Schölkopf, and Smola. A Kernel Statistical Test of Independence. NIPS 2008. (pdf)
That is:
Define a kernel $k$ for $X$. Here we mean a kernel in the sense of a kernel method, i.e. a kernel of an RKHS.
One choice is to represent $X$ on the unit circle in $\mathbb R^2$ (as in Kelvin's edit), and use the Gaussian kernel $k(x, x') = \exp\left( - \frac{1}{2 \sigma^2} \lVert x - x' \rVert^2 \right)$. Here $\sigma$ defines the smoothness of your space; setting it to the median distance between points in $X$ is often good enough.
Another option is to represent $X$ as an angle, say in $[-\pi, \pi]$, and use the von Mises kernel $k(x, x') = \exp\left( \kappa \cos(x - x') \right)$. Here $\kappa$ is a smoothness parameter.1
Define a kernel $l$ for $Y$, similarly. For $Y$ in $\mathbb R^n$ the Gaussian kernel, above, is a reasonable default.
Let $H$, $K$, and $L$ be $m \times m$ matrices such that $K_{ij} = k(x_i, x_j)$, $L_{ij} = l(y_i, y_j)$, and $H$ is the centering matrix $H = I - \frac1m 1 1^T$. Then the test statistic $\frac{1}{m^2} \mathrm{tr}\left( K H L H \right)$ has some nice properties when used as an independence test. Its null distribution can be approximated either by moment-matching to a gamma distribution (computationally efficient), or by bootstrapping (more accurate for small sample sizes).
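To make the statistic concrete, here is a small pure-Python sketch (the helper names are mine; in practice you would use the authors' code or a matrix library) of $\frac{1}{m^2} \mathrm{tr}\left( K H L H \right)$ with Gaussian kernels and the median-distance heuristic for $\sigma$:

```python
import math
import statistics

def gauss_kernel(xs, sigma):
    """K_ij = exp(-(x_i - x_j)^2 / (2 sigma^2)) for scalar inputs."""
    return [[math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2)) for b in xs]
            for a in xs]

def median_sigma(xs):
    """Median pairwise distance: a common default bandwidth."""
    return statistics.median(abs(a - b)
                             for i, a in enumerate(xs) for b in xs[i + 1:])

def hsic(K, L):
    """(1/m^2) tr(K H L H), with H = I - (1/m) 1 1^T the centering matrix."""
    m = len(K)
    H = [[(1.0 if i == j else 0.0) - 1.0 / m for j in range(m)]
         for i in range(m)]
    def mm(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
                for i in range(m)]
    M = mm(mm(K, H), mm(L, H))
    return sum(M[i][i] for i in range(m)) / m ** 2

xs = [i / 10.0 for i in range(30)]
ys_dep = [x ** 2 for x in xs]                        # strongly dependent on xs
ys_ind = [((7 * i) % 30) / 10.0 for i in range(30)]  # scrambled pairing

K = gauss_kernel(xs, median_sigma(xs))
h_dep = hsic(K, gauss_kernel(ys_dep, median_sigma(ys_dep)))
h_ind = hsic(K, gauss_kernel(ys_ind, median_sigma(ys_ind)))
# On this toy example h_dep comes out much larger than h_ind,
# which is the behaviour the test exploits.
```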
Matlab code for carrying this out with RBF kernels is available from the first author here.
This approach is nice because it is general and tends to perform well.
The main drawbacks are:
$m^2$ computational complexity to compute the test statistic; this can be reduced with kernel approximations if it's a problem.
The complicated null distribution. For large-ish $m$, the gamma approximation is good and not too onerous; for small $m$, bootstrapping is necessary.
Kernel choice. As presented above, the kernels $k$ and $l$ must be selected heuristically. This paper gives a non-optimal criterion for selecting the kernel; this paper presents a good method for a large-data version of the test that unfortunately loses statistical power. Some work is ongoing right now for a near-optimal criterion in this setting, but unfortunately it's not ready for public consumption yet.
1. This is frequently used as a smoothing kernel for circular data, but I haven't in a quick search found anyone using it as an RKHS kernel. Nonetheless, it is positive-definite by Bochner's theorem, since the shift-invariant form $k(x - x')$ is proportional to the pdf of a von Mises distribution with mean 0, whose characteristic function is proportional to a uniform distribution on its support $[-\pi, \pi]$.
27,856 | Test of association for a normally-distributed DV by directional independent variables?

You could run a t-test between the mean from opposite "halves" of the period, for example by comparing the mean value from 12am to 12pm with the mean value from 12pm to 12am. And then compare the mean value from 6pm to 6am with the mean value from 6am to 6pm.
Or if you have enough data, you could break the period into smaller (e.g., hourly) segments and perform a t-test between each pair of segments, while correcting for multiple comparisons.
Alternatively, for a more "continuous" analysis (i.e., without arbitrary segmentation), you could run linear regressions against the sine and cosine functions of your directional variable (with the correct period), which will automatically "circularize" your data:
$$x' = \sin(x \cdot 2\pi/\mathrm{period})$$
$$x'' = \cos(x \cdot 2\pi/\mathrm{period})$$
The main problem with any such approach is that it will be difficult to ensure that the phase of your model is set to pick out the maximum correlation, hence you may need to try several different phases, or else select the phase by eye to formulate your hypothetical value $a$:
$$x''' = \sin((x+a) \cdot 2\pi/\mathrm{period})$$
However, ideally you should formulate your hypothesis (e.g., afternoons are more active than mornings) and then set the appropriate $a$ before you even look at the data.
EDIT: One further thought is that you could run a multiple regression against BOTH the sine and cosine functions of the directional variable at the same time (i.e., between your normal variable $y$ plus $x'$ and $x''$) as that should take into account the true "direction", in much the same way that the sine and cosine functions together define the x and y coordinates of a complete circle. Then you wouldn't need to bother about the phase issue separately, as it would be taken care of automatically. I have never seen this done before, but I don't see why it shouldn't work.
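A quick numerical check of that thought (a Python sketch with fabricated data): whatever the phase $a$, the least-squares fit on the sine and cosine pair together reproduces the signal exactly.

```python
import math

hours = list(range(24))
omega = 2 * math.pi / 24
worsts = []
for a in (0.0, 1.3, 5.7):                     # three made-up phases
    y = [math.sin((h + a) * omega) for h in hours]
    s = [math.sin(h * omega) for h in hours]
    c = [math.cos(h * omega) for h in hours]
    # On a full cycle of equally spaced hours, s and c are orthogonal,
    # so the least-squares coefficients are plain projections.
    b1 = sum(yi * si for yi, si in zip(y, s)) / sum(si * si for si in s)
    b2 = sum(yi * ci for yi, ci in zip(y, c)) / sum(ci * ci for ci in c)
    worsts.append(max(abs(yi - (b1 * si + b2 * ci))
                      for yi, si, ci in zip(y, s, c)))
# every entry of worsts is numerically zero: the pair absorbs any phase
```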
In any case, I think you must make some assumptions regarding the period, and then test accordingly.
27,857 | Can I use bootstrapping to estimate the uncertainty in a maximum value of a GAM?

An alternative approach that can be used for GAMs fitted using Simon Wood's mgcv software for R is to do posterior inference from the fitted GAM for the feature of interest. Essentially, this involves simulating from the posterior distribution of the parameters of the fitted model, predicting values of the response over a fine grid of $x$ locations, finding the $x$ where the fitted curve takes its maximal value, repeating for lots of simulated models, and computing a confidence interval for the location of the optima as the quantiles of the distribution of optima from the simulated models.
The meat from what I present below was cribbed from page 4 of Simon Wood's course notes (pdf)
To have something akin to a biomass example, I'm going to simulate a single species' abundance along a single gradient using my coenocliner package.
library("coenocliner")
A0 <- 9 * 10 # max abundance
m <- 45 # location on gradient of modal abundance
r <- 6 * 10 # species range of occurence on gradient
alpha <- 1.5 # shape parameter
gamma <- 0.5 # shape parameter
locs <- 1:100 # gradient locations
pars <- list(m = m, r = r, alpha = alpha,
gamma = gamma, A0 = A0) # species parameters, in list form
set.seed(1)
mu <- coenocline(locs, responseModel = "beta", params = pars, expectation = FALSE)
Fit the GAM
library("mgcv")
m <- gam(mu ~ s(locs), method = "REML", family = "poisson")
... predict on a fine grid over the range of $x$ (locs)...
p <- data.frame(locs = seq(1, 100, length = 5000))
pp <- predict(m, newdata = p, type = "response")
and visualise the fitted function and the data
plot(mu ~ locs)
lines(pp ~ locs, data = p, col = "red")
This produces
The 5000 prediction locations are probably overkill here, and certainly for the plot, but depending on the fitted function in your use-case, you might need a fine grid to get close to the maximum of the fitted curve.
Now we can simulate from the posterior of the model. First we get the $Xp$ matrix: the matrix that, once multiplied by the model coefficients, yields predictions from the model at the new locations p
Xp <- predict(m, p, type="lpmatrix") ## map coefs to fitted curves
Next we collect the fitted model coefficients and their (Bayesian) covariance matrix
beta <- coef(m)
Vb <- vcov(m) ## posterior mean and cov of coefs
The coefficients are a multivariate normal with mean vector beta and covariance matrix Vb. Hence we can simulate from this multivariate normal new coefficients for models consistent with the fitted one but which explore the uncertainty in the fitted model. Here we generate 10000 (n) simulated models
n <- 10000
library("MASS") ## for mvrnorm
set.seed(10)
mrand <- mvrnorm(n, beta, Vb) ## simulate n rep coef vectors from posterior
Now we can generate predictions for each of the n simulated models, transform from the scale of the linear predictor to the response scale by applying the inverse of the link function (ilink()), and then compute the $x$ value (the value of p$locs) at the maximal point of the fitted curve
opt <- rep(NA, n)
ilink <- family(m)$linkinv
for (i in seq_len(n)) {
pred <- ilink(Xp %*% mrand[i, ])
opt[i] <- p$locs[which.max(pred)]
}
Now we compute the confidence interval for the optima using probability quantiles of the distribution of 10,000 optima, one per simulated model
ci <- quantile(opt, c(.025,.975)) ## get 95% CI
For this example we have:
> ci
2.5% 97.5%
39.06321 52.39128
We can add this information to the earlier plot:
plot(mu ~ locs)
abline(v = p$locs[which.max(pp)], lty = "dashed", col = "grey")
lines(pp ~ locs, data = p, col = "red")
lines(y = rep(0,2), x = ci, col = "blue")
points(y = 0, x = p$locs[which.max(pp)], pch = 16, col = "blue")
which produces
As we'd expect given the data/observations, the interval on the fitted optima is quite asymmetric.
Slide 5 of Simon's course notes suggests why this approach might be preferred to bootstrapping. Advantages of posterior simulation are that it is quick - bootstrapping GAMs is slow. Two additional issues with bootstrapping are (taken from Simon's notes!)
For parametric bootstrapping the smoothing bias causes problems, the model simulated from is biased and the fits to the samples will be yet more biased.
For non-parametric ‘case-resampling’ the presence of replicate copies of the same data causes undersmoothing, especially with GCV based smoothness selection.
It should be noted that the posterior simulation performed here is conditional upon the chosen smoothness parameters for the model/spline. This can be accounted for, but Simon's notes suggest this makes little difference if you actually go to the trouble of doing it. (so I haven't here...)
27,858 | Can I use bootstrapping to estimate the uncertainty in a maximum value of a GAM?

"Repeat a large number of times". I think you're saying you have 24 observations in total. It is probably easier to identify by hand the observations that appear to be outliers and remove those by hand. It might be hard to get the full characterization of the maximum with such a limited set of observations without dredging over the high leverage points you might have in your dataset.
What I am saying is that high(er) leverage points are those which might skew your results, and having only 24 results gives quite a high probability of having those picked; and that would skew the distribution of the max.
I am not sure if the concentrations you're monitoring are determined by external environmental factors or by the action of a human. In any case, perhaps, traditional sensitivity analysis would help. Sensitivity analysis, in my jargon, means changing the value of a feature by x% from the value in the optimal set of parameters. That would help assess, e.g., how much more biomass we would have if we increase something by x%.
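For illustration only, a sketch of that kind of one-at-a-time sensitivity analysis, with a made-up fitted response curve standing in for the real model:

```python
# Hypothetical fitted response: biomass peaking at x = 45 (made-up numbers)
def response(x):
    return 100.0 - (x - 45.0) ** 2 / 10.0

x_opt = 45.0
changes = {}
for pct in (1, 5, 10):                   # increase the feature by pct percent
    x_new = x_opt * (1 + pct / 100.0)
    changes[pct] = response(x_new) - response(x_opt)
# e.g. changes[10] is about -2.025: a 10% increase costs ~2 units of biomass
```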
"Repeat a large number of times". I think you're saying you have 24 observations in total. It is probably easier to identify by hand the observations that appear being an outlier and removing those by hand. It might be hard to get the full characterization of the maximum with such a limited set of observations without dredging over the high leverage points you might have in your dataset.
What I am saying is that high(er) leverage points are those which might skew your results and having only 24 results give quite a high prob of having those picked; and that would skew the distribution of the max.
I am not sure if the concentrations you're monitoring are determined by external environmental factors or by the action of a human. In any case, perhaps, traditional sensitivity analysis would help. Sensitivity analysis, in my jargon, means changing the value of a feature by x% from the value in the optimal set of parameters. That would help assess, e.g, how much more biomass we would have if we increase something by x% | Can I use bootstrapping to estimate the uncertainty in a maximum value of a GAM?
"Repeat a large number of times". I think you're saying you have 24 observations in total. It is probably easier to identify by hand the observations that appear being an outlier and removing those by |
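That one-at-a-time scheme can be sketched in a few lines of Python (the response function, parameter names, and 5% step below are hypothetical, purely to make the idea concrete):

```python
def sensitivity(f, params, step=0.05):
    # Bump each parameter by +step (here 5%) from the optimum, one at a
    # time, and record the resulting change in the response.
    base = f(params)
    deltas = {}
    for name, value in params.items():
        bumped = dict(params)
        bumped[name] = value * (1 + step)
        deltas[name] = f(bumped) - base
    return deltas

# Hypothetical biomass response and optimum, used only to exercise the helper.
def biomass(p):
    return 10 * p["nitrogen"] - 2 * p["nitrogen"] ** 2 + 3 * p["light"]

print(sensitivity(biomass, {"nitrogen": 2.0, "light": 1.0}))
```

The output ranks which inputs the optimum is most sensitive to; with a fitted GAM you would substitute model predictions for the toy response.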
27,859 | Overfitting with Linear Classifiers | A linear regression / classifier can absolutely be overfit if used without proper care.
Here's a small example. Let's create two vectors, the first is simply $5000$ random coin flips:
set.seed(154)
N <- 5000
y <- rbinom(N, 1, .5)
The second vector is $5000$ observations, each randomly assigned to one of $500$ random classes:
N.classes <- 500
rand.class <- factor(sample(1:N.classes, N, replace=TRUE))
There should be no relation between our flips y and our random classes rand.class; they were determined completely independently.
Yet, if we attempt to predict the random flip with the random class using logistic regression (a linear classifier), it sure thinks there is a relationship:
M <- glm(y ~ rand.class, family="binomial")
hist(coef(M), breaks=50)
The true value of every one of these coefficients is zero. But as you can see, we have quite a spread. This linear classifier is for sure overfit.
Note: The extremes in this histogram, where the coefficients have wandered to $-15$ and $15$, are cases where a class had either no observations with y == 1 or no values with y == 0. The actual estimated values for these coefficients are plus and minus infinity, but the logistic regression algorithm is hard coded with a bound of $15$.
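The mechanism behind those extremes can be seen without any fitting machinery. For a model with one indicator per class (and no intercept), the maximum-likelihood coefficient for each class is just that class's empirical log-odds. A Python sketch of the same setup (not the original R code) counts the separated classes directly:

```python
import math
import random

random.seed(154)
N, n_classes = 5000, 500
y = [random.random() < 0.5 for _ in range(N)]       # random coin flips
cls = [random.randrange(n_classes) for _ in range(N)]  # random class labels

# Each class's fitted coefficient is its empirical log-odds log(k/(n-k));
# any class whose flips are all 0 or all 1 is (quasi-)separated and its
# maximum-likelihood estimate diverges to plus or minus infinity.
finite_coefs, separated = [], 0
for c in range(n_classes):
    flips = [yi for yi, ci in zip(y, cls) if ci == c]
    k, n = sum(flips), len(flips)
    if n == 0 or k == 0 or k == n:
        separated += 1
    else:
        finite_coefs.append(math.log(k / (n - k)))

print(separated, round(max(abs(b) for b in finite_coefs), 2))
```

Even the non-separated classes show a wide spread of estimates around the true value of zero, matching the histogram's fat tails.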
"overfitting" does not seem to be formally defined. Why is that?
Overfitting may be best understood within the context of a class of models which has some complexity parameter. In this case, a model could be said to be overfit when decreasing the complexity slightly results in better expected out of sample performance.
It would be very difficult to precisely define the concept in a model independent way. A single model is just fit, you need something to compare it to for it to be over or under fit. In my example above this comparison was with the truth, but you usually don't know the truth, hence the model!
Wouldn't some distance measure between training and test set performance allow for such a formalisation?
There is such a concept, it's called the optimism. It's defined by:
$$ \omega = E_{\text{test}} - E_{\text{train}} $$
where $E$ stands for error, and each term is averaged over all possible training and testing sets for your learning algorithm.
It doesn't quite get at the essence of overfitting though, because the performance on a test set can be quite a bit worse than the train, even though a model of higher complexity decreases both.
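The optimism in the random-class example can be estimated directly by simulation (a Python sketch of the same setup, not the author's R): "fitting" amounts to memorizing each class's majority flip, and the gap between test and training error estimates the optimism.

```python
import random

random.seed(0)
n_classes = 500

def sample(n):
    cls = [random.randrange(n_classes) for _ in range(n)]
    y = [random.random() < 0.5 for _ in range(n)]
    return cls, y

cls_tr, y_tr = sample(5000)

# "Fit" by memorizing each training class's majority outcome -- the hard
# classifications a saturated model would make on this design.
flips_by_class = {}
for c, yi in zip(cls_tr, y_tr):
    flips_by_class.setdefault(c, []).append(yi)
majority = {c: sum(v) * 2 >= len(v) for c, v in flips_by_class.items()}

def error(cls, y):
    # Unseen classes get an arbitrary default prediction of True.
    return sum(majority.get(c, True) != yi for c, yi in zip(cls, y)) / len(y)

cls_te, y_te = sample(5000)
E_train, E_test = error(cls_tr, y_tr), error(cls_te, y_te)
print(round(E_train, 3), round(E_test, 3), round(E_test - E_train, 3))
```

Training error lands well below 50% while test error sits at chance, so the estimated optimism is substantial: the memorized model is overfit.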
27,860 | Overfitting with Linear Classifiers | I think that overfitting refers to model complexity rather than generalization ability. I understand the quote "a linear classifier cannot be overfitted" in the sense that its complexity is small, and there is no other, simpler classifier providing better performance.
The example is linked to the generalization ability of linear classifiers (and of complex ones). Even in this second part, linear classifiers usually have less variance than complex ones, so the "overfitting" value for linear classifiers, under this notion, is also smaller (although their empirical risk could be quite large).
atb
27,861 | Overfitting with Linear Classifiers | In the 70-ties, experiments with pattern recognition algorithms on large datasets revealed that adding extra features did in some cases increase the test-set error rates. This is counter intuitive because one would expect that adding an extra feature always increases classifier performance, or in case that the added feature is 'white noise', its addition does not influence classifier performance at all. The effect of adding still more extra features to a classifier, eventually leading to a decrease in test-set performance became known as the peaking phenomenon [1].
Feature peaking is caused by over-generalization during learning. The extra features cause the inclusion of so many additional parameters that the classifier begins to overfit the data. Hence, the peaking point is passed.
In general, we face a bias-variance trade-off when training classifiers. The more feature-variables we use, the better will the (unknown) underlying classifier mechanism possibly be modelled by our classifier. Hence, the systematic deviation between fitted model and 'truth' will lessen, i.e. a smaller bias results. On the other hand, increasing the feature space of the classifier necessarily implies the addition of parameters (those that fit the added features). Thus, the variance of the fitted classifier increases too.
So the classifier exceeding the peaking point is just one stochastic realization of a high-dimensional classification problem, and a new fit will result in a highly different parameter vector. This fact reflects the increased variance.
[1. G. V. Trunk, "A Problem of Dimensionality: A Simple Example," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-1, no. 3, pp. 306-307, July 1979]
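The peaking phenomenon can be reproduced with a toy experiment (a Python sketch; the nearest-mean classifier, sample sizes, and signal strength are illustrative choices, not from Trunk's paper): with a small training set, every added noise feature contributes estimation variance, and test error eventually climbs.

```python
import random
import statistics

random.seed(3)

def make_data(n, d):
    # Only feature 0 separates the classes; features 1..d-1 are pure noise.
    X, y = [], []
    for i in range(n):
        label = i % 2                      # balanced classes by construction
        x = [random.gauss(2.0 * label, 1.0)]
        x += [random.gauss(0.0, 1.0) for _ in range(d - 1)]
        X.append(x)
        y.append(label)
    return X, y

def class_means(X, y):
    return [[statistics.mean(x[j] for x, lab in zip(X, y) if lab == c)
             for j in range(len(X[0]))] for c in (0, 1)]

def holdout_error(d, n_train=20, n_test=2000):
    # Fit class means on a tiny training set, classify a big test set by
    # nearest estimated mean, and report the test error rate.
    means = class_means(*make_data(n_train, d))
    Xte, yte = make_data(n_test, d)
    wrong = 0
    for x, lab in zip(Xte, yte):
        d0 = sum((a - m) ** 2 for a, m in zip(x, means[0]))
        d1 = sum((a - m) ** 2 for a, m in zip(x, means[1]))
        wrong += (0 if d0 <= d1 else 1) != lab
    return wrong / n_test

for d in (1, 10, 200):
    print(d, holdout_error(d))   # error tends to rise as noise features pile up
```

With one informative feature the error sits near the Bayes rate; piling on a couple hundred noise features drives it toward chance, even though each feature "could only help" in principle.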
27,862 | Overfitting with Linear Classifiers | Like @match-maker-ee said, linear classifiers can over-fit depending on the input features.
The following model f is linear in its parameters a, b and c, but can be fitted to a quadratic curve in the feature space of x:
$$ f(x) = ax^2+bx+c$$
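Because $f$ is linear in $(a, b, c)$, fitting it reduces to ordinary least squares on the feature vector $(x^2, x, 1)$. A Python sketch (the true curve and noise level below are made up for illustration):

```python
import random

random.seed(42)

def features(x):
    # The feature map that makes f(x) = a*x^2 + b*x + c linear in (a, b, c).
    return [x * x, x, 1.0]

def lstsq_3(X, y):
    # Solve the 3x3 normal equations (X^T X) w = X^T y by Gaussian
    # elimination with partial pivoting.
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    rhs = [sum(r[i] * t for r, t in zip(X, y)) for i in range(3)]
    for i in range(3):
        p = max(range(i, 3), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            A[k] = [ak - f * ai for ak, ai in zip(A[k], A[i])]
            rhs[k] -= f * rhs[i]
    w = [0.0] * 3
    for i in (2, 1, 0):
        w[i] = (rhs[i] - sum(A[i][j] * w[j] for j in range(i + 1, 3))) / A[i][i]
    return w

# Data from a true quadratic 2x^2 - 3x + 1 plus noise: a curved function of x,
# recovered by a model that is *linear* in its parameters.
xs = [i / 10 for i in range(-20, 21)]
ys = [2 * x * x - 3 * x + 1 + random.gauss(0, 0.1) for x in xs]
a, b, c = lstsq_3([features(x) for x in xs], ys)
print(round(a, 1), round(b, 1), round(c, 1))  # close to 2.0 -3.0 1.0
```

The same trick with many more basis features is exactly how a "linear" model acquires enough flexibility to overfit.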
SVMs can also over-fit, for example when they use the kernel trick, despite being basically a linear model in an augmented feature space.
27,863 | How to Prove that an Event Occurs Infinitely Often (Almost Surely)? | The sample space consists of seven possible outcomes: "1" through "5" on the die, "6" and "tails", and "6" and "heads." Let's abbreviate these as $\Omega=\{1,2,3,4,5,6T,6H\}$.
The events will be generated by the atoms $\{1\}, \{2\}, \ldots, \{6H\}$ and therefore all subsets of $\Omega$ are measurable.
The probability measure $\mathbb{P}$ is determined by its values on these atoms. The information in the question, together with the (reasonable) assumption that the coin toss is independent of the die throw, tells us those probabilities are as given in this table:
$$\begin{array}{lc}
\text{Outcome} & \text{Probability} \\
1 & \frac{1}{6} \\
2 & \frac{1}{6} \\
3 & \frac{1}{6} \\
4 & \frac{1}{6} \\
5 & \frac{1}{6} \\
\text{6T} & \frac{1-p}{6} \\
\text{6H} & \frac{p}{6}
\end{array}$$
A sequence of independent realizations of $X$ is a sequence $(\omega_1, \omega_2, \ldots, \omega_n, \ldots)$ all of whose elements are in $\Omega$. Let's call the set of all such sequences $\Omega^\infty$. The basic problem here lies in dealing with infinite sequences. The motivating idea behind the following solution is to keep simplifying the probability calculation until it can be reduced to computing the probability of a finite event. This is done in stages.
First, in order to discuss probabilities at all, we need to define a measure on $\Omega^\infty$ that makes events like "$6H$ occurs infinitely often" into measurable sets. This can be done in terms of "basic" sets that don't involve an infinite specification of values. Since we know how to define probabilities $\mathbb{P}_n$ on the set of finite sequences of length $n$, $\Omega^n$, let's define the "extension" of any measurable $E \subset \Omega^n$ to consist of all infinite sequences $\omega\in\Omega^\infty$ that have some element of $E$ as their prefix:
$$E^\infty = \{(\omega_i)\in\Omega^\infty\,|\, (\omega_1,\ldots,\omega_n)\in E\}.$$
The smallest sigma-algebra on $\Omega^\infty$ that contains all such sets is the one we will work with.
The probability measure $\mathbb{P}_\infty$ on $\Omega^\infty$ is determined by the finite probabilities $\mathbb{P}_n$. That is, for all $n$ and all $E\subset \Omega^n$,
$$\mathbb{P}_\infty(E^\infty) = \mathbb{P}_n(E).$$
(The preceding statements about the sigma-algebra on $\Omega^\infty$ and the measure $\mathbb{P}_\infty$ are elegant ways to carry out what will amount to limiting arguments.)
Having managed these formalities, we can do the calculations. To get started, we need to establish that it even makes sense to discuss the "probability" of $6H$ occurring infinitely often. This event can be constructed as the intersection of events of the type "$6H$ occurs at least $n$ times", for $n=1, 2, \ldots$. Because it is a countable intersection of measurable sets, it is measurable, so its probability exists.
Second, we need to compute this probability of $6H$ occurring infinitely often. One way is to compute the probability of the complementary event: what is the chance that $6H$ occurs only finitely many times? This event $E$ will be measurable, because it's the complement of a measurable set, as we have already established. $E$ can be partitioned into events $E_n$ of the form "$6H$ occurs exactly $n$ times", for $n=0, 1, 2, \ldots$. Because there are only countably many of these, the probability of $E$ will be the (countable) sum of the probabilities of the $E_n$. What are these probabilities?
Once more we can do a partition: $E_n$ breaks into events $E_{n,N}$ of the form "$6H$ occurs exactly $n$ times at roll $N$ and never occurs again." These events are disjoint and countable in number, so all we have to do (again!) is to compute their chances and add them up. But finally we have reduced the problem to a finite calculation: $\mathbb{P}_\infty(E_{n,N})$ is no greater than the chance of any finite event of the form "$6H$ occurs for the $n^\text{th}$ time at roll $N$ and does not occur between rolls $N$ and $M \gt N$." The calculation is easy because we don't really need to know the details: each time $M$ increases by $1$, the chance--whatever it may be--is further multiplied by the chance that $6H$ is not rolled, which is $1-p/6$. We thereby obtain a geometric sequence with common ratio $r = 1-p/6 \lt 1$. Regardless of the starting value, it grows arbitrarily small as $M$ gets large.
(Notice that we did not need to take a limit of probabilities: we only needed to show that the probability of $E_{n,N}$ is bounded above by numbers that converge to zero.)
Consequently $\mathbb{P}_\infty(E_{n,N})$ cannot have any value greater than $0$, whence it must equal $0$. Accordingly,
$$\mathbb{P}_\infty(E_n) = \sum_{N=0}^\infty \mathbb{P}_\infty(E_{n,N}) = 0.$$
Where are we? We have just established that for any $n \ge 0$, the chance of observing exactly $n$ outcomes of $6H$ is nil. By adding up all these zeros, we conclude that $$\mathbb{P}_\infty(E) = \sum_{n=0}^\infty \mathbb{P}_\infty(E_n) = 0.$$ This is the chance that $6H$ occurs only finitely many times. Consequently, the chance that $6H$ occurs infinitely many times is $1-0 = 1$, QED.
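The geometric decay driving this conclusion is easy to check numerically: the chance that $6H$ never occurs in the next $M$ trials is $(1-p/6)^M$, which a quick simulation reproduces (a Python sketch; $p$ is set to an arbitrary illustrative value):

```python
import random

random.seed(1)
p = 0.5          # illustrative coin bias; the argument only needs p > 0
q = 1 - p / 6    # chance a single trial is NOT "6 and heads"

def no_6H_in(M):
    # True if "roll a 6, then flip heads" never happens in M trials.
    for _ in range(M):
        if random.randrange(6) == 5 and random.random() < p:
            return False
    return True

for M in (5, 20, 80):
    sim = sum(no_6H_in(M) for _ in range(20000)) / 20000
    print(M, round(q ** M, 4), round(sim, 4))  # both columns shrink geometrically
```

Both the exact value $q^M$ and its Monte Carlo estimate head to zero, matching the bound used for $\mathbb{P}_\infty(E_{n,N})$.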
Every statement in the preceding paragraph is so obvious as to be intuitively trivial. The exercise of demonstrating its conclusions with some rigor, using the definitions of sigma algebras and probability measures, helps show that these definitions are the right ones for working with probabilities, even when infinite sequences are involved.
27,864 | How to Prove that an Event Occurs Infinitely Often (Almost Surely)? | You have two nice answers addressing the question using basic probability principles. Here are two theorems that help you answer this question quickly in situations where such solutions are appropriate:
The Strong Law of Large Numbers (SLLN) tells you that for independent and identically distributed random variables with finite mean, the sample mean converges to the true mean almost surely.
The Second Borel-Cantelli Lemma (BC2) tells you that if the sum of probabilities of a sequence of independent events is infinite, then infinitely many of those events will happen almost surely.
Here's how this answers your question using SLLN:
Let $Y_i$ take the value 1 if you roll 6 and flip heads on trial $i$, and zero otherwise. Then $Y_i$ is a Bernoulli random variable with success probability $\theta:=p/6>0$. By the SLLN, $\sum_{i=1}^n Y_i/n \to \theta$ almost surely. But then we must have $\sum_{i=1}^n Y_i \to \infty$ almost surely, which is what we wanted to show.
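This argument is easy to illustrate by simulation (a Python sketch with an arbitrary illustrative $p$): the running mean of the $Y_i$ settles near $\theta = p/6$, so the running count must diverge.

```python
import random

random.seed(7)
p = 0.5          # illustrative value for P(heads); any p > 0 works
theta = p / 6    # P(roll a 6 AND flip heads) on a single trial

n = 60000
count = 0
for _ in range(n):
    count += random.random() < theta   # one Bernoulli(theta) trial
print(count, count / n)  # count keeps growing; count / n settles near theta
```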
Here's how this answers your question using BC2:
Let $E_i$ be the event that you roll 6 and flip heads on trial $i$. Then $P(E_i)=p/6>0$, for every $i$, and consequently $\sum_{i=1}^nP(E_i)\to\infty$. Thus, by BC2 infinitely many of the events $E_i$ will happen almost surely, which is what we wanted to show.
I stress that both these answers require a lot of machinery hidden in two theorems, and anyone interested in answering this and similar questions will have to decide whether they are appropriate.
27,865 | How to Prove that an Event Occurs Infinitely Often (Almost Surely)? | Without relying on any kind of advanced probability theory as in RayVelcoro's answer, one could proceed as follows. Let $I_j$ denote the event that the final heads was attained on the $j^{\text{th}}$ toss of the die. Then, by countable additivity, the probability of a finite number of heads is
$$
\sum_{j=0}^\infty P(I_j) \stackrel{?}{=} 0.
$$
It now suffices to show that $P(I_j) = 0$ for all $j$. To do this, let $A_{j,n}$ denote the event that "after $n$ die rolls, the last head occurred on the $j^{\text{th}}$ roll". Now, clearly $I_j \subseteq A_{j,n}$ for all $n$, and hence $P(I_j) \le P(A_{j,n})$; take $n \to \infty$ to see that $P(I_j) \le \lim_{n \to \infty} P(A_{j,n}) = 0$. $P(A_{j,n})$ can be computed directly in the obvious way (by independence, it is the probability of a success followed by $n-j-1$ failures).
(EDIT: This may be more-or-less equivalent to the answer of @whuber, but with a bit less formality/detail, as I am assuming OP is not in a measure-theoretic framework.)
27,866 | Aside from Durbin-Watson, what hypothesis tests can produce inconclusive results? | The Wikipedia article explains that the distribution of the test statistic under the null hypothesis depends on the design matrix—the particular configuration of predictor values used in the regression. Durbin & Watson calculated lower bounds for the test statistic under which the test for positive autocorrelation must reject, at given significance levels, for any design matrix, & upper bounds over which the test must fail to reject for any design matrix. The "inconclusive region" is merely the region where you'd have to calculate exact critical values, taking your design matrix into account, to get a definite answer.
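For context, the statistic whose critical values are being bounded is simple to compute from a residual series (a sketch using the standard formula $d=\sum_t (e_t-e_{t-1})^2 \,/\, \sum_t e_t^2$):

```python
def durbin_watson(resid):
    """Durbin-Watson statistic: d near 2 suggests no first-order
    autocorrelation; d well below 2 suggests positive autocorrelation,
    d well above 2 negative autocorrelation."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    return num / sum(e * e for e in resid)

print(durbin_watson([1, 1, 1, -1, -1, -1]))  # a positively correlated run: below 2
print(durbin_watson([1, -1, 1, -1, 1, -1]))  # alternating residuals: above 2
```

It is deciding *where* between the tabulated bounds this value falls, for a given design matrix, that creates the inconclusive region.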
An analogous situation would be having to perform a one-sample one-tailed t-test when you know just the t-statistic, & not the sample size†: 1.645 & 6.31 (corresponding to infinite degrees of freedom & only one) would be the bounds for a test of size 0.05.
As far as decision theory goes—you've a new source of uncertainty to take into account besides sampling variation, but I don't see why it shouldn't be applied in the same fashion as with composite null hypotheses. You're in the same situation as someone with an unknown nuisance parameter, regardless of how you got there; so if you need to make a reject/retain decision while controlling Type I error over all possibilities, reject conservatively (i.e. when the Durbin–Watson statistic's under the lower bound, or the t-statistic's over 6.31).
† Or perhaps you've lost your tables; but can remember some critical values for a standard Gaussian, & the formula for the Cauchy quantile function.
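The two bounds in the $t$-test analogy can be reproduced with exactly the footnote's ingredients: a $t$ with infinite degrees of freedom is a standard Gaussian, and a $t$ with one degree of freedom is a standard Cauchy, whose quantile function is $\tan(\pi(p - \tfrac12))$. A stdlib-only Python sketch:

```python
from math import pi, tan
from statistics import NormalDist

alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha)    # t with infinite df: about 1.645
cauchy_crit = tan(pi * (1 - alpha - 0.5))   # t with 1 df = Cauchy: about 6.314
```

Any observed one-tailed $t$-statistic below `z_crit` or above `cauchy_crit` gives a definite answer without knowing the degrees of freedom; anything in between is the "inconclusive region" of the analogy.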
27,867 | Aside from Durbin-Watson, what hypothesis tests can produce inconclusive results? | Another example of a test with possibly inconclusive results is a binomial test for a proportion when only the proportion, not the sample size, is available. This is not completely unrealistic — we often see or hear poorly reported claims of the form "73% of people agree that ..." and so on, where the denominator is not available.
Suppose for example we only know the sample proportion rounded correct to the nearest whole percent, and we wish to test $H_0: \pi = 0.5$ against $H_1: \pi \neq 0.5$ at the $\alpha = 0.05$ level.
If our observed proportion was $p=5\%$ then the sample size for the observed proportion must have been at least 19, since $\frac{1}{19}$ is the fraction with the lowest denominator which would round to $5\%$. We do not know whether the observed number of successes was actually 1 out of 19, 1 out of 20, 1 out of 21, 1 out of 22, 2 out of 37, 2 out of 38, 3 out of 55, 5 out of 100 or 50 out of 1000... but whichever of these it is, the result would be significant at the $\alpha = 0.05$ level.
On the other hand, if we know the sample proportion was $p = 49\%$ then we do not know whether the observed number of successes was 49 out of 100 (which would not be significant at this level) or 4900 out of 10,000 (which just attains significance). So in this case the results are inconclusive.
Note that with rounded percentages, there is no "fail to reject" region: even $p=50\%$ is consistent with samples like 49,500 successes out of 100,000, which would result in rejection, as well as samples like 1 success out of 2 trials, which would result in failure to reject $H_0$.
Unlike the Durbin-Watson test I've never seen tabulated results for which percentages are significant; this situation is more subtle as there are not upper and lower bounds for the critical value. A result of $p=0\%$ would clearly be inconclusive, since zero successes in one trial would be insignificant yet no successes in a million trials would be highly significant. We have already seen that $p=50\%$ is inconclusive but that there are significant results e.g. $p=5\%$ in between. Moreover, the lack of a cut-off is not just because of the anomalous cases of $p=0\%$ and $p=100\%$. Playing around a little, the least significant sample corresponding to $p=16\%$ is 3 successes in a sample of 19, in which case $\Pr(X \leq 3) \approx 0.00221 < 0.025$ so would be significant; for $p=17\%$ we might have 1 success in 6 trials which is insignificant, $\Pr(X \leq 1) \approx 0.109 > 0.025$ so this case is inconclusive (since there are clearly other samples with $p=17\%$ which would be significant); for $p=18\%$ there may be 2 successes in 11 trials (insignificant, $\Pr(X \leq 2) \approx 0.0327 > 0.025$) so this case is also inconclusive; but for $p=19\%$ the least significant possible sample is 3 successes in 16 trials with $\Pr(X \leq 3) \approx 0.0106 < 0.025$ so this is significant again.
In fact $p=24\%$ is the highest rounded percentage below 50% to be unambiguously significant at the 5% level (its highest p-value would be for 4 successes in 17 trials and is just significant), while $p=13\%$ is the lowest non-zero result which is inconclusive (because it could correspond to 1 success in 8 trials). As can be seen from the examples above, what happens in between is more complicated! The graph below has red line at $\alpha=0.05$: points below the line are unambiguously significant but those above it are inconclusive. The pattern of the p-values is such that there are not going to be single lower and upper limits on the observed percentage for the results to be unambiguously significant.
R code
# need rounding function that rounds 5 up
round2 = function(x, n) {
posneg = sign(x)
z = abs(x)*10^n
z = z + 0.5
z = trunc(z)
z = z/10^n
z*posneg
}
# make a results data frame for various trials and successes
results <- data.frame(successes = rep(0:100, 100),
trials = rep(1:100, each=101))
results <- subset(results, successes <= trials)
results$percentage <- round2(100*results$successes/results$trials, 0)
results$pvalue <- mapply(function(x,y) {
binom.test(x, y, p=0.5, alternative="two.sided")$p.value}, results$successes, results$trials)
# make a data frame for rounded percentages and identify which are unambiguously sig at alpha=0.05
leastsig <- sapply(0:100, function(n){
max(subset(results, percentage==n, select=pvalue))})
percentages <- data.frame(percentage=0:100, leastsig)
percentages$significant <- percentages$leastsig <= 0.05
subset(percentages, significant==TRUE)
# some interesting cases
subset(results, percentage==13) # inconclusive at alpha=0.05
subset(results, percentage==24) # unambiguously sig at alpha=0.05
# plot graph of greatest p-values, results below red line are unambiguously significant at alpha=0.05
plot(percentages$percentage, percentages$leastsig, panel.first = abline(v=seq(0,100,by=5), col='grey'),
pch=19, col="blue", xlab="Rounded percentage", ylab="Least significant two-sided p-value", xaxt="n")
axis(1, at = seq(0, 100, by = 10))
abline(h=0.05, col="red")
(The rounding code is snipped from this StackOverflow question.)
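The headline claims above — every sample rounding to $p=24\%$ is significant, while $p=13\%$ is inconclusive — can be re-derived outside R. The following Python sketch is a re-implementation of the same search, not the original code: an exact two-sided binomial p-value (using `binom.test`'s definition), round-half-up matching `round2`, and the same $n \le 100$ search range:

```python
from math import comb

def binom_p_two_sided(k, n):
    # exact two-sided test of p = 0.5: sum the probabilities of all outcomes
    # no more likely than the observed one (binom.test's definition)
    probs = [comb(n, i) / 2 ** n for i in range(n + 1)]
    return sum(q for q in probs if q <= probs[k] * (1 + 1e-9))

def worst_p(pct, max_n=100):
    # largest p-value over all samples (k successes in n trials) whose
    # percentage rounds, half up, to pct
    worst = 0.0
    for n in range(1, max_n + 1):
        for k in range(n + 1):
            if int(100 * k / n + 0.5) == pct:
                worst = max(worst, binom_p_two_sided(k, n))
    return worst
```

With these, `worst_p(24)` (driven by 4 successes in 17 trials) falls just below 0.05, while `worst_p(13)` (driven by 1 success in 8 trials) does not, matching the text.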
27,868 | When to use (and not use) the rule of three | The rule of three in statistics states that if an event is binomially
distributed and does not occur in n trials, the maximum chance of it
occurring is approximately 3/n.
No, that's not what it says. It says that a 95% confidence interval for the actual chance of it occurring is approximately [0, 3/n]. That is not the same thing. The largest value for the 'chance of occurring' contained in the interval is indeed 3/n, though the question of which of the values within the interval is most likely is not answered.
The rule says: 'guess that the true chance of occurring is 3/n or less and you will be wrong about 5% of the time.'
Suppose we have a roulette table with only two options, red or black. The
chance of either of these occurring is clearly 1/2.
Exactly, so there is no need for a confidence interval because the 'chance of occurring' is known. You could, on the other hand, test the coverage of the approximate interval that the rule provides using such a wheel.
Suppose, however, that we don't see black for 10 turns of the wheel.
We would then reason that the chance of black occurring is at most 3/10,
which is not true. Is this a misapplication of the rule? If so, why, and
how does one determine when it is proper to apply it.
It is a misapplication of the idea of a confidence interval, which is applied to bound the range of plausible values of things that are unknown, and which in any particular application need not contain the true value if it becomes known.
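Where the rule's "3" comes from can be made concrete: the exact 95% upper bound for the chance of occurring, after zero events in $n$ trials, is the largest $p$ with $(1-p)^n \ge 0.05$, i.e. $p = 1 - 0.05^{1/n} \approx -\ln(0.05)/n \approx 3/n$. A small Python check ($n = 1500$ is just an illustrative sample size):

```python
from math import log

n = 1500
exact = 1 - 0.05 ** (1 / n)   # exact 95% upper bound: solve (1 - p)^n = 0.05
rule = 3 / n                  # the rule of three
# -log(0.05) is about 2.996, which is where the "3" comes from
```

For the roulette example this machinery is beside the point, exactly as the answer says: the interval bounds an unknown chance, and the chance of black is known.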
27,869 | When to use (and not use) the rule of three | From Wikipedia, an example of "the rule of 3" is described as "For example, a pain-relief drug is tested on 1500 human subjects, and no adverse event is recorded. From the rule of three, it can be concluded with 95% confidence that fewer than 1 person in 500 (or 3/1500) will experience an adverse event."
The following derivation helped me understand the "rule of three" from another perspective.
Assume n = 1500, and the worst possible case is that an adverse event is found when n = 1501, i.e., p_ = 1/1501 (=0.000666).
The Standard Error (SE) of p_ can be calculated as the square root of p_*(1-p_)/n (=0.000666), which can be approximated as 1/1500 (= 0.000667), that is, SE ~ 1/n.
Assuming that p_ (the observed value of the true p) has a normal distribution centered at p, then we will have 97.7% confidence (a one-sided upper bound of 2 SE, since Φ(2) ≈ 0.977) that, if we repeat the sampling, the true p will be smaller than 1/n + 2*SE = 1/n + 2/n = 3/n (= 0.002).
If the p_ value (=1/1501) is used to calculate the value of the upper bound, the result is 0.001998, which is very close to 0.002.
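The arithmetic above is easy to transcribe and check; the Python sketch below uses $n = 1500$ as in the quoted example, and the one-sided coverage of a 2-SE bound is $\Phi(2) \approx 0.9772$:

```python
from statistics import NormalDist

n = 1500
p_hat = 1 / (n + 1)                      # worst case: first adverse event on trial n + 1
se = (p_hat * (1 - p_hat) / n) ** 0.5    # approximately 1/n, as the derivation notes
upper = p_hat + 2 * se                   # approximately 3/n = 0.002
coverage = NormalDist().cdf(2)           # one-sided coverage of a 2-SE bound
```

This reproduces both numbers in the text: `upper` is about 0.001998 and `3/n` is 0.002.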
27,870 | How to Transform a Folded Normal Distribution into a Gamma Distribution? | When thinking about PDFs,
Focus on the form of the function by ignoring additive and multiplicative constants.
Always, always, include the differential element.
For example, a generic Normal PDF is of the form
$$f(x;\mu, \sigma) = \frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(-\frac{1}{2} \left(\frac{x-\mu}{\sigma}\right)^2\right)$$
Following (1), strip this down to $\exp(-x^2)$ and following (2), multiply by $dx$, giving
$$f(x) = \exp(-x^2)dx.$$
Consider now the generic Gamma PDF
$$g(y; \alpha, \beta) = \frac{1}{\beta\,\Gamma(\alpha)} \left(\frac{y}{\beta}\right)^{\alpha-1}\exp(-y/\beta).$$
Following the same two rules to focus on the essential part of the PDF produces
$$g(y) = y^{\alpha-1}\exp(-y)dy.$$
Notice that the constant $\alpha-1$ stayed because it is neither added to nor multiplies the variable $y$ itself: it is a power. We are going to have to figure out what the possible values of $\alpha$ could be.
Compare $f$ to $g$ and ask,
What should $y = y(x)$ be in order to make the two PDFs look more alike?
The only thing that is obviously common to the two forms is the exponential. Ignoring everything else, compare the two exponential parts of $f$ and $g$:
$$\exp(-x^2)\text{ versus }\exp(-y).$$
To convert one into the other, our only choice is
$$y = x^2.$$
Here is where (2) comes in: when you substitute $x^2$ for $y$ in $g$, make sure to include the differential element. Let's do that step first:
$$dy = d(x^2) = 2 x dx.$$
The last step differentiates $x^2$ (which is all that "$d$" asks us to do). Therefore
$$g(y)\vert_{y \to x^2} = (x^2)^{\alpha-1} \exp(-x^2) (2 x dx) = 2 x^{2\alpha-1}\exp(-x^2) dx.$$
Once again, drop any multiplicative or additive constants and compare:
$$x^{2\alpha-1}\exp(-x^2) dx = g(y) \text{ versus } f(x) = \exp(-x^2)dx.$$
We have accomplished what we intended: $\exp(-x^2)$ is common to both expressions. Although they still look different insofar as the left hand side still has an extra factor of $x^{2\alpha-1}$, they actually will be the same provided
$$x^{2\alpha-1} = \text{ constant }.$$
This uniquely determines $\alpha=1/2$. Although all these calculations were performed to transform a Normal distribution to a Gamma distribution, in review you can see that they work for the folded Normal, which has exactly the same form as the Normal PDF. Now you know which Gamma distribution to use. The rest is a matter of working out the value of $\beta$, which I leave to the interested reader.
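The bookkeeping above — substitute $y = x^2$ and keep the differential element — can be verified numerically: pushing the folded-normal density through $y = x^2$ (multiplying by $|dx/dy| = 1/(2\sqrt y)$) reproduces the Gamma density with shape $1/2$ and scale $2$ pointwise. A stdlib-only Python sketch:

```python
from math import exp, gamma, pi, sqrt

def folded_normal_pdf(x):
    # density of |Z| for Z ~ N(0, 1), valid for x > 0
    return 2 / sqrt(2 * pi) * exp(-x * x / 2)

def pushforward_pdf(y):
    # density of Y = X^2: f_X(sqrt(y)) * |dx/dy|, with dx/dy = 1/(2 sqrt(y))
    x = sqrt(y)
    return folded_normal_pdf(x) / (2 * x)

def gamma_pdf(y, shape=0.5, scale=2.0):
    # generic Gamma density, same parameterization as in the text
    return y ** (shape - 1) * exp(-y / scale) / (gamma(shape) * scale ** shape)
```

Evaluating both at any $y > 0$ gives identical values (up to floating-point error), confirming $\alpha = 1/2$ and, for this scaling, $\beta = 2$.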
27,871 | How to Transform a Folded Normal Distribution into a Gamma Distribution? | While the method described by whuber is a very general one, in this case, there is a much easier method for getting the answer: indeed a method that might
be describable as
more from a statistician's perspective than a probabilist's perspective.
Following up on my hint
on the main question, consider a standard normal random variable $Z$ whose square
$Z^2$ has a $\chi^2(1)$ distribution which is also a Gamma
distribution with (shape and
scale) parameters
$\left(\frac 12, 2\right)$. Now, the given folded normal random
variable $X$ has the same distribution as $|Z|$, and thus $Y = X^2$ has the same distribution as $|Z|^2 = Z^2 \sim \chi^2(1)$. Thus, the function $g(x)$ that
is sought is just $g(x) = x^2$ and the resulting Gamma density has
shape and scale parameters $\left(\frac 12, 2\right)$. More generally,
$cX^2$ has a Gamma density with shape and scale parameters $\left(\frac 12, 2c\right)$.
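A seeded Monte Carlo sanity check of the distributional identity $X^2 \stackrel{d}{=} Z^2 \sim \chi^2(1) = \text{Gamma}(\tfrac12, 2)$: the mean and variance of squared standard normals should be near $\alpha\beta = 1$ and $\alpha\beta^2 = 2$. (Python sketch; the seed and sample size are arbitrary.)

```python
import random

random.seed(0)  # arbitrary seed for reproducibility
ys = [random.gauss(0, 1) ** 2 for _ in range(200_000)]

mean = sum(ys) / len(ys)                           # Gamma(1/2, 2) mean: alpha*beta = 1
var = sum((y - mean) ** 2 for y in ys) / len(ys)   # variance: alpha*beta^2 = 2
```

Scaling each draw by $c$ multiplies the mean by $c$ and the variance by $c^2$, consistent with the shape-$\tfrac12$, scale-$2c$ claim.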
27,872 | ROC curves for unbalanced datasets | ROC curves are insensitive to class balance. The straight line you obtain for a random classifier is already the result of varying the probability with which it predicts "positive": probability 0 brings you to (0, 0), probability 1 brings you to (1, 1), and anything in between fills the diagonal.
Nothing changes in an imbalanced setting.
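The insensitivity is easy to demonstrate on synthetic scores (Python sketch; the score distributions, seed, and imbalance factor are illustrative): the AUC — the probability that a random positive outscores a random negative, i.e. the area under the ROC curve — is unchanged when the negative class is replicated tenfold.

```python
import random

def auc(pos, neg):
    # P(random positive score > random negative score), ties counting 1/2
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(1)  # arbitrary seed for reproducibility
pos = [random.gauss(1, 1) for _ in range(200)]
neg = [random.gauss(0, 1) for _ in range(200)]

# a 10x more imbalanced dataset with the same score distributions
assert abs(auc(pos, neg) - auc(pos, neg * 10)) < 1e-9
```

Because both the true-positive rate and false-positive rate are computed within their own class, duplicating one class rescales numerator and denominator together and the curve is untouched.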
27,873 | Is there an elegant/insightful way to understand this linear regression identity for multiple $R^2$? | The following three formulas are well known, they are found in many books on linear regression. It is not difficult to derive them.
$\beta_1= \frac {r_{YX_1}-r_{YX_2}r_{X_1X_2}} {1-r_{X_1X_2}^2}$
$\beta_2= \frac {r_{YX_2}-r_{YX_1}r_{X_1X_2}} {1-r_{X_1X_2}^2}$
$R^2= \frac {r_{YX_1}^2+r_{YX_2}^2-2 r_{YX_1}r_{YX_2}r_{X_1X_2}} {1-r_{X_1X_2}^2}$
If you substitute the two betas into your equation
$R^2 = r_{YX_1} \beta_1 + r_{YX_2} \beta_2$, you will get the above formula for R-square.
Here is a geometric "insight". Below are two pictures showing regression of $Y$ by $X_1$ and $X_2$. This kind of representation is known as variables-as-vectors in subject space (please read what it is about). The pictures are drawn after all the three variables were centered, and so (1) every vector's length = st. deviation of the respective variable, and (2) angle (its cosine) between every two vectors = correlation between the respective variables.
$\hat{Y}$ is the regression prediction (orthogonal projection of $Y$ onto "plane X"); $e$ is the error term; $cos \angle{Y \hat{Y}}={|\hat Y|}/|Y|$, multiple correlation coefficient.
The left picture depicts skew coordinates of $\hat{Y}$ on variables $X_1$ and $X_2$. We know that such coordinates relate the regression coefficients. Namely, the coordinates are: $b_1|X_1|=b_1\sigma_{X_1}$ and $b_2|X_2|=b_2\sigma_{X_2}$.
And the right picture shows corresponding perpendicular coordinates. We know that such coordinates relate the zero order correlation coefficients (these are cosines of orthogonal projections). If $r_1$ is the correlation between $Y$ and $X_1$ and $r_1^*$ is the correlation between $\hat Y$ and $X_1$
then the coordinate is $r_1|Y|=r_1\sigma_{Y} = r_1^*|\hat{Y}|=r_1^*\sigma_{\hat{Y}}$. Likewise for the other coordinate, $r_2|Y|=r_2\sigma_{Y} = r_2^*|\hat{Y}|=r_2^*\sigma_{\hat{Y}}$.
So far these were general explanations of the vector representation of linear regression. Now we turn to the task of showing how it leads to $R^2 = r_1 \beta_1 + r_2 \beta_2$.
First of all, recall that in their question @Corone put forward the condition that the expression is true when all the three variables are standardized, that is, not just centered but also scaled to variance 1. Then (i.e. implying $|X_1|=|X_2|=|Y|=1$ to be the "working parts" of the vectors) we have coordinates equal to: $b_1|X_1|=\beta_1$; $b_2|X_2|=\beta_2$; $r_1|Y|=r_1$; $r_2|Y|=r_2$; as well as $R=|\hat Y|/|Y|=|\hat Y|$. Redraw, under these conditions, just the "plane X" of the pictures above:
On the picture, we have a pair of perpendicular coordinates and a pair of skew coordinates of the same vector $\hat Y$ of length $R$. There exists a general rule for obtaining perpendicular coordinates from skew ones (or back): $\bf P = S C$, where $\bf P$ is the points $\times$ axes matrix of perpendicular coordinates; $\bf S$ is the same-sized matrix of skew coordinates; and $\bf C$ is the axes $\times$ axes symmetric matrix of angles (cosines) between the non-orthogonal axes.
$X_1$ and $X_2$ are the axes in our case, with $r_{12}$ being the cosine between them. So, $r_1 = \beta_1 + \beta_2 r_{12}$ and $r_2 = \beta_1 r_{12} + \beta_2$.
Substitute these $r$s expressed via $\beta$s into @Corone's statement $R^2 = r_1 \beta_1 + r_2 \beta_2$, and you'll get that $R^2 = \beta_1^2 + \beta_2^2 + 2\beta_1\beta_2r_{12}$, which is true because it is exactly how the diagonal of a parallelogram (tinted on the picture) is expressed via its adjacent sides (the quantity $\beta_1\beta_2r_{12}$ being the scalar product).
This same thing is true for any number of predictors $X$. Unfortunately, it is impossible to draw similar pictures with many predictors.
Please see similar pictures in this great answer.
Is there an elegant/insightful way to understand this linear regression identity for multiple $R^2$?
The following three formulas are well known, they are found in many books on linear regression. It is not difficult to derive them.
$\beta_1= \frac {r_{YX_1}-r_{YX_2}r_{X_1X_2}} {1-r_{X_1X_2}^2}$
$\beta_2= \frac {r_{YX_2}-r_{YX_1}r_{X_1X_2}} {1-r_{X_1X_2}^2}$
$R^2= \frac {r_{YX_1}^2+r_{YX_2}^2-2 r_{YX_1}r_{YX_2}r_{X_1X_2}} {1-r_{X_1X_2}^2}$
If you substitute the two betas into your equation
$R^2 = r_{YX_1} \beta_1 + r_{YX_2} \beta_2$, you will get the above formula for R-square.
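As a sanity check (an added sketch, not part of the original answers), the identities above can be verified numerically: simulate data, standardize all three variables, and compare the least-squares betas, the identity $R^2 = r_1 \beta_1 + r_2 \beta_2$, and the parallelogram form $R^2 = \beta_1^2 + \beta_2^2 + 2\beta_1\beta_2 r_{12}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate correlated data, then standardize all three variables
# (centered and scaled to unit variance), as the identity requires.
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)          # correlated predictors
y = 1.5 * x1 - 0.7 * x2 + rng.normal(size=n)

def standardize(v):
    return (v - v.mean()) / v.std()

x1, x2, y = standardize(x1), standardize(x2), standardize(y)

# Zero-order correlations.
r1 = np.corrcoef(y, x1)[0, 1]
r2 = np.corrcoef(y, x2)[0, 1]
r12 = np.corrcoef(x1, x2)[0, 1]

# Standardized regression coefficients via least squares (no intercept
# needed because everything is centered).
X = np.column_stack([x1, x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# The closed-form expressions for the betas ...
assert np.isclose(beta[0], (r1 - r2 * r12) / (1 - r12**2))
assert np.isclose(beta[1], (r2 - r1 * r12) / (1 - r12**2))

# ... the identity R^2 = r1*beta1 + r2*beta2 ...
yhat = X @ beta
R2 = (yhat @ yhat) / (y @ y)                      # ESS / TSS
assert np.isclose(R2, r1 * beta[0] + r2 * beta[1])

# ... and the parallelogram form from the geometric argument.
assert np.isclose(R2, beta[0]**2 + beta[1]**2 + 2 * beta[0] * beta[1] * r12)
```

All three checks hold exactly (up to floating point) for any data set once the variables are standardized, which is the condition stated in the question.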
Is there an elegant/insightful way to understand this linear regression identity for multiple $R^2$?
The hat matrix is idempotent.
(This is a linear-algebraic way of stating that OLS is an orthogonal projection of the response vector onto the space spanned by the variables.)
Recall that by definition
$$R^2 = \frac{ESS}{TSS}$$
where
$$ESS = (\hat Y)^\prime \hat Y$$
is the sum of squares of the (centered) predicted values and
$$TSS = Y^\prime Y$$
is the sum of squares of the (centered) response values. Standardizing $Y$ beforehand to unit variance also implies
$$TSS = Y^\prime Y = n.$$
Recall, too, that the estimated coefficients are given by
$$\hat\beta = (X^\prime X)^{-} X^\prime Y,$$
whence
$$\hat Y = X \hat \beta = X (X^\prime X)^{-} X^\prime Y = H Y$$
where $H$ is the "hat matrix" effecting the projection of $Y$ onto its least squares fit $\hat Y$. It is symmetric (which is obvious from its very form) and idempotent. Here is a proof of the latter for those unfamiliar with this result. It's just shuffling parentheses around:
$$\eqalign{H^\prime H = H H &=\left( X (X^\prime X)^{-} X^\prime\right)\left(X (X^\prime X)^{-} X^\prime \right) \\ &= X (X^\prime X)^{-} \left(X^\prime X \right) (X^\prime X)^{-} X^\prime \\ &= X (X^\prime X)^{-} X^\prime = H.
}$$
Therefore
$$R^2 = \frac{ESS}{TSS} = \frac{1}{n} (\hat Y)^\prime \hat Y = \frac{1}{n}Y^\prime H^\prime H Y = \frac{1}{n}Y^\prime H Y = \left(\frac{1}{n}Y^\prime X\right) \hat \beta.$$
The crucial move in the middle used the idempotence of the hat matrix. The right hand side is your magical formula because $\frac{1}{n}Y^\prime X$ is the (row) vector of correlation coefficients between $Y$ and the columns of $X$.
Properly interpret the alpha / beta parameters in the Beta Distribution
That is one useful interpretation of the Beta distribution when it is used as a conjugate prior distribution to the binomial distribution. It breaks down a bit when you consider the possibility that it is perfectly legitimate for $\alpha + \beta < 1$ or even for $\alpha + \beta < 1/2$, meaning that $\alpha + \beta$ being the prior sample size is also only one possible interpretation of the parameters.
More generally, these are concentration parameters, which are a class of parameters that govern probability distributions over probability distributions. Concentration parameters have an interesting property. The smaller they are, the more sparse the distribution is. In the case of the Beta distribution, as $\alpha,\beta \rightarrow 0$, more and more of the probability is concentrated on the probability parameter $p$ being 0 or 1. Another interesting property of a concentration parameter is that when they all equal one, all possibilities are equally likely. Yet another property is that, as the concentration parameters get larger, the distributions tighten about the expectations.
This is one reason why it is sometimes useful to reparameterize the Beta distribution by one of its measures of central tendency (say its mean) and a dispersion parameter that governs the uncertainty in that mean. There are several ways to do this, up to and including expanding the Beta distribution model so that the mean and dispersion parameter are independent of one another (which is not the case for the two-parameter Beta distribution).
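For intuition, here is a small simulation (an added sketch, with illustrative parameter values) of the two behaviours described above, using the mean/concentration parameterization $\alpha = \mu\nu$, $\beta = (1-\mu)\nu$:

```python
import numpy as np

rng = np.random.default_rng(0)
draws = 100_000

# Small concentration parameters pile the mass near p = 0 and p = 1:
sparse = rng.beta(0.1, 0.1, size=draws)
mass_at_extremes = np.mean((sparse < 0.05) | (sparse > 0.95))
assert mass_at_extremes > 0.5       # most draws land within 0.05 of an edge

# Mean/concentration reparameterization: alpha = mu * nu, beta = (1 - mu) * nu.
# The mean stays at mu = alpha / (alpha + beta) while a larger concentration
# nu tightens the distribution around it.
mu = 0.3
tight = rng.beta(mu * 50.0, (1 - mu) * 50.0, size=draws)
loose = rng.beta(mu * 2.0, (1 - mu) * 2.0, size=draws)
assert abs(tight.mean() - mu) < 0.01
assert abs(loose.mean() - mu) < 0.01
assert tight.var() < loose.var()
```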
Non-orthogonal technique analogous to PCA
Independent Component Analysis should be able to provide you with a good solution. It is able to decompose non-orthogonal components (like in your case) by assuming that your measurements result from a mixture of statistically independent variables.
There are plenty of good tutorials on the Internet, and quite a few freely available implementations to try out (for example in scikit or MDP).
When does ICA not work?
As with other algorithms, ICA is optimal when the assumptions under which it was derived apply. Concretely,
sources are statistically independent
the independent components are non-Gaussian
the mixing matrix is invertible
ICA returns an estimation of the mixing matrix and the independent components.
When your sources are Gaussian, ICA cannot find the components. Imagine you have two independent components, $x_{1}$ and $x_{2}$, which are $N(0,I)$. Then,
$$
p(x_{1}, x_{2}) = p(x_{1})p(x_{2}) = \frac{1}{2\pi}\exp \left( -\frac{x_{1}^{2}+x_{2}^{2}}{2} \right) = \frac{1}{2\pi}\exp \left( -\frac{||\mathbf{x}||^{2}}{2} \right)
$$
where $||\cdot||$ is the norm of the two-dimensional vector. If they are mixed with an orthogonal transformation (for example a rotation $R$), we have $||R\mathbf{x}|| = ||\mathbf{x}||$, which means that the probability distribution does not change under the rotation. Hence, ICA cannot find the mixing matrix from the data.
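The argument above can be illustrated numerically (a sketch added here, not from the original answer): rotating Gaussian data leaves its distribution unchanged, so the rotation is unrecoverable, while rotating non-Gaussian sources changes higher-order statistics such as kurtosis, which is what ICA exploits.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# A rotation, standing in for the part of the mixing matrix that is left
# undetermined after whitening.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Gaussian case: the rotated sample has the same covariance, and the
# density depends only on ||x||, so the rotation cannot be recovered.
G = rng.normal(size=(n, 2))
assert np.allclose(np.cov(G.T), np.cov((G @ R.T).T), atol=0.02)

# Non-Gaussian case: rotation changes higher-order statistics (here the
# excess kurtosis of the first coordinate), so the independent components
# remain identifiable.
def excess_kurtosis(v):
    v = (v - v.mean()) / v.std()
    return (v**4).mean() - 3.0

S = rng.uniform(-1, 1, size=(n, 2))          # independent uniform sources
k_orig = excess_kurtosis(S[:, 0])            # about -1.2 for a uniform
k_rot = excess_kurtosis((S @ R.T)[:, 0])     # the mixture is closer to Gaussian
assert abs(k_orig - k_rot) > 0.1
assert abs(k_rot) < abs(k_orig)              # rotation pushed kurtosis toward 0
```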
Non-orthogonal technique analogous to PCA
There are PCA-like procedures for the so-called "oblique" case. In stat-software like SPSS (and possibly also in its freeware clone PSPP) one finds the equivalently named "oblique rotations", with instances named "oblimin", "promax" and others. If I understand things correctly, the software tries to "rectangularize" the factor loadings by re-calculating their coordinates in an orthogonal, euclidean space (as for instance shown in your picture) into coordinates of a space whose axes are non-orthogonal, perhaps with some technique known from multiple regression. Moreover, I think this works only iteratively and consumes one or more degrees of freedom in the statistical testing of the model.
See Wikipedia for rotation methods in factor analysis
An article with an example comparing PCA and oblique rotation
The reference manual of SPSS (at the IBM site) for oblique rotations even contains formulae for the computation.
[Update] (Oops, sorry, I just checked that PSPP does not provide "rotations" of the oblique type.)
Non-orthogonal technique analogous to PCA
I don't have much experience with it, but Vidal, Ma, and Sastry's Generalized PCA was made for a very similar problem.
Non-orthogonal technique analogous to PCA
The other answers have already given some useful hints about techniques you can consider, but nobody seems to have pointed out that your assumption is wrong: the lines shown in blue on your schematic picture are NOT local maxima of the variance.
To see it, notice that the variance in direction $\mathbf{w}$ is given by $\mathbf{w}^\top\mathbf{\Sigma}\mathbf{w}$, where $\mathbf{\Sigma}$ denotes the covariance matrix of the data. To find local maxima we need to set the derivative of this expression to zero. As $\mathbf{w}$ is constrained to have unit length, we need to add a term $\lambda(\mathbf{w}^\top\mathbf{w}-1)$, where $\lambda$ is a Lagrange multiplier. Differentiating, we obtain the following equation: $$ \mathbf{\Sigma}\mathbf{w} - \lambda \mathbf{w} = 0.$$
This means that $\mathbf{w}$ should be an eigenvector of the covariance matrix, i.e. one of the principal vectors. In other words, PCA gives you all local maxima, there are no others.
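This can be checked numerically (an added sketch): the eigenvectors of $\mathbf{\Sigma}$ are exactly the unit vectors at which the sphere-tangent gradient of $\mathbf{w}^\top\mathbf{\Sigma}\mathbf{w}$ vanishes, while a generic direction is not stationary.

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.normal(size=(4, 4))
Sigma = A @ A.T                                # a random covariance matrix

eigvals, eigvecs = np.linalg.eigh(Sigma)

# Each eigenvector is a stationary point of w' Sigma w on the unit sphere:
# the gradient Sigma w is parallel to w there (Sigma w = lambda w).
for lam, w in zip(eigvals, eigvecs.T):
    assert np.allclose(Sigma @ w, lam * w)

# A generic unit direction is NOT stationary: the component of the gradient
# tangent to the sphere, Sigma w - (w' Sigma w) w, does not vanish.
w = rng.normal(size=4)
w /= np.linalg.norm(w)
tangent_grad = Sigma @ w - (w @ Sigma @ w) * w
assert np.linalg.norm(tangent_grad) > 1e-6
```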
Do I have to cite 'lme4' when using 'lmerTest'?
lmerTest basically offers a bunch of convenience functions on top of lme4. The actually important software is lme4, which implements the model framework. You should definitely give the reference for lme4 as specified in citation("lme4"):
Douglas Bates, Martin Maechler, Ben Bolker and Steven Walker (2014).
lme4: Linear mixed-effects models using Eigen and S4. R package
version 1.0-6. http://CRAN.R-project.org/package=lme4
This is not only important for giving well-deserved attribution, but also to state the lme4 version number.
Is calculating "actual coverage probability" the same thing as calculating a "credible interval"?
In general, the actual coverage probability will never be equal to the nominal probability when you are working with a discrete distribution.
The confidence interval is defined as a function of the data. If you are working with the binomial distribution, there are only finitely many possible outcomes ($ n+1$ to be precise), so there are only finitely many possible confidence intervals. Since the parameter $ p $ is continuous, it's pretty easy to see that the coverage probability (which is a function of $ p $) can do no better than be approximately 95% (or whatever).
It is generally true that methods based on the CLT will have coverage probabilities below the nominal value, but other methods can actually be more conservative.
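To illustrate the point about discreteness (an added sketch, using the simple Wald interval purely as an example): with $n$ binomial trials there are only $n+1$ possible intervals, so coverage as a function of $p$ is a discontinuous step-like function that cannot equal the nominal level everywhere.

```python
from math import comb, sqrt

# Only n + 1 possible outcomes, hence only n + 1 possible intervals.
n = 10
intervals = []
for x in range(n + 1):
    p_hat = x / n
    se = sqrt(p_hat * (1 - p_hat) / n)               # Wald standard error
    intervals.append((p_hat - 1.96 * se, p_hat + 1.96 * se))

assert len(set(intervals)) == n + 1                  # finitely many intervals

def coverage(p):
    """Exact coverage at true value p: the total probability of the
    outcomes x whose interval contains p."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x)
               for x, (lo, hi) in enumerate(intervals) if lo <= p <= hi)

# Coverage jumps as p crosses an interval endpoint (one lies near 0.2964):
assert abs(coverage(0.295) - coverage(0.298)) > 0.02
```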
Is calculating "actual coverage probability" the same thing as calculating a "credible interval"?
It's nothing to do with Bayesian credible intervals vs frequentist confidence intervals. A 95% (say) confidence interval is defined as giving at least 95% coverage whatever the true value of the parameter $\pi$. So when the nominal coverage is 95%, the actual coverage may be 97% when $\pi=\pi_1$, 96.5% when $\pi=\pi_2$, but for no value of $\pi$ is it less than 95%. The issue (i.e. a discrepancy between nominal & actual coverage) arises with discrete distributions like the binomial.
As an illustration, consider observing $x$ successes from $n=6$ binomial trials with unknown success probability $\pi$:
$$
\begin{array}{cccc}
x & \pi_\mathrm{U} & \Pr(X= x \mid \pi=0.7) & I(\pi_\mathrm{U}\geq 0.7)\\
0 & 0.3930378 & 0.000729 & 0\\
1 & 0.5818034 & 0.010206 & 0\\
2 & 0.7286616 & 0.059535 & 1\\
3 & 0.8468389 & 0.185220 & 1\\
4 & 0.9371501 & 0.324135 & 1\\
5 & 0.9914876 & 0.302526 & 1\\
6 & 1.0000000 & 0.117649 & 1\\
\end{array}
$$
The first column shows the possible observed values of $x$. The second shows the exact† $95\%$ upper‡ confidence bound $\pi_\mathrm{U} =\pi: [\Pr(X>x | \pi)=0.95]$ that you would calculate in each case. Now suppose $\pi=0.7$: the third column shows the probability of each observed value of $x$ under this supposition; the fourth shows for which cases the calculated confidence interval covers the true parameter value, flagging them with a $1$. If you add up the probabilities for the cases in which the confidence interval does cover the true value you get the actual coverage, $0.989065$. For different true values of $\pi$, the actual coverage will be different:
The nominal coverage is only achieved when the true parameter values coincide with the obtainable upper bounds.
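The table and the coverage value above can be reproduced with a short script (an added sketch, not part of the original answer); the upper bound is found by solving $\Pr(X \le x \mid \pi) = 0.05$ by bisection:

```python
from math import comb

n, level = 6, 0.95

def pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

def upper_bound(x, n, level=0.95, tol=1e-12):
    """Exact one-sided upper bound: the pi solving Pr(X <= x | pi) = 1 - level,
    found by bisection (the binomial CDF is decreasing in pi)."""
    if x == n:
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        cdf = sum(pmf(k, n, mid) for k in range(x + 1))
        lo, hi = (mid, hi) if cdf > 1 - level else (lo, mid)
    return (lo + hi) / 2

bounds = [upper_bound(x, n) for x in range(n + 1)]
assert abs(bounds[0] - 0.3930378) < 1e-6        # matches the table's first row
assert abs(bounds[2] - 0.7286616) < 1e-6

def coverage(pi):
    """Actual coverage at true value pi: the total probability of the
    outcomes x whose interval [0, pi_U(x)] contains pi."""
    return sum(pmf(x, n, pi) for x in range(n + 1) if bounds[x] >= pi)

assert abs(coverage(0.7) - 0.989065) < 1e-6     # the value derived above
# The actual coverage never drops below the nominal 95%, for any pi:
assert all(coverage(i / 1000) >= level - 1e-9 for i in range(1, 1000))
```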
[I just re-read your question & noticed that the author says the actual may be less than the nominal coverage probability. So I reckon they're talking about an approximate method for calculating the confidence interval, though what I said above still goes. The graph might suggest reporting an average confidence level of about $98\%$ but—averaging over values of an unknown parameter?]
† Exact in the sense that the actual coverage is never less than the nominal coverage for any value of $\pi$, & equal to it for some values of $\pi$— @Unwisdom's sense, not @Stephane's.
‡ Intervals with upper & lower bounds are more commonly used of course; but a little more complicated to explain, & there's only one exact interval to consider with just an upper bound. (See Blaker (2000), "Confidence curves and improved exact confidence intervals for discrete distributions", Canadian Journal of Statistics, 28, 4 & the references.)
27,883 | Is calculating "actual coverage probability" the same thing as calculating a "credible interval"? | I think the difference is actually about the use of approximations made when calculating confidence intervals. For example if we use the fairly standard CI of
$$\text{estimate}\pm 1.96 \times \text {estimated standard error}$$
We may call this a "95% confidence interval". However, it is usually the case that several approximations are made here. If we don't make the approximations, then we can calculate the actual coverage. A typical situation is underestimating the standard error. Then the intervals are too narrow to capture the true value with 95% probability. They might only capture the true value with, say, 85% probability.
The "actual coverage" probability might be calculated using a monte carlo simulation of some kind (eg generate $1000$ sample data sets using a chosen true value, then calculate 95% CI for each, and find that $850$ actually contained the true value). | Is calculating "actual coverage probability" the same thing as calculating a "credible interval"? | I think the difference is actually about the use of approximations made when calculating confidence intervals. For example if we use the fairly standard CI of
27,884 | How to include a linear and quadratic term when also including interaction with those variables? | When including polynomials and interactions between them, multicollinearity can be a big problem; one approach is to look at orthogonal polynomials.
Generally, orthogonal polynomials are a family of polynomials which are orthogonal with
respect to some inner product.
So for example in the case of polynomials over some region with weight function $w$, the
inner product is $\int_a^bw(x)p_m(x)p_n(x)dx$ - orthogonality makes that inner product $0$
unless $m=n$.
The simplest example for continuous polynomials is the Legendre polynomials, which have
constant weight function over a finite real interval (commonly over $[-1,1]$).
In our case, the space (the observations themselves) is discrete, and our weight function is also constant (usually), so the orthogonal polynomials are a kind of discrete equivalent of Legendre polynomials. With the constant included in our predictors, the inner product is simply $p_m(x)^Tp_n(x) = \sum_i p_m(x_i)p_n(x_i)$.
For example, consider $x = 1,2,3,4,5$
Start with the constant column, $p_0(x) = x^0 = 1$. The next polynomial is of the form $ax-b$, but we're not worrying about scale at the moment, so $p_1(x) = x-\bar x = x-3$. The next polynomial would be of the form $ax^2+bx+c$; it turns out that $p_2(x)=(x-3)^2-2 = x^2-6x+7$ is orthogonal to the previous two:
x p0 p1 p2
1 1 -2 2
2 1 -1 -1
3 1 0 -2
4 1 1 -1
5 1 2 2
Frequently the basis is also normalized (producing an orthonormal family) - that is, the sum of squares of each term is set to be some constant (say, to $n$, or to $n-1$, so that the standard deviation is 1, or perhaps most frequently, to $1$).
Ways to orthogonalize a set of polynomial predictors include Gram-Schmidt orthogonalization, and Cholesky decomposition, though there are numerous other approaches.
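For instance, QR factorization (numerically, a Gram-Schmidt-style orthogonalization of the columns) of the raw polynomial design matrix reproduces, up to scale and sign, the $p_0, p_1, p_2$ built above. A Python sketch (the worked example below uses R's poly):

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5.])
V = np.vander(x, 3, increasing=True)   # raw basis: columns 1, x, x^2
Q, _ = np.linalg.qr(V)                 # orthonormalize the columns in order

print(np.round(Q.T @ Q, 9))            # identity: the columns are orthonormal

# The last column is parallel to the hand-built p2 = (2, -1, -2, -1, 2):
p2 = np.array([2., -1., -2., -1., 2.])
print(abs(Q[:, 2] @ p2))               # equals ||p2|| = sqrt(14), so parallel
```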
Some of the advantages of orthogonal polynomials:
1) multicollinearity is a nonissue - these predictors are all orthogonal.
2) The low-order coefficients don't change as you add terms. If you fit a degree $k$ polynomial via orthogonal polynomials, you know the coefficients of a fit of all the lower order polynomials without re-fitting.
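Point 2 can be checked numerically (a Python sketch with made-up data, standing in for the R machinery): with the raw basis the low-order coefficients shift when $x^2$ is added, while with an orthonormalized basis they do not:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 1 + 2 * x + 0.3 * x**2 + rng.normal(0, 1, 50)

V1 = np.vander(x, 2, increasing=True)          # columns 1, x
V2 = np.vander(x, 3, increasing=True)          # columns 1, x, x^2
raw1 = np.linalg.lstsq(V1, y, rcond=None)[0]
raw2 = np.linalg.lstsq(V2, y, rcond=None)[0]
print(np.allclose(raw1, raw2[:2]))             # False: coefficients shift

Q, _ = np.linalg.qr(V2)                        # orthonormal polynomial basis
orth1 = Q[:, :2].T @ y                         # degree-1 fit (nested columns)
orth2 = Q.T @ y                                # degree-2 fit
print(np.allclose(orth1, orth2[:2]))           # True: low-order terms unchanged
```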
Example in R (cars data, stopping distances against speed):
Here we consider the possibility that a quadratic model might be suitable:
R uses the poly function to set up orthogonal polynomial predictors:
> p <- model.matrix(dist~poly(speed,2),cars)
> cbind(head(cars),head(p))
speed dist (Intercept) poly(speed, 2)1 poly(speed, 2)2
1 4 2 1 -0.3079956 0.41625480
2 4 10 1 -0.3079956 0.41625480
3 7 4 1 -0.2269442 0.16583013
4 7 22 1 -0.2269442 0.16583013
5 8 16 1 -0.1999270 0.09974267
6 9 10 1 -0.1729098 0.04234892
They're orthogonal:
> round(crossprod(p),9)
(Intercept) poly(speed, 2)1 poly(speed, 2)2
(Intercept) 50 0 0
poly(speed, 2)1 0 1 0
poly(speed, 2)2 0 0 1
Here's a plot of the polynomials:
Here's the linear model output:
> carsp <- lm(dist ~ poly(speed, 2), data = cars)
> summary(carsp)
Call:
lm(formula = dist ~ poly(speed, 2), data = cars)
Residuals:
Min 1Q Median 3Q Max
-28.720 -9.184 -3.188 4.628 45.152
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 42.980 2.146 20.026 < 2e-16 ***
poly(speed, 2)1 145.552 15.176 9.591 1.21e-12 ***
poly(speed, 2)2 22.996 15.176 1.515 0.136
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 15.18 on 47 degrees of freedom
Multiple R-squared: 0.6673, Adjusted R-squared: 0.6532
F-statistic: 47.14 on 2 and 47 DF, p-value: 5.852e-12
Here's a plot of the quadratic fit:
27,885 | How to include a linear and quadratic term when also including interaction with those variables? | I don't feel that centering is worth the trouble, and centering makes the interpretation of parameter estimates more complex. If you use modern matrix algebra software, algebraic collinearity is not a problem. Your original motivation of centering to be able to interpret main effects in the presence of interaction is not a strong one. Main effects when estimated at any automatically chosen value of a continuous interacting factor are somewhat arbitrary, and it's best to think of this as a simple estimation problem by comparing predicted values. In the R rms package contrast.rms function, for example, you can obtain any contrast of interest independent of variable codings. Here is an example of a categorical variable x1 with levels "a" "b" "c" and a continuous variable x2, fitted using a restricted cubic spline with 4 default knots. Different relationships between x2 and y are allowed for different x1. Two of the levels of x1 are compared at x2=10.
require(rms)
dd <- datadist(x1, x2); options(datadist='dd')
f <- ols(y ~ x1 * rcs(x2,4))
contrast(f, list(x1='b', x2=10), list(x1='c', x2=10))
# Now get all comparisons with c:
contrast(f, list(x1=c('a','b'), x2=10), list(x1='c', x2=10))
# add type ='joint' to get a 2 d.f. test, or conf.type='simultaneous'
# to get simultaneous individual confidence intervals
With this approach you can also easily estimate contrasts at several values of the interacting factor(s), e.g.
contrast(f, list(x1='b', x2=10:20), list(x1='c', x2=10:20))
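The idea behind contrast(), comparing predicted values rather than interpreting raw coefficients, can be sketched without rms. Below is a hypothetical Python analogue in which x1 is collapsed to a binary indicator and the spline to a plain linear interaction (the rms version above additionally handles factors, splines, and simultaneous intervals):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
g = rng.integers(0, 2, n).astype(float)   # binary stand-in for x1
x2 = rng.uniform(0, 20, n)
y = 1 + 0.5 * g + 0.2 * x2 + 0.3 * g * x2 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), g, x2, g * x2])   # design with interaction
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])
cov = s2 * np.linalg.inv(X.T @ X)                  # coefficient covariance

# Contrast: prediction at (g=1, x2=10) minus prediction at (g=0, x2=10)
c = np.array([0., 1., 0., 10.])                    # difference of design rows
est = c @ beta
se = np.sqrt(c @ cov @ c)
print(est, (est - 1.96 * se, est + 1.96 * se))     # estimate near 0.5 + 0.3*10
```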
27,886 | Partial Correlation Interpretation | For understanding this I always prefer the cholesky-decomposition of the correlation-matrix.
Assume the correlation matrix R of the three variables $X, Y, Z$ is
$$ \text{ R =} \left[ \begin{array} {rrr}
1.00& -0.29& -0.45\\
-0.29& 1.00& 0.93\\
-0.45& 0.93& 1.00
\end{array} \right]
$$
Then the cholesky-decomposition L is
$$ \text{ L =} \left[ \begin{array} {rrr}
X\\ Y \\ Z \end{array} \right] = \left[ \begin{array} {rrr}
1.00& 0.00& 0.00\\
-0.29& 0.96& 0.00\\
-0.45& 0.83& 0.32
\end{array} \right]
$$
The matrix L in effect gives the coordinates of the three variables in a Euclidean space, if the variables are seen as vectors from the origin, where the x-axis is identified with the variable/vector X and so on.
Then the correlation of X and Y is $\newcommand{\corr}{\rm corr} \corr(X,Y)=x_1 y_1 + x_2 y_2 + x_3 y_3$, and we see immediately that it is $\corr(X,Y)=-0.29$ because of the zeros and the unit factor. We also see immediately that $\corr(X,Z)=-0.45$, again because of the zeros and the unit cofactor. However, the correlation between Y and Z is $\corr(Y,Z) = -0.29 \cdot -0.45 + 0.96 \cdot 0.83$. The part of this in which no variance from the X-variable is present is $0.96 \cdot 0.83$; normalizing by the lengths of the two residual parts, $0.96$ and $\sqrt{0.83^2+0.32^2} \approx 0.89$, gives the partial correlation (after X is removed) $\corr(Y,Z)._X = 0.83/\sqrt{0.83^2+0.32^2} \approx 0.93$. Now imagine the value $0.83$ were $-0.83$ instead. Then the partial correlation would be negative, and the correlation between Y and Z would be $ 0.29 \cdot 0.45 - 0.96 \cdot 0.83$.
What we see is that the partial correlations are partly independent of the overall correlations (though within some bounds).
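A quick numerical check of this geometry (a Python sketch): the Cholesky factor gives the coordinates above, and correlating the residual parts (the coordinates orthogonal to the X axis) agrees with the standard partial-correlation formula:

```python
import numpy as np

R = np.array([[ 1.00, -0.29, -0.45],
              [-0.29,  1.00,  0.93],
              [-0.45,  0.93,  1.00]])
L = np.linalg.cholesky(R)                  # lower triangular, R = L @ L.T
print(np.round(L, 2))                      # close to the rounded matrix in the text

# Partial correlation of Y and Z given X: drop the X coordinate, renormalize
y_res, z_res = L[1, 1:], L[2, 1:]
r_chol = y_res @ z_res / (np.linalg.norm(y_res) * np.linalg.norm(z_res))

# Standard formula for comparison
r_formula = (R[1, 2] - R[0, 1] * R[0, 2]) / np.sqrt(
    (1 - R[0, 1] ** 2) * (1 - R[0, 2] ** 2))
print(round(r_chol, 3), round(r_formula, 3))   # both ~0.935
```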
27,887 | Partial Correlation Interpretation | @Gottfried Helms has given you a good answer. If you're looking for a slightly more intuitively-accessible interpretation, the standard answer is this: Imagine regressing A onto C and B onto C, and in both cases saving the residuals. The partial correlation of A and B controlling for C is the correlation between those two sets of residuals. In other words, it indexes the strength of the linear association between the portion of the variability in A and B that cannot be accounted for by recourse to variability in C. This can be contrasted with the part (or semi-partial) correlation, in which the residuals for one of A or B are correlated with the other full variable. For an example of how it can be used: showing that the partial correlation between A and B controlling for C is zero can be part of an argument that the relationship between A and B is fully mediated by C (although this approach will only work in the simplest case; see Baron & Kenny (1986), and Kenny's mediation webpage). If you want a little more information about these topics, I discuss it here, there's a decent Wikipedia page, and I'm particularly fond of this webpage.
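That recipe is easy to verify numerically (a Python sketch with simulated data, not part of the original answer): the correlation of the two sets of residuals matches the closed-form partial-correlation formula:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
c = rng.normal(size=n)
shared = rng.normal(size=n)                       # relates A and B beyond C
a = 0.8 * c + shared
b = -0.5 * c + 0.5 * shared + rng.normal(size=n)

def residuals(y, x):
    """Residuals from OLS of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

r_resid = np.corrcoef(residuals(a, c), residuals(b, c))[0, 1]

r = np.corrcoef([a, b, c])                        # sample correlation matrix
r_formula = (r[0, 1] - r[0, 2] * r[1, 2]) / np.sqrt(
    (1 - r[0, 2] ** 2) * (1 - r[1, 2] ** 2))
print(r_resid, r_formula)                         # identical up to rounding
```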
27,888 | Which SVM kernel to use for a binary classification problem? | You've actually hit on something of an open question in the literature. As you say, there are a variety of kernels (e.g., linear, radial basis function, sigmoid, polynomial), each of which will perform your classification task in a space defined by its respective equation. To my knowledge, no one has definitively shown that one kernel always performs best on one type of text classification task versus another.
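For reference, the kernels mentioned have simple closed forms; this Python sketch (with arbitrary parameter values, not tuned for any task) just evaluates them on toy data:

```python
import numpy as np

def linear_kernel(X, Y):
    return X @ Y.T

def rbf_kernel(X, Y, gamma=0.5):
    # k(x, y) = exp(-gamma * ||x - y||^2), the radial basis function kernel
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=3, coef0=1.0):
    return (X @ Y.T + coef0) ** degree

def sigmoid_kernel(X, Y, gamma=0.1, coef0=0.0):
    return np.tanh(gamma * (X @ Y.T) + coef0)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
for k in (linear_kernel, rbf_kernel, poly_kernel, sigmoid_kernel):
    K = k(X, X)
    print(k.__name__, K.shape, np.allclose(K, K.T))  # symmetric Gram matrices
```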
One thing to consider is that each kernel function has 1 or more parameters which will need to be optimized for your data set, which means, if you're doing it properly, you should have a second hold-out training collection on which you can investigate the best values for these parameters. (I say a second hold-out collection, because you should already have one which you are using to figure out the best input features for your classifier.) I did an experiment a while back in which I did a large-scale optimization of each of these parameters for a simple textual classification task and found that each kernel appeared to perform reasonably well, but did so at different configurations. If I remember my results correctly, sigmoid performed the best, but did so at very specific parameter tunings--ones which took me over a month for my machine to find. My advice is that, if you have sufficient time and data to do some parameter optimization experiments, it could be interesting to compare the performance of each kernel in your particular classification task, but, if you don't, linear SVM performs reasonably well in text classification, has only the c-parameter to optimize (although many people just leave this at default settings), and will allow you to focus on the aspects of your classification system that will have a greater contribution to final performance--the types of input features you use, and how you model them.
27,889 | Which SVM kernel to use for a binary classification problem? | Try the Gaussian kernel.
The Gaussian kernel is often tried first and turns out to be the best kernel in many applications (with your bag-of-words features, too). You should try the linear kernel, too. Don't expect it to give good results; text-classification problems tend to be non-linear. But it gives you a feeling for your data and you can be happy about how much the non-linearity improves your results.
Make sure you properly cross validate your kernel-width and think about how you want to normalize your features (tf-idf etc).
I would say you can improve your results with a better feature normalisation more than with choosing a different kernel (i.e. not the Gaussian).
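A minimal sketch of the tf-idf normalisation mentioned above (Python, with a toy count matrix; real pipelines typically add smoothing to the idf term):

```python
import numpy as np

# Toy term-count matrix: rows = documents, columns = vocabulary terms
counts = np.array([[3, 0, 1],
                   [0, 2, 2],
                   [1, 1, 0]], dtype=float)

tf = counts / counts.sum(axis=1, keepdims=True)   # term frequency per document
df = (counts > 0).sum(axis=0)                     # document frequency per term
idf = np.log(len(counts) / df)                    # inverse document frequency
tfidf = tf * idf
# L2-normalize each document so kernel values depend on direction, not length
tfidf /= np.linalg.norm(tfidf, axis=1, keepdims=True)
print(np.round(tfidf, 3))
```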
27,890 | What is $P(X_1>X_2 , X_1>X_3,... , X_1>X_n)$? | For $n \gt 2$ this needs numeric integration, as indicated in several of the links.
To be explicit, let $\phi_i$ be the PDF of $X_i$ and $\Phi_i$ be its CDF. Conditional on $X_1 = t$, the chance that $X_1 \gt X_i$ for the remaining $i$ is the product of the individual chances (by independence):
$$\Pr(t \ge X_i, i=2,\ldots,n) = \Phi_2(t)\Phi_3(t)\cdots\Phi_n(t).$$
Integrating over all values of $t$, using the distribution function $\phi_1(t)dt$ for $X_1$, gives the answer
$$= \int_{-\infty}^{\infty} \phi_1(t) \Phi_2(t)\cdots\Phi_n(t)dt.$$
For $n=20$, the integral takes 5 seconds with Mathematica, given vectors $\mu$ and $\sigma$ of the means and SDs of the variables:
\[CapitalPhi] = MapThread[CDF[NormalDistribution[#1, #2]] &, {\[Mu], \[Sigma]}];
\[Phi] = PDF[NormalDistribution[First[\[Mu]], First[\[Sigma]]]];
f[t] := \[Phi][t] Product[i[t], {i, Rest[\[CapitalPhi]]}]
NIntegrate[f[t], {t, -Infinity, Infinity}]
The value can be checked (or even estimated) with a simulation. In the same five seconds it takes to do the integral, Mathematica can do over 2.5 million iterations and summarize their results:
m = 2500000;
x = MapThread[RandomReal[NormalDistribution[#1, #2], m] &, {\[Mu],\[Sigma]}]\[Transpose];
{1, 1./m} # & /@ SortBy[Tally[Flatten[Ordering[#, -1] & /@ x]], First[#] &]
For instance, we can generate some variable specifications at random:
{\[Mu], \[Sigma]} = RandomReal[{0, 1}, {2, n}];
In one case the integral evaluated to $0.152078$; a simulation returned
{{1, 0.152387}, ... }
indicating that the first variable was greatest $0.152387$ of the time, closely agreeing with the integral. (With this many iterations we expect agreement to within a few digits in the fourth decimal place.)
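The same integral-plus-simulation cross-check can be sketched in pure Python (standard library only; the means and SDs below are arbitrary stand-ins for $\mu$ and $\sigma$):

```python
import math, random

mu = [0.3, 0.1, 0.5, 0.2, 0.4]       # hypothetical means
sigma = [1.0, 0.8, 1.2, 0.9, 1.1]    # hypothetical SDs

def phi(t, m, s):                    # normal PDF
    return math.exp(-0.5 * ((t - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def Phi(t, m, s):                    # normal CDF via erf
    return 0.5 * (1 + math.erf((t - m) / (s * math.sqrt(2))))

def integrand(t):                    # phi_1(t) * Phi_2(t) * ... * Phi_n(t)
    p = phi(t, mu[0], sigma[0])
    for m, s in zip(mu[1:], sigma[1:]):
        p *= Phi(t, m, s)
    return p

# Trapezoidal rule over a range wide enough to capture all the mass
lo, hi, steps = -10.0, 10.0, 20_000
h = (hi - lo) / steps
integral = h * (0.5 * (integrand(lo) + integrand(hi))
                + sum(integrand(lo + k * h) for k in range(1, steps)))

# Monte Carlo check, mirroring the Mathematica simulation
random.seed(0)
reps = 100_000
wins = 0
for _ in range(reps):
    xs = [random.gauss(m, s) for m, s in zip(mu, sigma)]
    wins += xs[0] == max(xs)
mc = wins / reps
print(integral, mc)                  # the two estimates agree closely
```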
27,891 | What is $P(X_1>X_2 , X_1>X_3,... , X_1>X_n)$? | My answer to the second question you list
has a simple form of the more general result given by @whuber, but is readily adapted
to the general case. Instead of
$$P(X_1 > \max X_i \mid X_1 = \alpha) = \prod_{i=2}^n P\{X_i < \alpha \mid X_1 = \alpha\}
= \left[\Phi(\alpha)\right]^{n-1}$$
which applies when the $X_i$ are independent $N(0,1)$ random variables, we have
$$P(X_1 > \max X_i \mid X_1 = \alpha) = \prod_{i=2}^n P\{X_i < \alpha \mid X_1 = \alpha\}
= \prod_{i=2}^n \Phi\left(\frac{\alpha-\mu_i}{\sigma_i}\right)$$
since the $X_i$ are independent $N(\mu_i, \sigma_i^2)$ random variables, and instead of
$$P(X_1 > \max X_i)
= \int_{-\infty}^{\infty}\left[\Phi(\alpha)\right]^{n-1}
\phi(\alpha-\mu)\,\mathrm d\alpha$$
we have
$$P(X_1 > \max X_i)
= \int_{-\infty}^{\infty}\prod_{i=2}^n \Phi\left(\frac{\alpha-\mu_i}{\sigma_i}\right)
\frac{1}{\sigma_1}\phi\left(\frac{\alpha-\mu_1}{\sigma_1}\right)\,\mathrm d\alpha$$
where $\Phi(\cdot)$ and $\phi(\cdot)$ are the cumulative distribution function
and probability density function of the standard normal random variable.
This is just whuber's answer expressed in different notation.
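As a numerical sketch of my own (not from the answer), the integral can be evaluated with SciPy and cross-checked by Monte Carlo; the means and standard deviations below are made-up illustrative values:

```python
import numpy as np
from scipy import stats, integrate

mu = np.array([0.3, 0.1, 0.6])     # illustrative values only
sigma = np.array([0.5, 0.8, 0.4])

def p_first_is_max(mu, sigma):
    # integrand: f(a) = phi_1(a) * prod_{i>=2} Phi_i(a)
    def f(a):
        return (stats.norm.pdf(a, mu[0], sigma[0])
                * np.prod(stats.norm.cdf(a, mu[1:], sigma[1:])))
    val, _ = integrate.quad(f, -np.inf, np.inf)
    return val

p = p_first_is_max(mu, sigma)

# Monte Carlo cross-check
rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, size=(200_000, 3))
p_mc = np.mean(np.argmax(x, axis=1) == 0)
print(p, p_mc)  # the two estimates should agree to about 2-3 decimals
```

In the symmetric case ($\mu_i$ and $\sigma_i$ all equal) the result is $1/n$ by symmetry, which makes a handy sanity check.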
The complementary probability
$P(X_1 < \max X_i) = P\{(X_1 < X_2) \cup \cdots \cup (X_1 < X_n)\}$ can also
be bounded above by the union bound discussed in my answer to the other question.
We have that
$$\begin{align*}
P(X_1 < \max X_i) &= P\{(X_1 < X_2) \cup \cdots \cup (X_1 < X_n)\}\\
&\leq \sum_{i=2}^n P(X_1 < X_i)\\
&= \sum_{i=2}^n Q\left(\frac{\mu_1 - \mu_i}{\sqrt{\sigma_1^2 + \sigma_i^2}}\right)
\end{align*}$$
since $X_i-X_1 \sim N(\mu_i-\mu_1,\sigma_i^2+\sigma_1^2)$. Note that
$Q(x) = 1-\Phi(x)$ is the complementary standard normal
distribution function. The union bound is very tight when
$\mu_1 \gg \max \mu_i$ and the variances are roughly
comparable even for large $n$, but for small $n$, the bound
can exceed $1$ and thus be useless.
27,892 | Sign flipping when adding one more variable in regression and with much greater magnitude | Think of this example:
Collect a dataset based on the coins in people's pockets: the y variable/response is the total value of the coins, the variable x1 is the total number of coins, and x2 is the number of coins that are not quarters (or whatever the largest-value common coin is for the locale).
It is easy to see that the regression with either x1 or x2 alone would give a positive slope, but when including both in the model the slope on x2 goes negative, since increasing the number of smaller coins without increasing the total number of coins means replacing large coins with smaller ones, reducing the overall value (y).
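This coin story is easy to simulate; a sketch of my own (quarters and pennies only, with made-up Poisson counts, so that $y = 25x_1 - 24x_2$ holds exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.poisson(5, 500)            # number of quarters (25c each)
p = rng.poisson(5, 500)            # number of pennies (1c each)
y = 25 * q + p                     # total value
x1 = q + p                         # total number of coins
x2 = p                             # coins that are not quarters

def ols_slopes(X, y):
    # OLS with an intercept; return the slope coefficients only
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print(ols_slopes(x2, y))                         # positive on its own
print(ols_slopes(np.column_stack([x1, x2]), y))  # ~[25, -24]: the sign flips
```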
The same thing can happen any time you have correlated x variables: the signs can easily be opposite between when a term is by itself and in the presence of others.
27,893 | Sign flipping when adding one more variable in regression and with much greater magnitude | You have answered your own question - there is collinearity.
A bit of explanation: $x_1$ and $x_2$ are highly collinear. But when you enter both into the regression, the regression is attempting to control for the effect of the other variables. In other words: holding $x_1$ constant, what do changes in $x_2$ do to $y$? But the fact that they are so highly related means that this question is silly, and weird things can happen.
27,894 | Sign flipping when adding one more variable in regression and with much greater magnitude | Why in 3, the sign of β2 becomes positive and much greater than β1 in absolute value? Is there any statistical reason that β2 can flip sign and has large magnitude?
The simple answer is there is no deep reason.
The way to think about it is that when multicollinearity approaches perfect, the specific values that you end up obtaining from the fitting become more and more dependent on smaller and smaller details of the data. If you were to sample the same amount of data from the same underlying distribution and then fit, you could obtain completely different fitted values.
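A resampling sketch of my own makes the instability concrete: with $x_2$ nearly equal to $x_1$, refitting on fresh data from the same process scatters the joint coefficient wildly, while the single-predictor slope barely moves.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_once(n=50):
    x1 = rng.normal(size=n)
    x2 = x1 + 1e-3 * rng.normal(size=n)   # nearly collinear with x1
    y = x1 + 0.5 * rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    s = np.linalg.lstsq(np.column_stack([np.ones(n), x2]), y, rcond=None)[0]
    return b[2], s[1]                     # joint coef on x2, solo slope on x2

joint, solo = zip(*(fit_once() for _ in range(200)))
print(np.std(joint), np.std(solo))        # joint coefficient is wildly unstable
```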
27,895 | How do I generate numbers according to a Soliton distribution? | If we start at $k=2$, the sums telescope, giving $1-1/k$ for the (modified) CDF. Inverting this, and taking care of the special case $k=1$, gives the following algorithm (coded in R, I'm afraid, but you can take it as pseudocode for a Python implementation):
rsoliton <- function(n.values, n=2) {
x <- runif(n.values) # Uniform values in [0,1)
i <- ceiling(1/x) # Modified soliton distribution
i[i > n] <- 1 # Convert extreme values to 1
i
}
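The telescoping step used above, written out: for the modified soliton probabilities $p(k)=\frac{1}{k(k-1)}$, $k \ge 2$,

$$\sum_{j=2}^{k}\frac{1}{j(j-1)} = \sum_{j=2}^{k}\left(\frac{1}{j-1}-\frac{1}{j}\right) = 1-\frac{1}{k},$$

so a uniform draw $x \in [0,1)$ maps to $k=\lceil 1/x \rceil$, and the leftover event $\lceil 1/x\rceil > N$ has probability exactly $1/N$, which is why the code can reassign those draws to $k=1$.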
As an example of its use (and a test), let's draw $10^5$ values for $N=10$:
n.trials <- 10^5
i <- rsoliton(n.trials, n=10)
freq <- table(i) / n.trials # Tabulate frequencies
plot(freq, type="h", lwd=6)
27,896 | How do I generate numbers according to a Soliton distribution? | Python (adapted from @whuber's R solution)
from __future__ import print_function, division
import random
from math import ceil
def soliton(N, seed):
    prng = random.Random()
    prng.seed(seed)
    while 1:
        x = prng.random()             # Uniform values in [0, 1), from the seeded PRNG
        i = int(ceil(1/x))            # Modified soliton distribution
        yield i if i <= N else 1      # Correct extreme values to 1

if __name__ == '__main__':
    N = 10
    T = 10 ** 5                       # Number of trials
    s = soliton(N, random.randint(0, 2 ** 32 - 1))  # soliton generator
    f = [0]*N                         # frequency counter
    for j in range(T):
        i = next(s)
        f[i-1] += 1
    print("k\tFreq.\tExpected Prob\tObserved Prob\n")
    print("{:d}\t{:d}\t{:f}\t{:f}".format(1, f[0], 1/N, f[0]/T))
    for k in range(2, N+1):
        print("{:d}\t{:d}\t{:f}\t{:f}".format(k, f[k-1], 1/(k*(k-1)), f[k-1]/T))
Sample Output
k Freq. Expected Prob Observed Prob
1 9965 0.100000 0.099650
2 49901 0.500000 0.499010
3 16709 0.166667 0.167090
4 8382 0.083333 0.083820
5 4971 0.050000 0.049710
6 3354 0.033333 0.033540
7 2462 0.023810 0.024620
8 1755 0.017857 0.017550
9 1363 0.013889 0.013630
10 1138 0.011111 0.011380
Requirements
The code should work in Python 2 or 3.
27,897 | How to fit a model for a time series that contains outliers | Michael Chernick points you in the right direction. I would also look at Ruey Tsay's work as that added to this body of knowledge. See more here.
You can't compete against today's automated computer algorithms. They look at many ways to approach the time series that you haven't considered and often not documented in any paper or book. When one asks how to do an ANOVA, a precise answer can be expected when comparing against different algorithms. When one asks the question how do I do pattern recognition, many answers are possible as heuristics are involved. Your question involves the use of heuristics.
The best way to fit an ARIMA model, if outliers exist in the data, is to evaluate possible states of nature and to select the approach that is deemed optimal for a particular data set. One possible state of nature is that the ARIMA process is the primary source of explained variation. In this case one would "tentatively identify" the ARIMA process via the acf/pacf functions and then examine the residuals for possible outliers. Outliers can be Pulses, i.e., one-time events, OR seasonal pulses, which are evidenced by systematic outliers at some frequency (say, 12 for monthly data). A third type of outlier is where one has a contiguous set of pulses, each having the same sign and magnitude; this is called a step or level shift. After examining the residuals from the tentative ARIMA process one can then tentatively add the empirically identified deterministic structure to create a tentative combined model.
Now, if the primary source of variation is one of the 4 kinds of "outliers", then one would be better served by identifying them ab initio (first) and then using the residuals from this "regression model" to identify the stochastic (ARIMA) structure.
These two alternative strategies get a little more complicated when one has a "problem" where the ARIMA parameters change over time, or the error variance changes over time due to a number of possible causes, possibly requiring weighted least squares or a power transform like logs / reciprocals, etc. Another complication / opportunity is how and when to form the contribution of user-suggested predictor series, so as to form a seamlessly integrated model incorporating memory, causals and empirically identified dummy series. This problem is further exacerbated when one has trending series best modeled with indicator series of the form $0,0,0,0,1,2,3,4,...$, or $1,2,3,4,5,...n$, and combinations of level-shift series like $0,0,0,0,0,0,1,1,1,1,1$. You might want to try and write such procedures in R, but life is short.
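The indicator series mentioned above are easy to construct; a sketch of my own in numpy (with a made-up event at period 5 of a length-12 series):

```python
import numpy as np

n, t0 = 12, 5                                # series length and event time (illustrative)
t = np.arange(n)

pulse = (t == t0).astype(int)                # one-time event (Pulse)
level_shift = (t >= t0).astype(int)          # 0,...,0,1,1,1,... (step / level shift)
seasonal_pulse = (t % 12 == t0).astype(int)  # fires every 12th period
local_trend = np.maximum(t - 3, 0)           # 0,0,0,0,1,2,3,4,... (trend starting mid-series)

print(pulse, level_shift, local_trend, sep="\n")
```

These columns would simply be appended to the regression design matrix alongside any user-suggested predictors.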
I would be glad to actually solve your problem and demonstrate in this case how the procedure works, please post the data or send it to sales@autobox.com
Additional comment after receiving / analyzing the data / daily data for a foreign exchange rate / 18=765 values starting 1/1/2007
The data had an acf of:
Upon identifying an arma model of the form $(1,1,0)(0,0,0)$ and a number of outliers the acf of the residuals indicates randomness since the acf values are very small. AUTOBOX identified a number of outliers:
The final model:
included the need for a variance stabilization augmentation a la TSAY where variance changes in the residuals were identified and incorporated. The problem that you had with your automatic run was that the procedure you were using, like an accountant, believes the data rather than challenging the data via Intervention Detection (a.k.a., Outlier Detection). I have posted a complete analysis here.
27,898 | How to fit a model for a time series that contains outliers | There is no ready to use robust counterpart to arima function in R (yet); should one appear, it will be listed here. Maybe an alternative is to down-weight those observations that are outlying with respect to a simple univariate outlier detection rule, but I don't see ready to use packages to run weighted ARMA regression either. Another possible alternative would then be to Winsorize the outlying points:
#parameters
para <- list(ar=c(0.6,-0.48), ma=c(-0.22,0.24))
#original series
y1 <- y0 <- arima.sim(n=100, para, sd=sqrt(0.1796))
#outliers
out <- sample(1:100, 20)
#contaminated series
y1[out] <- rnorm(20, 10, 1)
plot( y1, type="l")
lines(y0, col="red")
# "winsorized" series: flagged points get the last clean value carried forward
y2 <- rep(NA, length(y1))
a1 <- (y1-median(y1)) / mad(y1)
a2 <- which(abs(a1)>3)
y2[-a2] <- y1[-a2]
for(i in 2:length(y2)){
if(is.na(y2[i])){ y2[i] <- y2[i-1] }
}
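An actual winsorization (clipping, rather than carrying the last clean value forward as the R snippet above does) is another option; a numpy sketch of my own, using the same 3×MAD flagging rule:

```python
import numpy as np

def winsorize_mad(y, c=3.0):
    """Clip points lying more than c robust SDs from the median."""
    med = np.median(y)
    s = 1.4826 * np.median(np.abs(y - med))   # MAD scaled to ~SD under normality
    return np.clip(y, med - c * s, med + c * s)

rng = np.random.default_rng(0)
y = rng.normal(0, 1, 100)
y[10] = 25.0                                   # plant an outlier
yw = winsorize_mad(y)
print(y.max(), yw.max())                       # the spike is pulled in
```

Unlike last-value replacement, this keeps the series length and timing intact and never depends on whether the first observation is clean.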
27,899 | How to fit a model for a time series that contains outliers | There is a sizable literature on robust time series models. Martin and Yohai are among the major contributors. Their work goes back to the 1980s. I did some work on detecting outliers in time series myself, but Martin was really one of the many contributors to both the detection of outliers and parameter estimation in the presence of outliers or heavy-tailed residuals in time series.
Here is a link to a survey article on the topic with a list of over 100 references. It even includes my 1982 JASA paper.
Here is a 2000 PhD thesis (pdf) that covers the theory, methods and applications of robust time series analysis and includes a nice bibliography.
Here is a link on software that includes some robust time series tools.
27,900 | How to fit a model for a time series that contains outliers | Is the purpose of your model to forecast or to analyze the history? If this is not for forecasting, and you know that these are the outliers, then simply add a dummy variable, which is 1 on those dates and 0 on other dates. This way the dummy coefficients will take care of the outliers, and you'll be able to interpret the other coefficients in the model.
If this is for forecasting, then you have to ask yourself two questions: will these outliers happen again? And if they would, do I have to account for them?
For instance, let's say your data series has outliers when Lehman Brothers went down. It's an event which you have no way of predicting, obviously, yet you can't simply ignore it, because something like this is bound to happen in the future. If you throw in the dummy for outliers, then you effectively remove the uncertainty of this event from the error variance. Your forecast will then underestimate the tail risk - not a good thing, perhaps, for risk management. However, if you are to produce the baseline forecast of sales, the dummy will work, because you're not interested in the tail; you're interested in the most likely scenarios - so you don't have to account for the unpredictable event for this purpose.
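A minimal sketch of my own (plain OLS on a made-up trend series, rather than a full ARIMA) of what the pulse dummy does:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
t = np.arange(n)
y = 0.5 * t + rng.normal(0, 1, n)
y[40] += 25                                  # a one-time shock on date 40

d = (t == 40).astype(float)                  # pulse dummy: 1 on the outlier date
X0 = np.column_stack([np.ones(n), t])        # without the dummy
X1 = np.column_stack([np.ones(n), t, d])     # with the dummy

b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)

s0 = np.std(y - X0 @ b0)
s1 = np.std(y - X1 @ b1)
print(b1[2], s0, s1)  # dummy coefficient absorbs the shock; residual spread drops
```

Note the residual variance shrinks once the dummy soaks up the shock: exactly the effect described above, where prediction intervals built from it will understate tail risk.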
Hence, the purpose of your model impacts the way you deal with outliers.