25,901 | Outlier detection using regression
Your best option to use regression to find outliers is to use robust regression.
Ordinary regression can be impacted by outliers in two ways:
First, an extreme outlier in the y-direction at x-values near $\bar x$ can affect the fit in that area in the same way an outlier can affect a mean.
Second, an 'outlying' observation in x-space is an influential observation - it can pull the fit of the line toward it. If it's sufficiently far away the line will go through the influential point:
In the left plot, there's a point that's quite influential, and it pulls the line quite a way from the large bulk of the data. In the right plot, it's been moved even further away -- and now the line goes through the point. When the x-value is that extreme, as you move that point up and down, the line moves with it, going through the mean of the other points and through the one influential point.
An influential point that's perfectly consistent with the rest of the data may not be such a big problem, but one that's far from a line through the rest of the data will make the line fit it, rather than the data.
If you look at the right-hand plot, the red line - the least squares regression line - doesn't show the extreme point as an outlier at all - its residual is 0. Instead, the large residuals from the least squares line are in the main part of the data!
This means you can completely miss an outlier.
Even worse, with multiple regression, an outlier in x-space may not look particularly unusual for any single x-variable. If there's a possibility of such a point, it's potentially a very risky thing to use least squares regression on.
Robust regression
If you fit a robust line - in particular one robust to influential outliers - like the green line in the second plot - then the outlier has a very large residual.
In that case, you have some hope of identifying outliers - they'll be points that aren't - in some sense - close to the line.
Removing outliers
You certainly can use a robust regression to identify and thereby remove outliers.
But once you have a robust regression fit, one that is already not badly affected by outliers, you don't necessarily need to remove the outliers -- you already have a model that's a good fit.
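The contrast described above (least squares hiding an influential outlier, while a robust fit exposes it) can be sketched with a small simulation. This example is not from the original answer; it uses Python for illustration, with Theil-Sen (median of pairwise slopes) standing in as one simple robust estimator.

```python
import numpy as np
from itertools import combinations

# Clean data on the line y = 2x, plus one influential point far away in x.
x = np.array([0., 1, 2, 3, 4, 5, 6, 7, 8, 9, 30])
y = np.array([0., 2, 4, 6, 8, 10, 12, 14, 16, 18, 0])  # last point is the outlier

# Ordinary least squares fit: the line is pulled toward the influential point
b_ols, a_ols = np.polyfit(x, y, 1)
res_ols = y - (a_ols + b_ols * x)

# Theil-Sen: slope = median of all pairwise slopes, intercept = median residual
slopes = [(y[j] - y[i]) / (x[j] - x[i]) for i, j in combinations(range(len(x)), 2)]
b_ts = np.median(slopes)
a_ts = np.median(y - b_ts * x)
res_ts = y - (a_ts + b_ts * x)

# Under OLS the outlier's residual is smaller than some clean points' residuals;
# under the robust fit it is enormous.
print(abs(res_ols[-1]), abs(res_ts[-1]))
```

Here the robust slope recovers 2 exactly, so the influential point is the only observation far from the robust line, while the least squares residuals spread the damage over the bulk of the data.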
25,902 | Outlier detection using regression
Can regression be used for outlier detection?
Yes. This answer and Glen_b's answer address this.
The primary aim here is not to fit a regression model but to find outliers using regression.
Building on Roman Lustrik's comment, here is a heuristic to find outliers using (multiple linear) regression.
Let's say you have sample size $n$. Then, do the following:
Fit a regression model on all $n$ examples. Note down its residual sum of squares error $r_{total}$.
For each sample $i$, fit a regression model on the $n-1$ examples (excluding example $i$) and note down the corresponding residual sum of squares error $r_i$.
Now, compare $r_i$ with $r_{total}$ for each $i$; if $r_i \ll r_{total}$, then $i$ is a candidate outlier.
Setting these candidate outlier points aside, we can repeat the whole exercise again with the reduced sample. In the algorithm, we are picking examples in the data which are influencing the regression fit in a bad way (which is one way to label an example as an outlier).
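The leave-one-out heuristic above can be sketched in a few lines; the original answer is language-agnostic, so this Python version is just one possible implementation.

```python
import numpy as np

def rss(x, y):
    """Residual sum of squares of an OLS line fit to (x, y)."""
    b, a = np.polyfit(x, y, 1)
    return float(np.sum((y - (a + b * x)) ** 2))

# Clean data on y = 2x, with one gross outlier planted at index 5.
x = np.arange(10, dtype=float)
y = 2 * x
y[5] = 30.0  # the true value would be 10

r_total = rss(x, y)
# Leave-one-out RSS: refit with example i excluded
r = [rss(np.delete(x, i), np.delete(y, i)) for i in range(len(x))]

# Candidate outliers: examples whose removal collapses the RSS
candidates = [i for i in range(len(x)) if r[i] < 0.1 * r_total]
print(candidates)  # -> [5]
```

Removing the planted outlier leaves perfectly collinear points, so its leave-one-out RSS is essentially zero, while removing any clean point barely changes the fit.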
25,903 | Fit a GARCH (1,1) - model with covariates in R
Here is an example of implementation using the rugarch package with some fake data. The function ugarchfit allows for the inclusion of external regressors in the mean equation (note the use of external.regressors in fit.spec in the code below).
To fix notations, the model is
\begin{align*}
y_t &= \lambda_0 + \lambda_1 x_{t,1} + \lambda_2 x_{t,2} + \epsilon_t, \\
\epsilon_t &= \sigma_t Z_t , \\
\sigma_t^2 &= \omega + \alpha \epsilon_{t-1}^2 + \beta \sigma_{t-1}^2 ,
\end{align*}
where $x_{t,1}$ and $x_{t,2}$ denote the covariates at time $t$, with the "usual" assumptions/requirements on the parameters and the innovation process $Z_t$.
The parameter values used in the example are as follows.
## Model parameters
nb.period <- 1000
omega <- 0.00001
alpha <- 0.12
beta <- 0.87
lambda <- c(0.001, 0.4, 0.2)
The image below shows the series of covariates $x_{t,1}$ and $x_{t,2}$ as well as the series $y_t$. The R code used to generate them is provided below.
## Dependencies
library(rugarch)
## Generate some covariates
set.seed(234)
ext.reg.1 <- 0.01 * (sin(2*pi*(1:nb.period)/nb.period))/2 + rnorm(nb.period, 0, 0.0001)
ext.reg.2 <- 0.05 * (sin(6*pi*(1:nb.period)/nb.period))/2 + rnorm(nb.period, 0, 0.001)
ext.reg <- cbind(ext.reg.1, ext.reg.2)
## Generate some GARCH innovations
sim.spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1,1)),
mean.model = list(armaOrder = c(0,0), include.mean = FALSE),
distribution.model = "norm",
fixed.pars = list(omega = omega, alpha1 = alpha, beta1 = beta))
path.sgarch <- ugarchpath(sim.spec, n.sim = nb.period, n.start = 1)
epsilon <- as.vector(fitted(path.sgarch))
## Create the time series
y <- lambda[1] + lambda[2] * ext.reg[, 1] + lambda[3] * ext.reg[, 2] + epsilon
## Data visualization
par(mfrow = c(3,1))
plot(ext.reg[, 1], type = "l", xlab = "Time", ylab = "Covariate 1")
plot(ext.reg[, 2], type = "l", xlab = "Time", ylab = "Covariate 2")
plot(y, type = "h", xlab = "Time")
par(mfrow = c(1,1))
A fit is done with ugarchfit as follows.
## Fit
fit.spec <- ugarchspec(variance.model = list(model = "sGARCH",
garchOrder = c(1, 1)),
mean.model = list(armaOrder = c(0, 0),
include.mean = TRUE,
external.regressors = ext.reg),
distribution.model = "norm")
fit <- ugarchfit(data = y, spec = fit.spec)
Parameter estimates are
## Results review
fit.val <- coef(fit)
fit.sd <- sqrt(diag(vcov(fit))) # standard errors: sqrt of the diagonal of the covariance matrix
true.val <- c(lambda, omega, alpha, beta)
fit.conf.lb <- fit.val + qnorm(0.025) * fit.sd
fit.conf.ub <- fit.val + qnorm(0.975) * fit.sd
> print(fit.val)
# mu mxreg1 mxreg2 omega alpha1 beta1
#1.724885e-03 3.942020e-01 7.342743e-02 1.451739e-05 1.022208e-01 8.769060e-01
> print(fit.sd)
#[1] 4.635344e-07 3.255819e-02 1.504019e-03 1.195897e-10 8.312088e-04 3.375684e-04
And corresponding true values are
> print(true.val)
#[1] 0.00100 0.40000 0.20000 0.00001 0.12000 0.87000
The following figure shows the parameter estimates with 95% confidence intervals, together with the true values. The R code used to generate it is provided below.
plot(c(lambda, omega, alpha, beta), pch = 1, col = "red",
ylim = range(c(fit.conf.lb, fit.conf.ub, true.val)),
xlab = "", ylab = "", axes = FALSE)
box(); axis(1, at = 1:length(fit.val), labels = names(fit.val)); axis(2)
points(coef(fit), col = "blue", pch = 4)
for (i in 1:length(fit.val)) {
lines(c(i,i), c(fit.conf.lb[i], fit.conf.ub[i]))
}
legend( "topleft", legend = c("true value", "estimate", "confidence interval"),
        col = c("red", "blue", 1), pch = c(1, 4, NA), lty = c(NA, NA, 1), inset = 0.01)
25,904 | Does the Central Limit Theorem only work for iid random variables?
There are many versions of the Central Limit Theorem; iid versions tend to be taught because they're relatively easy to prove (e.g. if the MGF exists, that leads to a reasonably simple demonstration of the CLT). Because there are many CLTs, we should be careful of saying "the CLT" unless the one we mean is clear from context.
There are versions of the central limit theorem for cases where the variables are not iid and specifically there are versions where you have independence but the distributions differ. Basically, individual variances can't be too large relative to the rest (in hand-wavy terms, individual variances have to be vanishing fractions of the total, the formal condition depends on which CLT you're looking at).
For example, see the Lyapunov CLT.
There are also cases of the CLT for situations where dependence exists (see later in the same article for some examples).
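The independent-but-not-identically-distributed setting (where Lyapunov/Lindeberg-type conditions hold because no single variance dominates) can be illustrated with a short simulation. This sketch is not part of the original answer; Python is used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Independent but NOT identically distributed: Bernoulli(p_i) with varying p_i.
n = 1000
p = np.linspace(0.1, 0.9, n)           # a different success probability per term
mu = p.sum()                           # mean of the sum
sigma = np.sqrt((p * (1 - p)).sum())   # sd of the sum

# Draw many replicates of the standardized sum
reps = 4000
sums = (rng.random((reps, n)) < p).sum(axis=1)
z = (sums - mu) / sigma

# The standardized sums should be close to standard normal:
# mean near 0, sd near 1, about 95% of mass inside +/- 1.96
print(round(z.mean(), 2), round(z.std(), 2), round(np.mean(np.abs(z) < 1.96), 2))
```

No individual variance here is more than a vanishing fraction of the total, which is the hand-wavy condition described above.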
25,905 | Training a convolution neural network
You need to first calculate all your updates as if the weights weren't shared, but just store them; don't actually do any updating yet.
Let $w_k$ be some weight that appears at locations $I_k = \{(i,j) \colon w_{i,j} = w_k\}$ in your network and $\Delta w_{i,j} = -\eta \frac{\partial J}{\partial w_{i,j}}$ where $\eta$ is the learning rate and $J$ is your objective function. Note that at this point if you didn't have weight sharing you would just update $w_{i,j}$ as
$$
w_{i,j} = w_{i,j} + \Delta w_{i,j}.
$$
To deal with the shared weights you need to sum up all the individual updates. So set
$$
\Delta w_k = \sum_{(i,j) \in I_k} \Delta w_{i,j}
$$
and then update
$$
w_k = w_k + \Delta w_k.
$$
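The accumulate-then-apply rule above can be sketched directly. This is an illustrative Python/NumPy snippet, not part of the original answer; the `groups`/`grad` data structures are hypothetical bookkeeping for which locations share each weight.

```python
import numpy as np

def shared_weight_update(w, groups, grad, eta):
    """One gradient step with weight sharing.

    w      : array of the distinct (shared) weight values w_k
    groups : groups[k] lists the (i, j) locations tied to w_k  (the set I_k)
    grad   : dict mapping each location (i, j) to dJ/dw_ij
    eta    : learning rate
    """
    w = w.copy()
    for k, locs in enumerate(groups):
        # Sum the per-location updates Delta w_ij = -eta * dJ/dw_ij,
        # then apply the total once to the shared weight w_k
        delta_k = sum(-eta * grad[loc] for loc in locs)
        w[k] += delta_k
    return w

# Toy example: one weight shared at two locations
w_new = shared_weight_update(np.array([1.0]),
                             [[(0, 0), (1, 1)]],
                             {(0, 0): 0.2, (1, 1): 0.4},
                             eta=0.5)
print(w_new)  # -> [0.7]
```

The key point is exactly the one in the derivation: the individual updates are computed first as if the weights were untied, and only their sum is applied.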
25,906 | How do I choose parameters for my beta prior?
If your prior belief is that 9 of the 10 coin flips will come up heads, then you want the expectation of your prior to be 0.9. Given $X \sim \mathrm{Beta}(\alpha,\beta)$ (for conjugacy in the beta-binomial model), then $E[X] = \alpha/(\alpha+\beta) = 0.9$, so you can use this as your first constraint. Obviously this leaves you with an infinite number of possibilities, so we'll need a second constraint which will represent your subjective confidence in this prior belief: the variance of your prior, $\operatorname{var}[X] = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$.
Decide how you want to set your variance and solve the system of equations for $\alpha$ and $\beta$ to define the parameters for your prior. Justifying your choice of variance here may be difficult: you can always err on the side of a wider (i.e. less informative) variance. The wider you set the variance, the closer your prior will approximate a uniform distribution.
If you want a truly uninformative prior, you should consider using the Jeffreys prior.
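The two constraints solve in closed form: writing $m$ for the mean and $v$ for the variance, $\alpha+\beta = m(1-m)/v - 1$ and $\alpha = m(\alpha+\beta)$. A small sketch (the variance value 0.01 below is a hypothetical confidence choice, not from the original answer):

```python
def beta_params(mean, var):
    """Solve E[X] = a/(a+b) = mean and var[X] = ab/((a+b)^2 (a+b+1)) = var."""
    nu = mean * (1 - mean) / var - 1  # nu = alpha + beta; requires var < mean*(1-mean)
    return mean * nu, (1 - mean) * nu

# Prior belief: mean 0.9; choose a variance expressing your confidence in it
a, b = beta_params(0.9, 0.01)
print(a, b)  # approximately 7.2 and 0.8
```

Hand-checking: $7.2/8 = 0.9$ and $7.2 \cdot 0.8/(8^2 \cdot 9) = 0.01$, so both constraints are satisfied. Choosing a larger `var` shrinks $\alpha+\beta$, i.e. a flatter, less informative prior.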
25,907 | How does a uniform prior lead to the same estimates from maximum likelihood and mode of posterior?
It is a uniform distribution (either continuous or discrete).
See also
http://en.wikipedia.org/wiki/Point_estimation#Bayesian_point-estimation
and
http://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation#Description
If you use a uniform prior on a set that contains the MLE, then MAP=MLE always. The reason for this is that under this prior structure, the posterior distribution and the likelihood are proportional.
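The proportionality argument can be made concrete on a grid. This is an illustrative Python sketch (binomial data, not from the original answer): with a uniform prior the posterior is a rescaled likelihood, so both have their mode at the same point.

```python
import numpy as np
from math import comb

# Binomial data: k successes in n trials; grid over theta in (0, 1).
k, n = 7, 10
theta = np.linspace(0.001, 0.999, 999)

likelihood = comb(n, k) * theta**k * (1 - theta)**(n - k)
prior = np.ones_like(theta)           # uniform prior on the grid
posterior = likelihood * prior
posterior /= posterior.sum()          # normalizing doesn't move the mode

mle = theta[np.argmax(likelihood)]
map_ = theta[np.argmax(posterior)]
print(mle, map_)  # both are the grid point closest to k/n = 0.7
```

Multiplying by a constant prior and normalizing only rescales the curve, so `argmax` is unchanged: MAP = MLE.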
25,908 | How does a uniform prior lead to the same estimates from maximum likelihood and mode of posterior?
MLE maximizes the probability of the observed data given a parameter, whereas MAP is the estimate of a parameter given the data. When we use Bayes' theorem while estimating the MAP, it boils down to maximizing $P(D\mid\theta)P(\theta)$, where $P(\theta)$ is the only additional term with respect to MLE. The mean and variance estimates from MAP will be the same as those from MLE, because a uniform prior remains the same every time and does not change at all. Thus it only acts as a constant and plays no role in affecting the value of the mean and variance.
25,909 | Propensity score weighting in Cox PH analysis and covariate selection
In theory, every variable you select as part of the propensity score weight need not be included as covariates in the model, because the weighting has already controlled for their potential confounding. With a proper weighting model you can, quite literally, just model the effect of the exposure.
That being said, there are reasons you may wish to include terms in the model:
"Doubly robust" estimates. There is no reason, save for a loss in precision, that you cannot use variables both in the weighting model and as covariates. In theory, you are protecting yourself against confounding two ways (hence this technique being referred to as "doubly robust"). Keep in mind this only protects you against either the PS model or the covariate model being misspecified by giving you a "second chance" to specify the correct model, it isn't a magic fix-all.
Multiple estimates of interest. Weighting makes the effect estimates from the covariates disappear - if you want a regression coefficient for the variable, you're going to want to include it as a covariate in the CoxPH step and not in the PS model.
Try searching for "Doubly robust" and similar terms in journals like Epidemiology or The American Journal of Epidemiology as well as the biostatistical literature and you should uncover some useful sources.
25,910 | Propensity score weighting in Cox PH analysis and covariate selection
It is important to distinguish "affected by treatment" and "related to treatment". The latter can include treatment selection factors such as the ones we are trying to adjust for with propensity and/or covariate adjustment. "Affected by treatment" implies that the covariates are measured after time zero (e.g., after randomization or after treatment start), which means they should seldom be used.
25,911 | Simple approximation of Poisson cumulative distribution in long tail?
A Poisson distribution with large mean is approximately normal, but you have to be careful that you want a tail bound and the normal approximation is proportionally less accurate near the tails.
One approach used in this MO question and with binomial distributions is to recognize that the tail decreases more rapidly than a geometric series, so you can write an explicit upper bound as a geometric series.
$$\begin{eqnarray}\sum_{k=D}^\infty \exp(-\mu)\frac{\mu^k}{k!} & \lt & \sum_{k=D}^\infty \exp(-\mu) \frac{\mu^D}{D!}\bigg(\frac \mu{D+1}\bigg)^{k-D} \\ & = & \exp(-\mu)\frac{\mu^D}{D!}\frac{1}{1-\frac{\mu}{D+1}} \\ & \lt & \exp(-\mu) \frac{\mu^D}{\sqrt{2\pi D}(D/e)^D} \frac{1}{1-\frac{\mu}{D+1}} \\ & = & \exp(D-\mu) \bigg(\frac{\mu}{D}\bigg)^D \frac{D+1}{\sqrt{2\pi D} (D+1-\mu)}\end{eqnarray} $$
Line 2 $\to$ line 3 uses Stirling's formula (the lower bound $D! > \sqrt{2\pi D}\,(D/e)^D$). In practice I think you then want to solve $-p \log 2 = \log(\text{bound})$ numerically using binary search. Newton's method starting with an initial guess of $D = \mu + c \sqrt{\mu}$ should also work.
For example, with $p=100$ and $\mu = 1000$, the numerical solution I get is 1384.89. A Poisson distribution with mean $1000$ takes the values from $0$ through $1384$ with probability $1-1/2^{100.06}.$ The values $0$ through $1383$ occur with probability $1-1/2^{99.59}.$
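The closing bound and the numerical solve can be sketched in a few lines of Python (a minimal sketch; the function names are my own). It evaluates the final bound on a log2 scale and binary-searches for the $D$ where the bound crosses $2^{-p}$:

```python
import math

def log2_tail_bound(D, mu):
    """log2 of exp(D - mu) * (mu/D)**D * (D + 1) / (sqrt(2*pi*D) * (D + 1 - mu)),
    the final upper bound above on P(X >= D) for X ~ Poisson(mu); requires D > mu."""
    ln_b = ((D - mu) + D * math.log(mu / D) + math.log(D + 1)
            - 0.5 * math.log(2 * math.pi * D) - math.log(D + 1 - mu))
    return ln_b / math.log(2)

def solve_D(p, mu):
    """Binary search for the D where the bound equals 2**-p.  The bound is
    decreasing in D for D > mu (the derivative of the leading terms is
    log(mu/D) < 0), so bisection is safe."""
    lo, hi = mu + 1e-9, mu + 100.0 * math.sqrt(mu)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if log2_tail_bound(mid, mu) > -p:
            lo = mid
        else:
            hi = mid
    return hi

D = solve_D(100, 1000)
print(round(D, 2))  # close to the 1384.89 quoted in the example
```

Since the geometric-series bound is conservative, the true $2^{-p}$ quantile can only be smaller than the $D$ this returns.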
25,912 | Simple approximation of Poisson cumulative distribution in long tail? | You may see P. Harremoës: Sharp Bounds on Tail Probabilities for Poisson Random Variables https://helda.helsinki.fi/bitstream/handle/10138/229679/witmse_proc_17.pdf
The main inequalities there are as follows. Let $Y$ be a Poisson random variable with parameter $\lambda$. Put
$$G(x)=
\sqrt{2\left(x\ln \frac{x}{\lambda} +\lambda-x\right)}
\ \ {\rm sign} \left(x-\lambda\right).$$
Let $\Phi$ denote the cumulative distribution function for the standard normal law. Then, for all integer $k\ge 0$,
$${\bf P}(Y<k)\le \Phi(G(k)) \le {\bf P}(Y\le k),$$
which is equivalent to
$$\Phi(G(k-1)) \le {\bf P}(Y<k)\le \Phi(G(k))$$
for all integer $k>0$.
Moreover, $\Phi(G(k+(1/2))) \le {\bf P}(Y\le k)$ which implies that
$$\Phi(G(k-1/2)) \le {\bf P}(Y<k)\le \Phi(G(k))$$
for all integer $k>0$.
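These inequalities are easy to sanity-check numerically; here is a small Python sketch (exact CDF by direct summation, $\Phi$ via math.erf) that verifies the sandwich $P(Y<k)\le \Phi(G(k)) \le P(Y\le k)$ for $\lambda=10$:

```python
import math

def G(x, lam):
    # x*ln(x/lam) is taken as its limit 0 at x = 0; the argument of the
    # square root is twice a KL divergence, hence always >= 0
    t = (x * math.log(x / lam) if x > 0 else 0.0) + lam - x
    return math.copysign(math.sqrt(2 * t), x - lam)

def Phi(z):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_cdf(k, lam):  # P(Y <= k) by direct summation
    term, total = math.exp(-lam), 0.0
    for i in range(k + 1):
        total += term
        term *= lam / (i + 1)
    return total

lam = 10.0
for k in range(1, 31):
    # P(Y < k) <= Phi(G(k)) <= P(Y <= k)
    assert poisson_cdf(k - 1, lam) <= Phi(G(k, lam)) <= poisson_cdf(k, lam)
print("bounds hold for k = 1..30 at lambda = 10")
```

At $k=\lambda$ the bound gives exactly $\Phi(0)=1/2$, which indeed sits between $P(Y<\lambda)$ and $P(Y\le\lambda)$.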
25,913 | Good books on text mining? | Check out
http://lintool.github.com/MapReduceAlgorithms/MapReduce-book-final.pdf
Data-Intensive Text Processing with MapReduce - this book is fairly academic but covers a number of commonly used text processing techniques and how they can be parallelised over large datasets using MapReduce.
www.rtexttools.com
This is an excellent R package which helps you to apply a wide range of classification algorithms (including some ensemble methods) to text analytics.
and
25,914 | Good books on text mining? | I have recently read four books in this field:
Feldman, R. and Sanger, J. (2006). The Text Mining Handbook: Advanced Approaches in Analyzing Unstructured Data. Cambridge University Press.
This one focuses on practical examples, software and applied text mining. It gives multiple examples of practical usage of text-mining. It could be of interest if you want to read about commercial applications of text-mining tools.
Srivastava, A.N. and Sahami, M. (2009). Text Mining: Classification, Clustering, and Applications. Chapman & Hall/CRC.
It is a series of research papers that are used as examples of usage of different text-mining tools. It is rather too focused for an introductory text.
Weiss, S.M., Indurkhya, N., Zhang, T. and Damerau, F. (2005). Text Mining: Predictive Methods for Analyzing Unstructured Information. Springer.
Very introductory text that describes some general issues.
Manning, C. (1999). Foundations of Statistical Natural Language Processing. MIT Press.
This is the best book that I have read on this topic. It is well written, clear, and goes deeper into theory but in a practice-friendly way. It starts with a general introduction, but then reviews some of the most commonly used methods and algorithms. If you had to choose only a single book, I would recommend this one.
You could also easily find multiple books on natural language processing and text mining that focus on using R (tm library) or Python (nltk library).
25,915 | Good books on text mining? | This might not be exactly on point for what you are looking for, but Mastering Regular Expressions by Jeffrey Friedl is a great source for learning how to use regular expressions to parse text. He doesn't discuss modeling techniques, but, armed with counts from applying regular expressions, you could apply a variety of standard modeling approaches.
25,916 | Good books on text mining? | One book I go back to time and again for ideas is Text Mining: Predictive Methods... by Sholom Weiss. It has lots of ideas for approaching problems which I find useful since sometimes text mining is about trying different things - Global vs Local dictionary, number of features to keep, etc. I find this book to be a good idea generator. It also has case studies.
25,917 | Good books on text mining? | I suggest the free NLP book at http://www.nltk.org/, which couples with the NLTK library in Python. All the best.
25,918 | What are the degrees of freedom of a distribution? | Here is a less technical answer, perhaps more accessible to people with modest mathematical preparation.
The term degrees of freedom (df) is used in connection with various test statistics but its meaning varies from one statistical test to the next. Some tests do not have degrees of freedom associated with the test statistic (e.g., Fisher's Exact Test or the z test). When we do a z test, the z value we calculate based on our data can be interpreted based on a single table of critical z values, no matter how large or small our sample(s). Another way to say this is that there is one z distribution. That is not so for some other tests (e.g., F or t or χ2).
The reason many test statistics need to be interpreted in light of df is that the (theoretical) distribution of values of the test statistic, assuming the null hypothesis is true, depends on sample size or number of groups, or both, or some other fact about the data gathered. In doing a t-test, the distribution of t values depends on the sample size, so when we evaluate the t value we calculate from the observed data we need to compare it to t values expected based on the same sample size as our data. Similarly, the distribution of values of F in an Analysis of Variance (assuming the null hypothesis is true) depends on both sample size and the number of groups. So to interpret the F value we calculate from our data we need to use tables of F values that are based on the same sample size and the same number of groups as we have in our data. Saying this differently, F tests (i.e., ANOVAs) and t-tests and χ2 tests each require a family of curves to help us interpret the t or F or χ2 value we calculate based on our data. We choose from among these families of curves based on values (i.e. df's) so that the probabilities we read from the tables are appropriate for our data. (Of course, most computer programs do this for us.)
25,919 | What are the degrees of freedom of a distribution? | The F distribution is the ratio of two independent central chi-square random variables, each divided by its degrees of freedom. The m is the degrees of freedom associated with the chi-square random variable that represents the numerator and the n is the degrees of freedom of the chi-square for the denominator. To complete the answer to your question I need to explain the chi-square degrees of freedom. A chi-square distribution with n degrees of freedom can be represented as the sum of squares of n independent N(0,1) random variables. So the degrees of freedom can be looked at as the number of normal random variables that appear in the sum.
Now this will change if these normals involve estimated parameters. Suppose for example we have n independent N(m,1) random variables X$_i$, i=1,2,...,n. Then let X$_b$ be the sample mean = ∑X$_i$/n.
Now compute S$^2$ = ∑(X$_i$-X$_b$)$^2$. This S$^2$ will have a chi-square distribution but with n-1 degrees of freedom. In this case we are still summing n squared normal random variables. But the difference here is that they are not independent because each one is formed using the same X$_b$. So for the chi-square it is often said that the degrees of freedom equals the number of terms in the sum minus the number of parameters estimated.
In the case of the t distribution we have a N(0,σ$^2$) divided by V, where V is the sample estimate of σ. V$^2$ is proportional to a chi-square with n-1 degrees of freedom, where n is the sample size. The degrees of freedom for the t is the degrees of freedom for the chi-square random variable that is involved in the calculation of V.
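The n-1 claim is easy to see by simulation: the average of S$^2$ = ∑(X$_i$-X$_b$)$^2$ over many samples equals n-1, the mean of the corresponding chi-square, not n. A quick seeded Python sketch (the particular n and replication count are arbitrary choices of mine):

```python
import random

random.seed(0)
n, reps, m = 5, 20000, 3.0
total = 0.0
for _ in range(reps):
    x = [random.gauss(m, 1.0) for _ in range(n)]
    xbar = sum(x) / n                      # estimating the mean costs one df
    total += sum((xi - xbar) ** 2 for xi in x)

mean_S2 = total / reps
print(mean_S2)  # close to n - 1 = 4, not n = 5
```

Replacing xbar with the true mean m in the sum would move the average up to n, which is the "number of parameters estimated" effect described above.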
25,920 | Mean$\pm$SD or Median$\pm$MAD to summarise a highly skewed variable? | I don't think median $\pm$ mad is appropriate in general.
You can easily build distributions where 50% of the data are fractionally lower than the median, and 50% of the data are spread out much greater than the median - e.g. (4.9,4.9,4.9,4.9,5,1000000,1000000,100000,1000000). The 5 $\pm$ 0.10 notation seems to suggest that there's some mass around (median + mad ~= 5.10), and that's just not always the case, and you've got no idea that there's a big mass over near 1000000.
Quartiles/quantiles give a much better idea of the distribution at the cost of an extra number - (4.9, 5.0, 1000000.0). I doubt it's entirely a coincidence that skewness is the third moment and that I seem to need three numbers/dimensions to intuitively visualize a skewed distribution.
That said, there's nothing wrong with it per se - I'm just arguing intuitions and readability here. If you're using it for yourself or your team, go crazy. But I think it would confuse a broad audience.
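The example above is easy to verify in Python with the standard statistics module: median ± MAD comes out as 5 ± 0.1 with no hint of the upper mass, while the quartiles expose it.

```python
from statistics import median, quantiles

data = [4.9, 4.9, 4.9, 4.9, 5, 1000000, 1000000, 100000, 1000000]

med = median(data)
mad = median(abs(x - med) for x in data)   # median absolute deviation
q1, q2, q3 = quantiles(data, n=4)          # default 'exclusive' quartile method

print(med, round(mad, 2))  # 5 0.1 -- nothing hints at the mass near 1e6
print(q1, q2, q3)          # 4.9 5.0 1000000.0 -- the quartiles reveal it
```

(Exact quartile values depend on the interpolation method; the default here happens to reproduce the (4.9, 5.0, 1000000.0) triple quoted above.)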
25,921 | Mean$\pm$SD or Median$\pm$MAD to summarise a highly skewed variable? | Using the MAD amounts to assuming that the underlying distribution is symmetric (deviations above the median and below the median are considered equally). If your data is skewed this is clearly wrong: it will lead you to overestimate the true variability of your data.
Fortunately, you can choose one of several alternatives to the MAD that are equally robust, almost as easy to compute, and that do not assume symmetry.
Have a look at Rousseeuw and Croux 1992. These concepts are well explained here and implemented here. These two estimators are members of the so-called class of U-statistics, for which there is a well developed theory.
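For illustration, one of the Rousseeuw-Croux estimators, $S_n = c \cdot \mathrm{med}_i\,\mathrm{med}_j\,|x_i - x_j|$, can be sketched naively in Python. This is a simplified O(n$^2$) version using plain medians (the exact definition uses high/low medians with finite-sample corrections and excludes the $j=i$ term); $c \approx 1.1926$ is the consistency constant for the normal model:

```python
from statistics import median

def Sn(x, c=1.1926):
    """Naive Rousseeuw-Croux Sn: for each point take the median distance to
    the points of the sample, then the median of those typical distances,
    rescaled by c."""
    return c * median(median(abs(xi - xj) for xj in x) for xi in x)

x = [1.2, 0.7, 3.4, 2.2, 9.9, 2.8, 1.5]
print(round(Sn(x), 3))
```

Unlike the MAD, $S_n$ does not measure distances from a single central location, so it makes no symmetry assumption; like any scale estimator it is location-invariant and scale-equivariant.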
25,922 | Mean$\pm$SD or Median$\pm$MAD to summarise a highly skewed variable? | "In this paper a more accurate index of asymmetry is studied. Specifically, the use of the left and right variance is proposed and an index of asymmetry based on them is introduced. Several examples demonstrate its usefulness. The question of evaluating more accurately the dispersion of data about the average emerges in all non-symmetric probability distributions. When the population distribution is non-symmetric, the average and variance (or standard deviation) of a set of data do not provide a precise idea of the distribution of the data, especially shape and symmetry. It is argued that the average, the proposed left variance (or left standard deviation) and right variance (or right standard deviation) describe the set of data more accurately."
Link
25,923 | Wondering what this bean plot analysis chart means | Boxplots were really designed for normal data, or at least unimodal data. The Beanplot shows you the actual density curve, which is more informative.
The shape is the density, and the short horizontal lines represent each data point. This combines the best of a boxplot, density plot, and rug plot all in one and is very readable.
Unfortunately, the example that you've chosen decided to add a bunch of longer lines which clutter the graph beyond recognition (for me). [snip]
EDIT: Having now worked with beanplot a bit more, the longer thick lines are the mean (or optionally median) for each bean. The longer thin lines are the data, with a sort of "stacking" where wider lines indicate more duplicate values. (You can also jitter them, which I prefer, but at least the "normal" category already has a fair density of points that jittering might make worse.)
I still think the example you chose is rather cluttered, which could perhaps be cleared up by using jittering instead of stacking.
The paper that describes the R package for making bean plots is a nice read.
25,924 | Wondering what this bean plot analysis chart means | Without having read the whole paper, it appears to be essentially a variant of the boxplot. As such, you could use it where you would have otherwise used a boxplot, such as comparing the univariate distributions of several groups. It displays a line for each point and overlays a kernel density estimate. From looking at it, I would think it might be more informative with small amounts of data, but be too cluttered with more data. It doesn't seem very Earth-shaking to me, at first blush. If you want to know something more, elaborate your question.
25,925 | How can I prove the experiment data follows heavy-tail distribution? | I'm not sure if I'm interpreting your question correctly, so let me know, and I could adapt or delete this answer. First, we don't prove things regarding our data, we just show that something isn't unreasonable. That can be done in several ways, one of which is through statistical tests. In my opinion, however, if you have a pre-specified theoretical distribution, the best approach is just to make a qq-plot. Most people think of qq-plots as only being used to assess normality, but you can plot empirical quantiles against any theoretical distribution that can be specified. If you use R, the car package has an augmented function qq.plot() with a lot of nice features; two that I like are that you can specify a number of different theoretical distributions beyond just the Gaussian (e.g., you could use t for a fatter-tailed alternative), and that it plots a 95% confidence band. If you don't have a specific theoretical distribution, but just want to see if the tails are heavier than expected from a normal, that can be seen on a qq-plot, but can sometimes be hard to recognize. One possibility that I like is to make a kernel density plot as well as a qq-plot, and you could overlay a normal curve on it to boot. The basic R code is plot(density(data)). For a number, you could calculate the kurtosis and see if it's higher than expected. I'm not aware of canned functions for kurtosis in base R; you have to code it up using the equations given on the linked page, but it's not hard to do.
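The kurtosis check at the end is easy to code up in any language; here is a minimal Python sketch (the function name, seed, and simulated samples are illustrative, and the Laplace draw merely stands in for some fatter-tailed alternative):

```python
import random

def excess_kurtosis(xs):
    # Sample excess kurtosis: m4 / m2^2 - 3; approximately 0 for a normal sample.
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

random.seed(1)
normal = [random.gauss(0.0, 1.0) for _ in range(20000)]
# Laplace (double exponential) has excess kurtosis 3 -- a heavy-tailed benchmark.
laplace = [random.expovariate(1.0) * random.choice((-1.0, 1.0)) for _ in range(20000)]

print(excess_kurtosis(normal))   # close to 0
print(excess_kurtosis(laplace))  # well above 0
```

A value far above 0 is the numeric counterpart of the heavy tails you would see on the qq-plot.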
25,926 | Testing the difference of (some) quantile-Q between groups? | You are right to have the word "median" in mind, although Kruskal-Wallis is not a test for medians. What you need is the median test. It tests (asymptotically by chi-square, or exactly by permutations) whether several groups are the same with regard to the proportion of observations falling above/not above some value. By default, the median of the combined sample is taken for that value (hence the name of the test, which is then a test for equality of population medians). But you could specify another value than the median. Any quantile will do. The test then compares the groups with regard to the proportion of cases that fall not above that quantile.
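The mechanics are simple enough to sketch. Assuming a Python setting (the median test itself lives in standard stats packages; this hand-rolled chi-square statistic is only illustrative), one pools the groups, cuts at the chosen quantile, and compares observed versus expected counts above the cut:

```python
def nearest_rank_quantile(xs, q):
    # Crude empirical quantile (nearest rank); fine for illustration.
    s = sorted(xs)
    return s[min(len(s) - 1, int(q * len(s)))]

def quantile_test_stat(groups, q=0.5):
    # Chi-square statistic for "same proportion above the pooled q-quantile".
    pooled = [x for g in groups for x in g]
    cut = nearest_rank_quantile(pooled, q)
    p_above = sum(x > cut for x in pooled) / len(pooled)
    stat = 0.0
    for g in groups:
        n, above = len(g), sum(x > cut for x in g)
        for obs, exp in ((above, n * p_above), (n - above, n * (1 - p_above))):
            if exp > 0:
                stat += (obs - exp) ** 2 / exp
    return stat

g1 = list(range(100))
print(quantile_test_stat([g1, g1], q=0.75))                    # ~0: identical groups
print(quantile_test_stat([g1, [x + 50 for x in g1]], q=0.75))  # large: shifted group
```

With q=0.5 this is the ordinary median test; any other q compares the groups at that quantile instead.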
25,927 | Testing the difference of (some) quantile-Q between groups? | There's an approach for comparing all quantiles of two groups simultaneously:
Simultaneously compare all of the quantiles to get a global sense of where the distributions differ and by how much. For example, low scoring participants in group 1 might be very similar to low scoring participants in group 2, but for high scoring participants, the reverse might be true.
(taken from a script by Rand R. Wilcox)
The method was derived in 1976 by Doksum and Sievers, and is implemented as the sband function in the WRS package for R. The method gives a comparison of all quantiles while controlling the overall $\alpha$ error.
However, you can only compare two groups at once. Maybe you can do pairwise comparisons by adjusting for $\alpha$ inflation.
25,928 | Elbow criteria to determine number of cluster | The idea underlying the k-means algorithm is to try to find clusters that minimize the within-cluster variance (or, up to a constant, the corresponding sum of squares, or SS), which amounts to maximizing the between-cluster SS because the total variance is fixed. As mentioned on the wiki, you can directly use the within SS and look at its variation when increasing the number of clusters (like we would do in Factor Analysis with a screeplot): an abrupt change in how the SS evolves is suggestive of an optimal solution, although this merely rests on visual appreciation. As the total variance is fixed, it is equivalent to study how the ratio of the between and total SS, also called the percentage of variance explained, evolves, because in this case it will present a large gap from one k to the next k+1. (Note that the between/within ratio is not distributed as an F-distribution because k is not fixed; so, tests are meaningless.)
In sum, you just have to compute the squared distance between each data point and its respective center (or centroid), for each cluster--this gives you the within SS, and the total within SS is just the sum of the cluster-specific WSS (transforming them to variances is just a matter of dividing by the corresponding degrees of freedom); the between SS is obtained by subtracting the total WSS from the total SS, the latter being obtained by considering k=1, for example.
By the way, with k=1, WSS=TSS and BSS=0.
If you're after determining the number of clusters, or where to stop with the k-means, you might consider the Gap statistic as an alternative to the elbow criterion:
Tibshirani, R., Walther, G., and Hastie, T. (2001). Estimating the number of clusters in a data set via the gap statistic. J. R. Statist. Soc. B, 63(2): 411-423.
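The bookkeeping described above fits in a few lines. A Python sketch with made-up 2-D points (the clustering itself is assumed to have been done already, e.g. by k-means):

```python
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def within_ss(clusters):
    # Total within-cluster SS: squared distances of points to their own centroid.
    wss = 0.0
    for pts in clusters:
        c = centroid(pts)
        wss += sum(sq_dist(p, c) for p in pts)
    return wss

points = [(0.0, 0.0), (0.0, 1.0), (10.0, 0.0), (10.0, 1.0)]
tss = within_ss([points])                  # k=1: WSS equals TSS, BSS is 0
wss = within_ss([points[:2], points[2:]])  # a 2-cluster solution
bss = tss - wss
print(tss, wss, bss)
```

Running this over k = 1, 2, 3, ... and plotting WSS (or BSS/TSS) against k gives the elbow curve the answer describes.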
25,929 | Dummy variable trap issues | You should get the "same" estimates no matter which variable you omit; the coefficients may be different, but the estimates of particular quantities or expectations should be the same across all the models.
In a simple case, let $x_i=1$ for men and 0 for women. Then, we have the model:
$$\begin{align*}
E[y_i \mid x_i] &= x_iE[y_i \mid x_i = 1] + (1 - x_i)E[y_i \mid x_i = 0] \\
&= E[y_i \mid x_i=0] + \left[E[y_i \mid x_i= 1] - E[y_i \mid x_i=0]\right]x_i \\
&= \beta_0 + \beta_1 x_i.
\end{align*}$$
Now, let $z_i=1$ for women. Then
$$\begin{align*}
E[y_i \mid z_i] &= z_iE[y_i \mid z_i = 1] + (1 - z_i)E[y_i \mid z_i = 0] \\
&= E[y_i \mid z_i=0] + \left[E[y_i \mid z_i= 1] - E[y_i \mid z_i=0]\right]z_i \\
&= \gamma_0 + \gamma_1 z_i .
\end{align*}$$
The expected value of $y$ for women is $\beta_0$ and also $\gamma_0 + \gamma_1$. For men, it is $\beta_0 + \beta_1$ and $\gamma_0$.
These results show how the coefficients from the two models are related. For example, $\beta_1 = -\gamma_1$. A similar exercise using your data should show that the "different" coefficients that you get are just sums and differences of one another.
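With a single binary dummy, the OLS coefficients are just group means and their difference, so the identities above can be checked with toy numbers (made up here):

```python
from statistics import mean

men = [5.0, 6.0, 7.0]
women = [2.0, 3.0, 4.0]

# x = 1 for men: intercept is the women's mean, slope the men-women gap.
beta0, beta1 = mean(women), mean(men) - mean(women)
# z = 1 for women: intercept is the men's mean, slope the women-men gap.
gamma0, gamma1 = mean(men), mean(women) - mean(men)

print(beta1 == -gamma1)          # the slopes are sign flips of each other
print(beta0 == gamma0 + gamma1)  # women's expectation agrees across models
print(beta0 + beta1 == gamma0)   # men's expectation agrees across models
```

Whichever category you omit, the fitted group expectations are identical; only the parameterization changes.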
25,930 | Dummy variable trap issues | James, first of all why regression analysis, but not ANOVA (there are many specialists in this kind of analysis that could help you)? The pros for ANOVA is that all you actually interested in are differences in the means of different groups described by combinations of dummy variables (unique categories, or profiles). Well, if you do study impacts of each of categorical variable you include, you may run regression as well.
I think the type of the data you do have here is described in the sense of conjoint analysis: many attributes of the object (gender, age, education, etc.) each having several categories, thus you omit the whole largest profile, not just one dummy variable. A common practise is to code the categories within the attribute as follows (this link may be useful, you probably do not do conjoint analysis here, but coding is similar): suppose you have $n$ categories (three, as you suggested, male, female, unknown) then, first two are coded as usual you do include two dummies (male, female), giving $(1, 0)$ if male, $(0, 1)$ if female, and $(-1, -1)$ if unknown. In this way the results indeed will be placed around intercept term. You may code in a different way, however, but will lose the mentioned interpretation advantage. To sum up, you drop one category from each category, and code your observations in the described way. You do include intercept term also.
Well to omit the largest profile's categories seems good for me, though not so important, at least it is not empty I think. Since you code the variables in specific manner, joint statistical significance of included dummy variables (both male female, could be tested by F test) imply the significance of the omitted one.
It may happen that the results slightly different, but may be it is the wrong coding that influence this? | Dummy variable trap issues | James, first of all why regression analysis, but not ANOVA (there are many specialists in this kind of analysis that could help you)? The pros for ANOVA is that all you actually interested in are diff | Dummy variable trap issues
James, first of all why regression analysis, but not ANOVA (there are many specialists in this kind of analysis that could help you)? The pros for ANOVA is that all you actually interested in are differences in the means of different groups described by combinations of dummy variables (unique categories, or profiles). Well, if you do study impacts of each of categorical variable you include, you may run regression as well.
I think the type of the data you do have here is described in the sense of conjoint analysis: many attributes of the object (gender, age, education, etc.) each having several categories, thus you omit the whole largest profile, not just one dummy variable. A common practise is to code the categories within the attribute as follows (this link may be useful, you probably do not do conjoint analysis here, but coding is similar): suppose you have $n$ categories (three, as you suggested, male, female, unknown) then, first two are coded as usual you do include two dummies (male, female), giving $(1, 0)$ if male, $(0, 1)$ if female, and $(-1, -1)$ if unknown. In this way the results indeed will be placed around intercept term. You may code in a different way, however, but will lose the mentioned interpretation advantage. To sum up, you drop one category from each category, and code your observations in the described way. You do include intercept term also.
Well to omit the largest profile's categories seems good for me, though not so important, at least it is not empty I think. Since you code the variables in specific manner, joint statistical significance of included dummy variables (both male female, could be tested by F test) imply the significance of the omitted one.
It may happen that the results slightly different, but may be it is the wrong coding that influence this? | Dummy variable trap issues
James, first of all why regression analysis, but not ANOVA (there are many specialists in this kind of analysis that could help you)? The pros for ANOVA is that all you actually interested in are diff |
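To see the $(1,0)/(0,1)/(-1,-1)$ coding in action, here is a hedged Python sketch (made-up data, hand-rolled least squares via the normal equations): with balanced groups the intercept comes out as the grand mean of the category means and each coefficient as a deviation from it, with the omitted category's effect recovered as minus the sum of the others.

```python
def solve3(A, b):
    # Gauss-Jordan elimination for a 3x3 system (pivots are nonzero here).
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        M[i] = [v / M[i][i] for v in M[i]]
        for j in range(3):
            if j != i:
                f = M[j][i]
                M[j] = [vj - f * vi for vj, vi in zip(M[j], M[i])]
    return [M[i][3] for i in range(3)]

groups = {"male": [5.0, 6.0], "female": [2.0, 3.0], "unknown": [8.0, 9.0]}
codes = {"male": (1, 0), "female": (0, 1), "unknown": (-1, -1)}
X, y = [], []
for g, ys in groups.items():
    for v in ys:
        X.append([1.0, *codes[g]])
        y.append(v)

# Ordinary least squares via the normal equations X'X b = X'y.
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * v for r, v in zip(X, y)) for i in range(3)]
b0, b1, b2 = solve3(XtX, Xty)
print(b0, b1, b2)   # intercept = grand mean of the three category means
print(-(b1 + b2))   # implied effect of the omitted ("unknown") category
```

Here the category means are 5.5, 2.5, and 8.5, so the intercept is 5.5 and the "unknown" effect is +3, exactly as the interpretation advantage promises.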
25,931 | Dummy variable trap issues | Without knowing the exact nature of your analysis, have you considered effects coding? This way each variable would represent the effect of that trait/attribute vs the overall grand mean rather than some particular omitted category. I believe you'll still be missing a coefficient for one of the categories/attributes - the one you assign a -1 to. Still, with this many dummies, I would think that the grand mean would make a more meaningful comparison group than any particular omitted category.
25,932 | How to correlate two time series with gaps and different time bases? | The question concerns calculating the correlation between two irregularly sampled time series (one-dimensional stochastic processes) and using that to find the time offset where they are maximally correlated (their "phase difference").
This problem is not usually addressed in time series analysis, because time series data are presumed to be collected systematically (at regular intervals of time). It is rather the province of geostatistics, which concerns the multidimensional generalizations of time series. The archetypal geostatistical dataset consists of measurements of geological samples at irregularly spaced locations.
With irregular spacing, the distances among pairs of locations vary: no two distances may be the same. Geostatistics overcomes this with the empirical variogram. This computes a "typical" (often the mean or median) value of $(z(p) - z(q))^2 / 2$--the "semivariance"--where $z(p)$ denotes a measured value at point $p$ and the distance between $p$ and $q$ is constrained to lie within an interval called a "lag". If we assume the process $Z$ is stationary and has a covariance, then the expectation of the semivariance equals the maximum covariance (equal to $Var(Z(p))$ for any $p$) minus the covariance between $Z(p)$ and $Z(q)$. This binning into lags copes with the irregular spacing problem.
When an ordered pair of measurements $(z(p), w(p))$ is made at each point, one can similarly compute the empirical cross-variogram between the $z$'s and the $w$'s and thereby estimate the covariance at any lag. You want the one-dimensional version of the cross-variogram. The R packages gstat and sgeostat, among others, will estimate cross-variograms. Don't worry that your data are one-dimensional; if the software won't work with them directly, just introduce a constant second coordinate: that will make them appear two-dimensional.
With two million points you should be able to detect small deviations from stationarity. It's possible the phase difference between the two time series could vary over time, too. Cope with this by computing the cross-variogram separately for different windows spaced throughout the time period.
@cardinal has already brought up most of these points in comments. The main contribution of this reply is to point towards the use of spatial statistics packages to do your work for you and to use techniques of geostatistics to analyze these data. As far as computational efficiency goes, note that the full convolution (cross-variogram) is not needed: you only need its values near the phase difference. This makes the effort $O(nk)$, not $O(n^2)$, where $k$ is the number of lags to compute, so it might be feasible even with out-of-the-box software. If not, the direct convolution algorithm is easy to implement.
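For intuition, the empirical cross-variogram itself is short to write down. A naive O(n²) Python sketch with made-up irregular sampling times (gstat and sgeostat do this properly, with neighbor restrictions and much better performance):

```python
def cross_variogram(t, z, w, lag_width, n_lags):
    # Mean of (z_i - z_j)(w_i - w_j)/2 over point pairs, binned by time separation.
    sums, counts = [0.0] * n_lags, [0] * n_lags
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            b = int(abs(t[i] - t[j]) / lag_width)
            if b < n_lags:
                sums[b] += 0.5 * (z[i] - z[j]) * (w[i] - w[j])
                counts[b] += 1
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

t = [0.0, 1.0, 3.0, 4.0, 7.0, 9.0, 10.0, 14.0, 15.0, 20.0]  # irregular sampling
z = t[:]                   # with z == w this reduces to the plain semivariogram
gamma = cross_variogram(t, z, z, lag_width=3.0, n_lags=3)
print(gamma)               # increases with lag for this trending series
```

Binning by lag is exactly how the irregular-spacing problem is handled: pairs need not share a common sampling grid, only a common lag bin.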
25,933 | Alternative funnel plot, without using standard error (SE) | Q: Can I still make a funnel plot with effect size on the horizontal axis and total sample size n (n=n1+n2) on the vertical axis?
A: Yes
Q: How should such a funnel plot be interpreted?
A: It is still a funnel plot. However, funnel plots should be interpreted with caution. For example, if you have only 5-10 effect sizes, a funnel plot is useless. Furthermore, although funnel plots are a helpful visualization technique, their interpretation can be misleading. The presence of an asymmetry does not prove the existence of publication bias. Egger et al. (1997: 632f.) mention a number of reasons that can result in funnel plot asymmetries, e.g. true heterogeneity, data irregularities like methodologically poorly designed small studies, or fraud. So, funnel plots can be helpful in identifying possible publication bias; however, they should always be combined with a statistical test.
Q: Is such a plot acceptable when the standard error is not known?
A: Yes
Q: Is it the same as the classical funnel plot with SE or precision=1/SE on the vertical axis?
A: No, the shape of the 'funnel' can be different.
Q: Is its interpretation different?
A: Yes, see above
Q: How should I set the lines to make the equilateral triangle?
A: What do you mean by "lines to make the equilateral triangle"? Do you mean the 95%-CI lines? You will need the standard errors...
You also might be interested in:
Peters, Jaime L., Alex J. Sutton, David R. Jones, Keith R. Abrams, and Lesly Rushton. 2006. Comparison of two methods to detect publication bias in meta-analysis. Journal of the American Medical Association 295, no. 6: 676--80. (see "An Alternative to Egger's Regression Test")
They propose a statistical test which focuses on sample size instead of standard errors.
By the way, do you know the book "Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments"? It will answer a lot of your questions.
25,934 | In R, does "glmnet" fit an intercept? | Yes, an intercept is included in a glmnet model, but it is not regularized (cf. Regularization Paths for Generalized Linear Models via Coordinate Descent, p. 13). More details about the implementation could certainly be obtained by carefully looking at the code (for a gaussian family, it is the elnet() function that is called by glmnet()), but it is in Fortran.
You could try the penalized package, which allows you to remove the intercept by passing unpenalized = ~0 to penalized().
> x <- matrix(rnorm(100*20),100,20)
> y <- rnorm(100)
> fit1 <- penalized(y, penalized=x, unpenalized=~0,
standardize=TRUE)
> fit2 <- lm(y ~ 0+x)
> plot((coef(fit1) + coef(fit2))/2, coef(fit2)-coef(fit1))
To get Lasso regularization, you might try something like
> fit1b <- penalized(y, penalized=x, unpenalized=~0,
standardize=TRUE, lambda1=1, steps=20)
> show(fit1b)
> plotpath(fit1b)
As can be seen in the next figure, there is little difference between the regression parameters computed with the two methods (left), and you can plot the Lasso path solution very easily (right).
25,935 | Finding the average GPS point | One of the problems with multivariate data is deciding on, and then interpreting, a suitable metric for calculating distances, hence clever but somewhat hard-to-explain concepts such as Mahalanobis distance. But in this case surely the choice is obvious - Euclidean distance. I'd suggest a simple heuristic algorithm something like:
Calculate the (unweighted) centroid of the data points, i.e. the (unweighted) means of the 2 coordinates
Calculate the Euclidean distance of all the readings from the centroid
Exclude any readings that are further than a certain distance (to be determined based on your experience and knowledge of the technology, or failing that a bit of trial and error cross-validation - 100m, 1km, 10km??)
Calculate the weighted average of both coords of the remaining points, weighting by the inverse of the HDOP score (or some monotonic function of it - I had a quick look at the Wikipedia page linked in the question and think maybe you don't need such a function, but I'd need to study it further to be sure)
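The four steps above can be sketched directly. This is a minimal Python illustration; the projected metric coordinates, the 100 m cutoff, and the inverse-HDOP weights are all placeholder assumptions, not part of the original answer's code:

```python
import math

def robust_gps_average(points, hdops, cutoff=100.0):
    """points: (x, y) pairs in projected metric coordinates; hdops: matching HDOP scores."""
    # Step 1: unweighted centroid of the two coordinates
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    # Steps 2-3: Euclidean distance from the centroid; drop readings beyond the cutoff
    kept = [(p, h) for p, h in zip(points, hdops)
            if math.hypot(p[0] - cx, p[1] - cy) <= cutoff]
    # Step 4: average the survivors, weighting by inverse HDOP (lower HDOP = better fix)
    weights = [1.0 / h for _, h in kept]
    wx = sum(p[0] * w for (p, _), w in zip(kept, weights)) / sum(weights)
    wy = sum(p[1] * w for (p, _), w in zip(kept, weights)) / sum(weights)
    return wx, wy
```

Note that a single wildly extreme reading can drag the centroid itself so far that the cutoff excludes everything, which is one motivation for the more robust refinements (down-weighting, M-estimators) the answer mentions.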
There are clearly several ways to make this more sophisticated, such as down-weighting outliers or using M-estimators rather than simply excluding them, but I'm not sure whether such sophistication is really necessary here.
25,936 | Finding the average GPS point | Rob Hyndman recently posed a question about detecting outliers in multivariate data. The answers may provide a couple of possible approaches (and otherwise, you may want to put the question of finding 2-d outliers in a separate question).
And you can average your remaining GPS data component by component - add all the first components up and divide by the number of points, that will give you the first component of the average. Same with the second components.
This averaging can be weighted by HDOP. Sum up the products of the first component, multiplied with the corresponding HDOP score, and divide the sum by the sum of the HDOP scores. Same with the second components.
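As a concrete sketch (Python for illustration), the component-wise weighted average described above looks like this. Note that this answer weights by the raw HDOP score; weighting by its inverse, as the previous answer suggests, may be preferable, since a lower HDOP means a more precise fix:

```python
def hdop_weighted_average(points, hdops):
    # Component-wise weighted mean: sum(component * weight) / sum(weights),
    # with the HDOP scores used directly as weights, as described above.
    total = sum(hdops)
    avg_x = sum(x * h for (x, _), h in zip(points, hdops)) / total
    avg_y = sum(y * h for (_, y), h in zip(points, hdops)) / total
    return avg_x, avg_y
```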
I'll take the liberty of removing the "normal-distribution" tag...
25,937 | Finding the average GPS point | Call the HDOP the independent variable. Use this for weighting later on. So you have sets of co-ordinates - call this (x1,y1); (x2,y2), etc...
First ignore outliers. Calculate the weighted averages of the x co-ordinates as [(x1*h1)+(x2*h2) +....+ (xn*hn)] / [sum(h1,h2,...,hn)] where h1,h2,... is the HDOP value. Do the same for the y co-ordinates. This will give a fairly accurate average value for each co-ordinate.
Dealing with outliers can be a bit tricky. How do you know if they are outliers or not? Strictly you need to determine a statistical fit to the observations and, within a confidence interval, determine if they are genuine or not. Looking at the question, the Poisson distribution does come to mind. But this is probably a lot of work and I'm sure you don't want to go into this. Maybe use an approximation? Say you assume that the average co-ordinate value is a good mean to use. Then determine a value for the standard deviation (for a Poisson distribution the standard deviation is the square root of the mean). Then approximate using the normal distribution and a 95% confidence interval: if an observation is outside the interval (mean - 1.96*std dev, mean + 1.96*std dev) then it is an outlier? Give this a go. Maybe do a bit of reading on the Poisson distribution and incorporate the HDOP value into this?
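A minimal version of that normal-approximation screen (Python for illustration; the 1.96 cutoff for a two-sided 95% interval is a placeholder choice, and in practice it would be applied per coordinate or to the distances from the mean point):

```python
import statistics

def flag_outliers(values, z=1.96):
    # Flag observations lying outside mean +/- z * sample standard deviation
    m = statistics.fmean(values)
    s = statistics.stdev(values)
    return [v for v in values if abs(v - m) > z * s]
```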
25,938 | How to look for valleys in a graph? | You could use some sort of Monte Carlo approach, using for instance the moving average of your data.
Take a moving average of the data, using a window of a reasonable size (I guess it's up to you deciding how wide).
Troughs in your data will (of course) be characterized by a lower average, so now you need to find some "threshold" to define "low".
To do that you randomly swap the values of your data (e.g. using sample()) and recalculate the moving average for your swapped data.
Repeat this last step a reasonably large number of times (>5000) and store all the averages of these trials. So essentially you will have a matrix with 5000 rows, one per trial, each one containing the moving average for that trial.
At this point for each column you pick the 5% (or 1% or whatever you want) quantile, that is the value under which lies only 5% of the means of the randomized data.
You now have a "confidence limit" (I'm not sure if that is the correct statistical term) to compare your original data with. If you find a part of your data that is lower than this limit then you can call that a through.
Of course, bear in mind that neither this nor any other mathematical method could ever give you any indication of biological significance, although I'm sure you're well aware of that.
EDIT - an example
require(ares) # for the ma (moving average) function
# Some data with peaks and troughs
values <- cos(0.12 * 1:100) + 0.3 * rnorm(100)
plot(values, t="l")
# Calculate the moving average with a window of 10 points
mov.avg <- ma(values, 1, 10, FALSE)
numSwaps <- 1000
mov.avg.swp <- matrix(0, nrow=numSwaps, ncol=length(mov.avg))
# The swapping may take a while, so we display a progress bar
prog <- txtProgressBar(0, numSwaps, style=3)
for (i in 1:numSwaps)
{
# Swap the data
val.swp <- sample(values)
# Calculate the moving average
mov.avg.swp[i,] <- ma(val.swp, 1, 10, FALSE)
setTxtProgressBar(prog, i)
}
# Now find the 1% and 5% quantiles for each column
limits.1 <- apply(mov.avg.swp, 2, quantile, 0.01, na.rm=T)
limits.5 <- apply(mov.avg.swp, 2, quantile, 0.05, na.rm=T)
# Plot the limits
points(limits.5, t="l", col="orange", lwd=2)
points(limits.1, t="l", col="red", lwd=2)
This will just allow you to graphically find the regions, but you can easily find them using something along the lines of which(mov.avg < limits.5).
25,939 | How to look for valleys in a graph? | I'm completely ignorant of these data, but assuming the data are ordered (not in time, but by position?) it makes sense to make use of time series methods. There are lots of methods for identifying temporal clusters in data. Generally they are used to find high values but can be used for low values grouped together. I'm thinking here of scan statistics, cumulative sum statistics (and others) used to detect disease outbreaks in count data. Examples of these methods are in the surveillance package and the DCluster package.
25,940 | How to look for valleys in a graph? | There are many options for this, but one good one: you can use the msExtrema function in the msProcess package.
Edit:
In financial performance analysis, this kind of analysis is often performed using a "drawdown" concept. The PerformanceAnalytics package has some useful functions to find these valleys. You could use the same algorithm here if you treat your observations as a time series.
Here are some examples of how you might be able to apply this to your data (where the "dates" are irrelevant but just used for ordering), but the first elements in the zoo object would be your data:
library(PerformanceAnalytics)
x <- zoo(cumsum(rnorm(50)), as.Date(1:50))
findDrawdowns(x)
table.Drawdowns(x)
chart.Drawdown(x)
25,941 | How to look for valleys in a graph? | Some of the Bioconductor packages (e.g., ShortRead, Biostrings, BSgenome, IRanges, genomeIntervals) offer facilities for dealing with genome positions or coverage vectors, e.g. for ChIP-seq and identifying enriched regions. As for the other answers, I agree that any method relying on ordered observations with some threshold-based filter would allow you to isolate low signal within a specific bandwidth.
Maybe you can also look at the methods used to identify so-called "islands"
Zang, C., Schones, D.E., Zeng, C., Cui, K., Zhao, K., and Peng, W. (2009). A clustering approach for identification of enriched domains from histone modification ChIP-Seq data. Bioinformatics, 25(15), 1952-1958.
25,942 | References for how to plan a study | I agree with the point that statistics consultants are often brought in later on a project when it's too late to remedy design flaws. It's also true that many statistics books give scant attention to study design issues.
You say you want designs "preferably for a wide range of methods (e.g. t-test, GLM, GAM, ordination techniques...". I see designs as relatively independent of statistical method: e.g., experiments (between subjects and within subjects factors) versus observational studies; longitudinal versus cross-sectional; etc. There are also a lot of issues related to measurement, domain specific theoretical knowledge, and domain specific study design principles that need to be understood in order to design a good study.
In terms of books, I'd be inclined to look at domain specific books. In psychology (where I'm from) this means books on psychometrics for measurement, a book on research methods, and a book on statistics, as well as a range of even more domain specific research method books. You might want to check out Research Methods Knowledge Base for a free online resource for the social sciences.
Published journal articles are also a good guide to what is best practice in a particular domain.
25,943 | References for how to plan a study | In general, I would say any book that has DOE (design of experiments) in the title would fit the bill (and there are MANY).
My rule of thumb for such a resource would be to start with the Wikipedia page; in particular, relevant to your question, note the principles of experimental design following Ronald A. Fisher.
But a more serious answer would be domain specific (clinical trials have a huge manual, but for a study on mice you'd probably go with some other field-related book).
25,944 | References for how to plan a study | My rule of thumb is "repeat more than you think is sufficient".
25,945 | References for how to plan a study | Answering with an aphorism, I believe that your study design will be successful as soon as it actually exists in its full-fledged form. The game of reviewing as it is played in academia is primarily a game of academics showing to each other that they have not completed that step in its full depth, e.g. by violating assumptions or omitting biases where they should be expected. If study design is a skill, it's the skill of making your research bulletproof to these critics.
Your question is very interesting but I am afraid that there is no short answer. To the best of my knowledge, the only way to learn thoroughly about research designs, whether experimental or observational, is to read the literature in your field of specialisation, and then to go the extra mile by connecting with academics in order to learn even more on how they work, in order to, eventually, write up your own research design.
In my field (European political science), we generically offer "research design" courses that span all types of studies, but even then we miss important trends and also lack a deep understanding of our methods. After taking at least three of these courses, I have become convinced that no academic resource can replace learning from other academics, before confronting real-world settings directly.
I guess that your field also has these 'methods journals' that can be as painfully boring and complex to the outsider as they are helpful and interesting to actual 'study designers' -- and so I would recommend that you start digging into this literature first, eventually tracking down the recurring bibliographic items that might help you most with study design in biology/ecology. Google Scholar definitely flags a few books with the words 'ecology research methods'.
25,946 | References for how to plan a study | This might not be 100% what you are looking for, but I can name a few books that span both quantitative and qualitative research designs in social sciences. (Personally, I find it very helpful to have the full options at hand to adapt the design to your research question, the existing knowledge in the field, the unit of comparison, and the accessibility of data.)
Gschwend, T., & Schimmelfennig, F. (Eds.). (2007). Research Design in Political Science: How to Practice What They Preach. Houndsmill, et al.: Macmillan.
Leavy, P. (2017). Research Design: Quantitative, Qualitative, Mixed Methods, Arts-based, and Community-based Participatory Research Approaches. [S.l.]: Guilford.
(Further off topic to the original question, but potentially helpful for readers who want to learn more about the underlying logic of research designs: the grand debate about the commonalities and differences between research designs in qualitative and quantitative modes of enquiry. The first book basically sketches out how qualitative research could, should, and does follow causal logic; the second provides a bit of a "Yes, but..." and has concluding and synthesizing chapters.)
King, G., Keohane, R. O., & Verba, S. (1994). Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton, NJ: Princeton University Press.
Brady, H. E., & Collier, D. (Eds.). (2004/2010). Rethinking Social Inquiry: Diverse Tools, Shared Standards (2nd ed.). Lanham et al.: Rowman & Littlefield Publishers.
25,947 | The trinity of tests in maximum likelihood: what to do when faced with contradicting conclusions? | I do not know the literature in the area well enough to offer a direct response. However, it seems to me that if the three tests differ then that is an indication that you need further research/data collection in order to definitively answer your question.
You may also want to look at this Google Scholar search
Update in response to your comment:
If collecting additional data is not possible then there is one workaround. Do a simulation which mirrors your data structure, sample size and your proposed model. You can set the parameters to some pre-specified values. Estimate the model using the data generated and then check which one of the three tests points you to the right model. Such a simulation would offer some guidance as to which test to use for your real data. Does that make sense? | The trinity of tests in maximum likelihood: what to do when faced with contradicting conclusions? | I do not know the literature in the area well enough to offer a direct response. However, it seems to me that if the three tests differ then that is an indication that you need further research/data c | The trinity of tests in maximum likelihood: what to do when faced with contradicting conclusions?
25,948 | The trinity of tests in maximum likelihood: what to do when faced with contradicting conclusions? | I won't give a definitive answer in terms of ranking the three. Build 95% CIs around your parameters based on each, and if they're radically different, then your first step should be to dig deeper. Transform your data (though the LR will be invariant), regularize your likelihood, etc. In a pinch though, I would probably opt for the LR test and associated CI. A rough argument follows.
The LR is invariant under the choice of parametrization (e.g. T versus logit(T)). The Wald statistic assumes normality of (T - T0)/SE(T). If this fails, your CI is bad. The nice thing about the LR is that you don't need to find a transform f(T) to satisfy normality: the LR-based 95% CI is the same whether you work on the scale of T or of f(T). Also, if your likelihood isn't quadratic, the Wald 95% CI, which is symmetric, can be kooky since it may prefer values with lower likelihood to those with higher likelihood.
Another way to think about the LR is that it's using more information, loosely speaking, from the likelihood function. The Wald is based on the MLE and the curvature of the likelihood at null. The Score is based on the slope at null and curvature at null. The LR evaluates the likelihood under the null, and the likelihood under the union of the null and alternative, and combines the two. If you're forced to pick one, this may be intuitively satisfying for picking the LR.
Keep in mind that there are other reasons, such as convenience or computational, to opt for the Wald or Score. The Wald is the simplest and, given a multivariate parameter, if you're testing for setting many individual ones to 0, there are convenient ways to approximate the likelihood. Or if you want to add a variable at a time from some set, you may not want to maximize the likelihood for each new model, and the implementation of Score tests offers some convenience here. The Wald and Score become attractive as your models and likelihood become unattractive. (But I don't think this is what you were questioning, since you have all three available ...)
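To make the three ingredients concrete, here is a numerical sketch for the simplest case, a Bernoulli proportion (the numbers x, n, p0 are made up for illustration):

```python
import math

# Illustrative numbers: x successes in n Bernoulli trials, testing H0: p = p0.
x, n, p0 = 60, 100, 0.5
ph = x / n  # the MLE

# Wald: distance of the MLE from the null, scaled by the variance at the MLE
wald = (ph - p0) ** 2 / (ph * (1 - ph) / n)

# Score: slope of the log-likelihood at the null, scaled by the information at the null
score = (ph - p0) ** 2 / (p0 * (1 - p0) / n)

# LR: twice the log-likelihood gap between the MLE and the null
lr = 2 * (x * math.log(ph / p0) + (n - x) * math.log((1 - ph) / (1 - p0)))

# chi-square(1) upper-tail p-values via the stdlib identity sf(s) = erfc(sqrt(s/2))
pvals = [math.erfc(math.sqrt(s / 2)) for s in (wald, score, lr)]

print(wald, score, lr)  # approximately 4.167, 4.000 and 4.027
print(pvals)            # all in the 0.04-0.05 range, but not identical
```

In this example the three statistics are close but not equal, so a borderline p-value near 0.05 could land on different sides of the cutoff depending on which test you use.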
25,949 | should I use na.omit or na.exclude in a linear model (in R)? | The only benefit of na.exclude over na.omit is that the former will retain the original number of rows in the data. This may be useful where you need to retain the original size of the dataset - for example it is useful when you want to compare predicted values to original values. With na.omit you will end up with fewer rows so you won't as easily be able to compare.
25,950 | What does "Parameters are fixed and data vary" in frequentists' term and "Parameters vary and data are fixed" in Bayesians' term exactly mean? | In frequentist philosophy, parameters are treated as non-random objects, while data are treated as random, hence "parameters are fixed and data vary".
In Bayesian philosophy, parameters are treated as random objects, and inference is performed by conditioning on an observed (fixed) set of data, hence "parameters vary and data are fixed". By "parameters are treated as random objects" we mean that parameters have distributions, much like observations have distributions.
Note however, the interpretation is that this randomness reflects our belief of what the true underlying parameter is. In other words both Bayesians and frequentists agree that a true fixed parameter exists, but Bayesians further encode beliefs of what values this parameter might take on, in the form of a distribution.
To illustrate the difference in philosophies, consider an inference problem where we aim to construct an interval estimate for some parameter $\theta$ which is associated to the model by the sampling distribution whose density we denote as $f(X | \theta)$.
As a frequentist, you would infer a confidence interval, and a credible interval as a Bayesian.
Under the frequentist paradigm, you observe some data $X=x$ and construct a confidence interval by manipulating $x$, i.e., you have some function $C$ that maps $x$ to some interval. Because $X$ is a random variable, and $C$ is just a function of $X$, we are essentially constructing "random" interval estimates. The parameter is treated as a fixed, unknown constant. The meaning of confidence intervals is thus the probability of this random interval $C(X)$ capturing the fixed unknown constant $\theta$. Note this means that if you drew, say, $100$ independent datasets $x$ and constructed a 95% confidence interval from each, approximately $95$ of those intervals would capture $\theta$.
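That coverage statement is easy to check numerically. A small sketch (all numbers below are illustrative assumptions) using a known-variance normal mean, so that the 95% z-interval is exact:

```python
import random

def coverage(mu=3.0, sigma=2.0, n=25, reps=10000, seed=7):
    """Fraction of 95% z-intervals for a normal mean that capture the true mu.

    The interval C(X) = xbar +/- 1.96*sigma/sqrt(n) is random because the data
    are random; mu itself is a fixed, unknown constant."""
    rng = random.Random(seed)
    half = 1.96 * sigma / n ** 0.5
    hit = 0
    for _ in range(reps):
        xbar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        hit += (xbar - half) <= mu <= (xbar + half)
    return hit / reps

print(coverage())  # close to 0.95 by construction
```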
Under the Bayesian paradigm, you begin by encoding your belief of what values the parameter might take on, say with a distribution $\pi_0$. Then you again observe some data $X=x$. To derive a credible interval you infer your updated belief, encoded as a distribution called the posterior distribution, which we denote $\pi_1$. The posterior distribution is defined as
$$\pi_1(\theta | x) = \frac{f(x|\theta)\pi_0(\theta)}{p(x)}.$$
Here we see our posterior encodes our uncertainty of $\theta$ in the form of a distribution, much like how we encoded our belief prior to observing the data.
The data here is fixed in the sense that our estimate is conditioned upon what is observed. The credible interval is then taken as an interval of the posterior. The credible interval is interpreted as the probability of the parameter taking on values in the interval.
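For a concrete (illustrative) instance of this posterior, take the conjugate Bernoulli/Beta case, where an equal-tailed credible interval can be read off Monte Carlo draws from $\pi_1$ using only the standard library:

```python
import random

# Conjugate sketch: Bernoulli likelihood with a uniform Beta(1, 1) prior pi_0.
# After x successes in n trials the posterior pi_1 is Beta(1 + x, 1 + n - x).
x, n = 60, 100           # illustrative data
a, b = 1 + x, 1 + n - x  # posterior parameters

rng = random.Random(0)
draws = sorted(rng.betavariate(a, b) for _ in range(20000))

# Equal-tailed 95% credible interval from Monte Carlo quantiles of the posterior
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(lo, hi)  # read as: P(lo <= theta <= hi | data) = 0.95
```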
25,951 | What does "Parameters are fixed and data vary" in frequentists' term and "Parameters vary and data are fixed" in Bayesians' term exactly mean? | In Bayesian statistics we condition upon the observed data. The Bayesian part of your statement means that the data are known (and hence fixed to known values) and that the parameters are unknown (and hence allowed to vary and take on any plausible values).
In frequentist statistics, on the other hand, we compare the observed data to data that could have been observed. So we consider all sorts of hypothetical data (the data isn't fixed). The fact that the parameters are fixed is a bit more nuanced, but it essentially means that the results neither are, nor use, probability distributions for the unknown parameters. The frequentist results reduce to statements that assume specific values of the parameters or that are true for any assumed values. E.g., confidence intervals would exclude the true parameter values (whatever they are assumed to be) at a pre-specified rate.
25,952 | When do posteriors converge to a point mass? | Convergence of the posterior due to convergence of the likelihood
One way to look at 'convergence' is in a frequentist way, for increasing sample size the posterior will, with increasing probability, be high for the true parameter and low for the false parameter.
For this we can use the Bayes factor
$$\frac{P(\theta_1\vert x)}{P(\theta_0\vert x)} = \frac{P(x \vert \theta_1)}{P(x \vert \theta_0)} \frac{P(\theta_1)}{P(\theta_0)} $$
where $\theta_0$ is the true parameter value and $\theta_1$ is any other alternative value. (It may seem strange to speak about a true parameter in a Bayesian context, but the same could be said of convergence of the posterior, which is really a frequentist property of the posterior.)
Assume that the likelihood ratio ${P(x \vert \theta_1)}/{P(x \vert \theta_0)}$ will converge to 0 in probability for all values $\theta_1$ that do not have a likelihood function that is the same as the likelihood function for the true parameter value $\theta_0$. (we will show that later)
So if ${P(x \vert \theta_1)}/{P(x \vert \theta_0)}$ converges to zero, and if $P(\theta_0)$ is nonzero, then ${P(\theta_1\vert x)}/{P(\theta_0\vert x)}$ converges to zero as well. And this implies that the posterior $P(\theta \vert x)$ concentrates at the point $\theta_0$.
What are the necessary conditions for a model's posterior to converge to a point mass in the limit of infinite observations?
So you need two conditions:
1. The likelihood functions of two different parameter values must be different (the model is identified).
2. The prior $P(\theta)$ is non-zero for the correct $\theta$ (you can argue similarly for densities $f(\theta)$ as prior).
Intuitive: If your prior gives zero density/probability to the true $\theta$ then the posterior will never give a non-zero density/probability to the true $\theta$, no matter how large sample you take.
Convergence of the likelihood ratio to zero
The likelihood ratio of a sample of size $n$ converges to zero (when $\theta_1$ is not the true parameter).
$$ \frac{P(x_1, x_2, \dots , x_n \vert \theta_1)}{P(x_1, x_2, \dots , x_n \vert \theta_0)} \quad \xrightarrow{P} \quad 0$$
or for the negative log-likelihood ratio
$$-\Lambda_{\theta_1,n} = - \log \left( \frac{P(x_1, x_2, \dots , x_n \vert \theta_1)}{P(x_1, x_2, \dots , x_n \vert \theta_0)} \right) \quad \xrightarrow{P} \quad \infty$$
We can show this by using the law of large numbers (and we need to assume that the measurements are independent).
If we assume that the measurements are independent then we can view the log-likelihood for a sample of size $n$ as the sum of the values of the log-likelihood for single measurements
$$\Lambda_{\theta_1,n} = \log \left( \frac{P(x_1, x_2, \dots , x_n \vert \theta_1)}{P(x_1, x_2, \dots , x_n \vert \theta_0)} \right) = \log \left( \prod_{i=1}^n \frac{P(x_i \vert \theta_1)}{P(x_i \vert \theta_0)} \right) = \sum_{i=1}^n \log \left( \frac{P(x_i \vert \theta_1)}{P(x_i \vert \theta_0)} \right)$$
Note that the expectation value of the negative log-likelihood
$$E\left[- \log \left( \frac{P_{x \vert \theta_1}(x \vert \theta_1)}{P_{x \vert \theta_0}(x \vert \theta_0)} \right)\right] = -\sum_{ x \in \chi} P_{x \vert \theta_0}(x \vert \theta_0) \log \left( \frac{P_{x \vert \theta_1}(x \vert \theta_1)}{P_{x \vert \theta_0}(x \vert \theta_0)} \right) \geq 0$$
is the Kullback-Leibler divergence $D_{KL}\!\left(P_{\theta_0} \,\|\, P_{\theta_1}\right)$, which is non-negative by Gibbs' inequality, with equality to zero iff $P(x \vert \theta_1) = P(x \vert \theta_0)$.
So if this expectation is positive then, by the law of large numbers, $-{\Lambda_{\theta_1,n}}/{n}$ converges in probability to this positive constant, call it $c$
$$\lim_{n \to \infty} P\left( \left| -\frac{\Lambda_{\theta_1,n}}{n}-c \right| > \epsilon \right) = 0$$
which implies that $-{\Lambda_{\theta_1,n}}$ will converge to infinity. For any $K>0$
$$\lim_{n \to \infty} P\left( {-\Lambda_{\theta_1,n}} < K \right) = 0$$
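A quick numerical check of this argument (the Bernoulli parameters below are illustrative assumptions): the sample average of the negative log-likelihood ratio should converge to the Kullback-Leibler constant $c$.

```python
import math
import random

theta0, theta1 = 0.3, 0.5  # true parameter and a false alternative (illustrative)
rng = random.Random(3)

# Closed-form KL divergence between the two Bernoulli laws: the constant c above
kl = (theta0 * math.log(theta0 / theta1)
      + (1 - theta0) * math.log((1 - theta0) / (1 - theta1)))

def neg_log_lr(n):
    """-log likelihood ratio of theta1 vs theta0 over n Bernoulli(theta0) draws."""
    s = 0.0
    for _ in range(n):
        x = rng.random() < theta0
        p1 = theta1 if x else 1 - theta1
        p0 = theta0 if x else 1 - theta0
        s -= math.log(p1 / p0)
    return s

n = 50000
avg = neg_log_lr(n) / n
print(kl, avg)  # the average negative log-LR settles near the KL constant
```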
25,953 | When do posteriors converge to a point mass? | Adding three points to the answer by @SextusEmpiricus:
First, Doob's Theorem says that the posterior (under correct model specification) converges to the truth except on a set of parameters $\theta$ with prior probability zero. In a finite-dimensional setting you would typically have a prior that puts some mass everywhere, so that a set with prior probability zero also has Lebesgue measure zero.
Second, finite-dimensional misspecified models will typically also have (frequentist) posterior convergence to a point mass, at the $\theta_0$ which minimises the Kullback-Leibler divergence to the data-generating model. The arguments for this are analogous to the arguments for convergence of misspecified MLEs to the 'least false' model, and can be done along the lines of @SextusEmpiricus's answer.
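A small simulation sketch of this second point (the choice of Exponential data and a Gaussian working model is my own illustrative assumption): the MLE of the misspecified model heads to the KL-minimizing "least false" parameter.

```python
import random

# Sketch: the data truly come from Exponential(rate = 1), but we fit the
# misspecified Gaussian model N(mu, s2). The KL divergence
# D(Exp(1) || N(mu, s2)) is minimized at mu = E[X] = 1 and s2 = Var[X] = 1,
# so the Gaussian MLE should converge to that least-false value.
rng = random.Random(11)
n = 20000
xs = [rng.expovariate(1.0) for _ in range(n)]

mu_hat = sum(xs) / n                              # Gaussian MLE of the mean
s2_hat = sum((x - mu_hat) ** 2 for x in xs) / n   # Gaussian MLE of the variance

print(mu_hat, s2_hat)  # both settle near 1, the KL-minimizing pseudo-true values
```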
Third, this is all much more complicated for infinite-dimensional parameters, partly because sets of prior probability 1 can be quite small in infinite-dimensional spaces. For any specified $\epsilon>0$, a probability distribution places at least $1-\epsilon$ of its mass on some compact set $K_\epsilon$, and in infinite-dimensional Hilbert or Banach spaces a compact set can't contain any open ball.
In infinite-dimensional problems:
Doob's Theorem is still true, but it's less useful.
Whether or not the posterior converges to a point depends on how big (flexible, overfitting,..) the model is
It's quite possible for a correctly specified model to have a posterior converging to the wrong point mass. In fact, Freedman gave a reasonable-looking problem for which this is typical. So prior choice is more tricky than it is in finite-dimensional problems.
25,954 | When do posteriors converge to a point mass? | The necessary and sufficient condition that the posterior converges to the point mass at the true parameter is that the model is correctly specified and identified,
for any prior whose support contains the true parameter.
(Convergence here means that, under the law determined by $\theta$, for every neighborhood $U$ of $\theta$, the measure $\mu_n(U)$ of $U$ under posterior $\mu_n$ converges almost surely to $1$.)
Below is a simple argument for the case of finite parameter spaces, say $\{\theta_0, \theta_1\}$.
(The argument can be extended to the general case.
The general statement is that consistency holds except on a set of prior measure zero. The assumption that the parameter space is finite avoids measure-theoretic considerations.
The general statement comes with the usual caveat for almost-everywhere statements---one cannot say whether it holds for a given $\theta$.)
Necessity
Suppose the posterior is consistent at $\theta_0$. Then it's immediate that the model must be identified.
Otherwise, the likelihood ratio process
$$
\prod_{k = 1}^n \frac{p(x_k|\theta_1)}{p(x_k|\theta_0)}, \, n = 1, 2, \cdots
$$
equals $1$ almost surely and the posterior is equal to the prior for all $n$, almost surely.
Sufficiency
Now suppose the model is identified; we show that the likelihood ratio process converges to zero almost surely, so that the posterior is consistent.
Two things to notice here:
Under the law determined by $\theta_0$, the likelihood ratio process
$$
M_n = \prod_{k = 1}^n \frac{p(x_k|\theta_1)}{p(x_k|\theta_0)} \equiv \prod_{k = 1}^n X_k
$$
is a nonnegative martingale with $E[M_n] = 1$ for all $n$.
$p(x|\theta_1)$ is equal to $p(x|\theta_0)$ $dx$-almost everywhere with respect to reference measure $dx$ if and only if
$\rho = \int \sqrt{ p(x|\theta_1) p(x|\theta_0)} dx = 1$. In general, $0 \leq \rho \leq 1$, and $E[X_k^{1/2}] = \rho$.
Identification means $\rho < 1$. By independence, $E[M_n^{1/2}] = \rho^n$, so Markov's inequality gives, for any $\epsilon > 0$,
$$
P(M_n > \epsilon) \leq \frac{E[M_n^{1/2}]}{\epsilon^{1/2}} = \frac{\rho^n}{\epsilon^{1/2}},
$$
which is summable in $n$. By Borel-Cantelli, $M_n \stackrel{a.s.}{\rightarrow} 0$, i.e. the posterior mass at $\theta_1$ vanishes and the posterior is consistent.
The same martingale machinery gives an alternative route to necessity. Suppose the posterior is consistent, so that $M_n \stackrel{a.s.}{\rightarrow} M_{\infty} \equiv 0$, and suppose the model is not identified, i.e. $\rho = 1$. Define
$$
N_n = \prod_{k = 1}^n \frac{ X_k^{\frac12} }{\rho}= \frac{1}{\rho^n} \prod_{k = 1}^n X_k^{\frac12},
$$
which is also a nonnegative martingale, uniformly bounded in $L^2$ (because $E[N_n^2] = E[M_n] = 1$ for all $n$).
By Doob's $L^2$ inequality,
$$
E[\, \sup_n M_n\, ] \leq 4 \sup_n E[\, N_n^2 \,] < \infty.
$$
This implies that $(M_n)$ is a uniformly integrable martingale. By Doob's convergence theorem for UI martingales, $M_n = E[M_{\infty}|M_k, k \leq n] = 0$, which is impossible: $\prod_{k=1}^n p(x_k|\theta_1)$ cannot be zero almost surely if $\rho = 1$.
Comments on Sufficiency
Couple comments on the sufficiency part:
The coefficient $\rho$ was first considered by Kakutani (1948), who used it to prove the consistency of the LR test, among other things.
For finite parameter space, sufficiency can also be shown via the KL-divergence argument in the answer of @SextusEmpiricus (although I don't believe that argument extends to the general setting; the martingale property seems more primitive).
In the case of finite parameter space, both arguments make use of convexity (via the $\log$ and $\sqrt{\cdot}$ functions respectively.)
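A numerical sketch of the role of $\rho$ (the Bernoulli parametrization is an illustrative assumption): $\rho = 1$ exactly when the two likelihoods coincide, and for $\rho < 1$ the empirical mean of $X_k^{1/2}$ under $\theta_0$ estimates $\rho$, consistent with $E[M_n^{1/2}] = \rho^n \to 0$.

```python
import math
import random

def rho(p0, p1):
    """Kakutani's affinity sum_x sqrt(p(x|theta1) p(x|theta0)) for two Bernoullis."""
    return math.sqrt(p0 * p1) + math.sqrt((1 - p0) * (1 - p1))

p0, p1 = 0.3, 0.5  # illustrative parameter values
r = rho(p0, p1)

# Under theta0, E[X_k^(1/2)] = rho, so E[M_n^(1/2)] = rho^n -> 0 whenever
# rho < 1; Markov's inequality plus Borel-Cantelli then force M_n -> 0 a.s.
rng = random.Random(5)
n = 20000
total = 0.0
for _ in range(n):
    x = rng.random() < p0
    lik_ratio = (p1 if x else 1 - p1) / (p0 if x else 1 - p0)
    total += math.sqrt(lik_ratio)
mean_sqrt = total / n

print(rho(p0, p0), r, mean_sqrt)  # ~1 for identical models; ~0.979 for the pair, twice
```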
Infinite Dimensional Parameter Space
The set of priors whose support contains the true parameter can be "very small", when the parameter space is infinite dimensional.
In the example of Freedman (1965), mentioned by @ThomasLumley,
the parameter space $\Theta$ is the set of all probability measures on $\mathbb{N}$, i.e.
$$
\Theta = \{ (p_i)_{i \geq 1}: \; p_i \geq 0 \; \forall i, \mbox{ and } \sum_i p_i = 1\} \subset l^1(\mathbb{N}),
$$
endowed with the weak-* topology induced by the pairing between $l^{\infty}$ and $l^1$.
The set of priors is the set of probability measures on $\Theta$, given the topology of weak convergence.
Freedman showed that the set of (true parameter, prior)-pairs for which the posterior is consistent is "small" with respect to the product topology.
Which image format is better for machine learning .png .jpg or other?
Here is a real-life case: an accurate segmentation pipeline for a 4K video stream (here are some examples). I rely on conventional computer vision as well as on neural nets, so there is a need to prepare high-quality training sets. Also, it is often impossible to find training sets for some specific objects:
(See in action)
Long story short, about 1 TB of data is required to create a training set and do additional post-processing. I use ffmpeg and store extracted frames as JPG. There is no reason to use PNG because of the following:
video stream is already compressed
any single frame from the compressed stream will contain some artefacts
it might look a bit strange to use lossless compression for lossy compressed data
there is no reason to consume more space storing the same data
also, there is no reason to consume additional bandwidth
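For reference, the extraction step can be scripted along these lines (a sketch: the input path, frame rate, and output pattern are placeholders, not the settings actually used above; `-q:v 2` asks ffmpeg's JPEG encoder for high quality):

```python
import subprocess  # used only if you uncomment the run() call below

# Hypothetical paths/settings -- adjust for your own stream.
cmd = [
    "ffmpeg",
    "-i", "input_4k.mp4",        # source video (placeholder name)
    "-vf", "fps=5",              # sample 5 frames per second
    "-q:v", "2",                 # JPEG quality, scale 2..31 (lower is better)
    "frames/frame_%06d.jpg",     # output file pattern
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually extract the frames
```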
Let's do a quick test (really quick). Same 4K stream, same settings, extracting a frame as PNG and as JPG. If you see any difference -- good for you :) Any real-life problem will likely be related to a compressed video stream because bandwidth is critical.
PNG
JPG
Finally
If you need more details -- use 4K (or 8K if you need even more valuable details). Pretty much all the examples I have are based on 4K input. FPS is what actually matters when you try to deal with real-life scenes and fast-moving objects.
(see in action)
It goes without saying that camera and light conditions are the most critical preconditions for getting a proper level of detail.
JPG performs better for photorealistic images, PNG for drawings with sharp lines and solid colors. For frames of a video feed I would definitely use JPG.
UPDATE: Because video is usually compressed in a way similar to JPG, it is unlikely that quality will degrade further than it already has. And dataset size is not unimportant either.
As others have indicated in comments, PNG as a lossless format is better suited than JPEG. That being said, as an input to your pipeline the answer would be "neither". Almost always you have to preprocess your images, e.g. crop them, subtract the mean, etc. In a typical scenario, I would run my pipeline ~100 times before I am happy with the results. Preprocessing your images every time would simply be a waste of resources, especially since reading images takes time.
A much better idea is to use HDF5, a format very well suited to working with numerical data and slices. Typically you would:
Load an image.
Preprocess it.
Save to HDF5. You'd create two datasets: one for the image data, second for the label (if present).
Rinse and repeat.
If HDF5 is new to you and seems somewhat abstract, below you will find an example of a class that can be used to store your images to disk in this format.
import h5py

class HDF5Writer(object):
    """Buffered writer that stores images and labels in an HDF5 file."""

    def __init__(self, dims, output_path, data_key="images", buf_size=1000):
        self.db = h5py.File(output_path, "w")
        self.data = self.db.create_dataset(data_key, dims, dtype="float")
        self.labels = self.db.create_dataset("labels", (dims[0],), dtype="int")
        self.bufSize = buf_size
        self.buffer = {"data": [], "labels": []}
        self.idx = 0  # index of the next row to write

    def add(self, rows, labels):
        self.buffer["data"].extend(rows)
        self.buffer["labels"].extend(labels)
        if len(self.buffer["data"]) >= self.bufSize:
            self.flush()

    def flush(self):
        # write the buffered rows to disk and reset the buffer
        i = self.idx + len(self.buffer["data"])
        self.data[self.idx:i] = self.buffer["data"]
        self.labels[self.idx:i] = self.buffer["labels"]
        self.idx = i
        self.buffer = {"data": [], "labels": []}

    def store_class_labels(self, classLabels):
        # store human-readable class names as variable-length strings
        dt = h5py.special_dtype(vlen=str)
        labelSet = self.db.create_dataset("label_names", (len(classLabels),), dtype=dt)
        labelSet[:] = classLabels

    def close(self):
        if len(self.buffer["data"]) > 0:
            self.flush()
        self.db.close()
I recommend reading a tutorial on h5py; the extra learning curve is worth it. The buffer size simply controls after how many images the data is written to the drive.
How to use it (in pseudo-code):
data_dim = (num_of_images, x_dim, y_dim, num_of_colours)
writer = HDF5Writer(data_dim, output_path)
for image_path in paths:
    image = read_image(image_path)
    image = preprocess(image)
    writer.add([image], [label])
writer.close()
The heavy downside of HDF5 is that it will blow up your dataset size ~50x if used without any compression. At a certain level it may seem not feasible to use it, but then you will likely have to use specialised compute infrastructure anyway (and then storage space won't be an issue).
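To get a feel for that blow-up, it helps to compare raw array sizes (a back-of-the-envelope sketch of mine; the `dtype="float"` dataset above stores each pixel as a 64-bit float):

```python
import numpy as np

h, w = 2160, 3840                                  # one 4K frame
frame_f64 = np.zeros((h, w, 3), dtype=np.float64)  # what dtype="float" stores
frame_u8 = np.zeros((h, w, 3), dtype=np.uint8)     # raw decoded pixels

print(frame_f64.nbytes / 1e6)  # ~199 MB per frame as float64
print(frame_u8.nbytes / 1e6)   # ~25 MB per frame as raw uint8
# a JPEG of the same frame is typically a few MB, hence the huge factor
```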
Degrees of Freedom In Sample Variance
The connection is related to the eigenvalues of the centering matrix
The "why" of the connection at issue here actually goes down quite deeply into mathematical territory. In my view, it is related to the eigenvalues of the centering matrix, which have connections to the rank of that matrix. Before I get into the demonstration of this issue, I'll note that you can find a broad discussion of the centering matrix and its connection to the concept of degrees-of-freedom in Section 4 of O'Neill (2020). The material I give here is largely an exposition of what is shown in that section of that paper.
Preliminaries: Showing the connection between Bessel's correction and the degrees-of-freedom requires a bit of setup, and it also requires us to state the formal definition of degrees-of-freedom. To do this, we note that the sample variance is formed from the deviations of the values from their sample mean, which is a linear transformation of the sample vector. We can write this (using upper-case for random variables) as:
$$S^2 = \frac{1}{n-1} ||\mathbf{R}||^2
\quad \quad \quad \quad \quad
\mathbf{R} = \mathbf{X} - \bar{\mathbf{X}} = \mathbf{C} \mathbf{X},$$
where $\mathbf{C}$ is the centering matrix. The centering matrix $\mathbf{C}$ is a projection matrix, with $n-1$ eigenvalues equal to one, and one eigenvalue equal to zero. Its rank is the sum of its eigenvalues, which is $\text{rank} \ \mathbf{C} = n-1$.
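These properties are easy to verify numerically (a quick sanity check I am adding, not part of the original derivation):

```python
import numpy as np

n = 6
C = np.eye(n) - np.ones((n, n)) / n       # centering matrix C = I - (1/n) J

eigvals = np.sort(np.linalg.eigvalsh(C))  # one eigenvalue 0, the other n-1 equal to 1
print(np.round(eigvals, 10))
print(np.linalg.matrix_rank(C))           # rank C = n - 1
print(np.trace(C))                        # trace C = n - 1 (projection: rank = trace)

x = np.arange(1.0, n + 1.0)
print(np.allclose(C @ x, x - x.mean()))   # True: Cx is the vector of deviations
```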
The degrees-of-freedom: Formally, the degrees-of-freedom for the deviation vector is the dimension of the space of allowable values $\mathscr{R} \equiv \{ \mathbf{r} = \mathbf{C} \mathbf{x} | \mathbf{x} \in \mathbb{R}^n \}$, which is:
$$\begin{equation} \begin{aligned}
DF = \dim \mathscr{R}
&= \dim \{ \mathbf{r} = \mathbf{C} \mathbf{x} | \mathbf{x} \in \mathbb{R}^n \} \\[6pt]
&= \text{rank} \ \mathbf{C} \\[6pt]
&= n-1. \\[6pt]
\end{aligned} \end{equation}$$
This establishes the degrees-of-freedom formally by connection to the eigenvalues of the centering matrix. We now connect this directly to the expected value of the squared-norm of the deviations that appears in the sample variance statistic.
Establishing the connection: The squared-norm of the deviations is a quadratic form using the centering matrix, and it can be simplified using the spectral form of the centering matrix. The centering matrix can be written in its spectral form as $\mathbf{C} = \mathbf{u}^* \mathbf{\Delta} \mathbf{u}$ where $\mathbf{u}$ is the (orthonormal) normalised DFT matrix and $\mathbf{\Delta} = \text{diag}(\lambda_0,\lambda_1,...,\lambda_{n-1})$ is the diagonal matrix of the eigenvalues of the centering matrix (which we leave unstated for now). Using this form we can write the squared-norm of the deviations as:
$$\begin{equation} \begin{aligned}
||\mathbf{R}||^2
&= \mathbf{R}^\text{T} \mathbf{R} \\[6pt]
&= (\mathbf{C} \mathbf{x})^\text{T} (\mathbf{C} \mathbf{x}) \\[6pt]
&= \mathbf{x}^\text{T} \mathbf{C} \mathbf{x} \\[6pt]
&= \mathbf{x}^\text{T} \mathbf{u}^* \mathbf{\Delta} \mathbf{u} \mathbf{x} \\[6pt]
&= (\mathbf{u} \mathbf{x})^* \mathbf{\Delta} (\mathbf{u} \mathbf{x}). \\[6pt]
\end{aligned} \end{equation}$$
Now, the vector $\mathbf{u} \mathbf{x} = (\mathscr{F}_\mathbf{x}(0), \mathscr{F}_\mathbf{x}(1/n), ..., \mathscr{F}_\mathbf{x}(1-1/n))$ is the DFT of the sample data, so we can expand the above quadratic form to obtain:
$$||\mathbf{R}||^2 = (\mathbf{u} \mathbf{x})^* \mathbf{\Delta} (\mathbf{u} \mathbf{x}) = \sum_{i=0}^{n-1} \lambda_i \cdot ||\mathscr{F}_\mathbf{x}(i/n)||^2.$$
(Note: Once we substitute the eigenvalues, we will see that this is just a manifestation of the discrete version of the Plancherel theorem.) Since $X_1,...,X_n$ are IID with variance $\sigma^2$, it follows that $\mathbb{E}(||\mathscr{F}_\mathbf{x}(i/n)||^2) = \sigma^2$ for all $i=0,1,...,n-1$. Substitution of this result gives the expected value:
$$\begin{equation} \begin{aligned}
\mathbb{E}(||\mathbf{R}||^2)
&= \mathbb{E} \Big( \sum_{i=0}^{n-1} \lambda_i \cdot ||\mathscr{F}_\mathbf{x}(i/n)||^2 \Big) \\[6pt]
&= \sum_{i=0}^{n-1} \lambda_i \cdot \mathbb{E}(||\mathscr{F}_\mathbf{x}(i/n)||^2) \\[6pt]
&= \sum_{i=0}^{n-1} \lambda_i \cdot \sigma^2 \\[6pt]
&= \sigma^2 \sum_{i=0}^{n-1} \lambda_i \\[6pt]
&= \sigma^2 \cdot \text{tr} \ \mathbf{C} \\[6pt]
&= \sigma^2 \cdot \text{rank} \ \mathbf{C} = \sigma^2 \cdot DF. \\[6pt]
\end{aligned} \end{equation}$$
(Since the centering matrix is a projection matrix, its rank is equal to its trace.) Hence, to obtain an unbiased estimator for $\sigma^2$ we use the estimator:
$$\hat{\sigma}^2 \equiv \frac{||\mathbf{R}||^2}{DF} = \frac{1}{n-1} \sum_{i=1}^n (x_i-\bar{x})^2.$$
This establishes a direct connection between the denominator of the sample variance and the degrees-of-freedom in the problem. As you can see, this connection arises through the eigenvalues of the centering matrix --- these eigenvalues determine the rank of the matrix, and thereby determine the degrees-of-freedom, and they affect the expected value of the squared-norm of the deviation vector. Going through the derivation of these results also gives a bit more detail about the behaviour of the deviation vector.
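Both key identities above, the DFT decomposition of $||\mathbf{R}||^2$ and the unbiasedness of dividing by $n-1$, can be checked numerically (my sketch; `np.fft.fft` with `norm="ortho"` plays the role of the orthonormal DFT matrix $\mathbf{u}$):

```python
import numpy as np

rng = np.random.default_rng(1)

# 1) ||R||^2 equals the DFT power with the i = 0 term dropped (lambda_0 = 0)
n = 8
x = rng.normal(size=n)
r = x - x.mean()                      # deviation vector R = Cx
F = np.fft.fft(x, norm="ortho")       # orthonormal DFT, i.e. u x
lhs = np.sum(r ** 2)
rhs = np.sum(np.abs(F[1:]) ** 2)      # eigenvalues: lambda_0 = 0, all others 1
print(np.isclose(lhs, rhs))           # True

# 2) E(||R||^2) = sigma^2 (n - 1), so dividing by n - 1 is unbiased
sigma2, m, reps = 4.0, 10, 200_000
samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, m))
s2 = samples.var(axis=1, ddof=1)      # Bessel-corrected sample variance
print(s2.mean())                      # close to sigma^2 = 4
```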
The "why" of the connection at issue here actually goes down quite deeply into mathematical territory. In my view, it is related t | Degrees of Freedom In Sample Variance
The connection is related to the eigenvalues of the centering matrix
The "why" of the connection at issue here actually goes down quite deeply into mathematical territory. In my view, it is related to the eigenvalues of the centering matrix, which have connections to the rank of that matrix. Before I get into the demonstration of this issue, I'll note that you can find a broad discussion of the centering matrix and its connection to the concept of degrees-of-freedom in Section 4 of O'Neill (2020). The material I give here is largely an exposition of what is shown in that section of that paper.
Preliminaries: Showing the connection between Bessel's correction and the degrees-of-freedom requires a bit of setup, and it also requires us to state the formal definition of degrees-of-freedom. To do this, we note that the sample variance is formed from the deviations of the values from their sample mean, which is a linear transformation of the sample vector. We can write this (using upper-case for random variables) as:
$$S^2 = \frac{1}{n-1} ||\mathbf{R}||^2
\quad \quad \quad \quad \quad
\mathbf{R} = \mathbf{X} - \bar{\mathbf{X}} = \mathbf{C} \mathbf{X},$$
where $\mathbf{C}$ is the centering matrix. The centering matrix $\mathbf{C}$ is a projection matrix, with $n-1$ eigenvalues equal to one, and one eigenvalue equal to zero. Its rank is the sum of its eigenvalues, which is $\text{rank} \ \mathbf{C} = n-1$.
The degrees-of-freedom: Formally, the degrees-of-freedom for the deviation vector is the dimension of the space of allowable values $\mathscr{R} \equiv \{ \mathbf{r} = \mathbf{C} \mathbf{x} | \mathbf{x} \in \mathbb{R}^n \}$, which is:
$$\begin{equation} \begin{aligned}
DF = \dim \mathscr{R}
&= \dim \{ \mathbf{r} = \mathbf{C} \mathbf{x} | \mathbf{x} \in \mathbb{R}^n \} \\[6pt]
&= \text{rank} \ \mathbf{C} \\[6pt]
&= n-1. \\[6pt]
\end{aligned} \end{equation}$$
This establishes the degrees-of-freedom formally by connection to the eigenvalues of the centering matrix. We now connect this directly to the expected value of the squared-norm of the deviations that appears in the sample variance statistic.
Establishing the connection: The squared-norm of the deviations is a quadratic form using the centering matrix, and it can be simplified using the spectral form of the centering matrix. The centering matrix can be written in its spectral form as $\mathbf{C} = \mathbf{u}^* \mathbf{\Delta} \mathbf{u}$ where $\mathbf{u}$ is the (orthonormal) normalised DFT matrix and $\mathbf{\Delta} = \text{diag}(\lambda_0,\lambda_1,...,\lambda_{n-1})$ is the diagonal matrix of the eigenvalues of the centering matrix (which we leave unstated for now). Using this form we can write the squared-norm of the deviations as:
$$\begin{equation} \begin{aligned}
||\mathbf{R}||^2
&= \mathbf{R}^\text{T} \mathbf{R} \\[6pt]
&= (\mathbf{C} \mathbf{x})^\text{T} (\mathbf{C} \mathbf{x}) \\[6pt]
&= \mathbf{x}^\text{T} \mathbf{C} \mathbf{x} \\[6pt]
&= \mathbf{x}^\text{T} \mathbf{u}^* \mathbf{\Delta} \mathbf{u} \mathbf{x} \\[6pt]
&= (\mathbf{u} \mathbf{x})^* \mathbf{\Delta} (\mathbf{u} \mathbf{x}). \\[6pt]
\end{aligned} \end{equation}$$
Now, the matrix $\mathbf{u} \mathbf{x} = (\mathscr{F}_\mathbf{x}(0), \mathscr{F}_\mathbf{x}(1/n), ..., \mathscr{F}_\mathbf{x}(1-1/n))$ is the DFT of the sample data, so we can expand the above quadratic form to obtain:
$$||\mathbf{R}||^2 = (\mathbf{u} \mathbf{x})^* \mathbf{\Delta} (\mathbf{u} \mathbf{x}) = \sum_{i=0}^{n-1} \lambda_i \cdot ||\mathscr{F}_\mathbf{x}(i/n)||^2.$$
(Note: Once we substitute the eigenvalues, we will see that this is a just a manifestation of the discrete version of the Plancherel theorem.) Since $X_1,...,X_n$ are IID with variance $\sigma^2$, it follows that $\mathbb{E}(||\mathscr{F}_\mathbf{x}(i/n)||^2) = \sigma^2$ for all $i=0,1,...,n-1$. Substitution of this result gives the expected value:
$$\begin{equation} \begin{aligned}
\mathbb{E}(||\mathbf{R}||^2)
&= \mathbb{E} \Big( \sum_{i=0}^{n-1} \lambda_i \cdot ||\mathscr{F}_\mathbf{x}(i/n)||^2 \Big) \\[6pt]
&= \sum_{i=0}^{n-1} \lambda_i \cdot \mathbb{E}(||\mathscr{F}_\mathbf{x}(i/n)||^2) \\[6pt]
&= \sum_{i=0}^{n-1} \lambda_i \cdot \sigma^2 \\[6pt]
&= \sigma^2 \sum_{i=0}^{n-1} \lambda_i \\[6pt]
&= \sigma^2 \cdot \text{tr} \ \mathbf{C} \\[6pt]
&= \sigma^2 \cdot \text{rank} \ \mathbf{C} = \sigma^2 \cdot DF. \\[6pt]
\end{aligned} \end{equation}$$
(Since the centering matrix is a projection matrix, its rank is equal to its trace.) Hence, to obtain and unbiased estimator for $\sigma^2$ we use the estimator:
$$\hat{\sigma}^2 \equiv \frac{||\mathbf{R}||^2}{DF} = \frac{1}{n-1} \sum_{i=1}^n (x_i-\bar{x})^2.$$
This establishes a direct connection between the denominator of the sample variance and the degrees-of-freedom in the problem. As you can see, this connection arises through the eigenvalues of the centering matrix --- these eigenvalues determine the rank of the matrix, and thereby determine the degrees-of-freedom, and they affect the expected value of the squared-norm of the deviation vector. Going through the derivation of these results also gives a bit more detail about the behaviour of the deviation vector. | Degrees of Freedom In Sample Variance
The connection is related to the eigenvalues of the centering matrix
The "why" of the connection at issue here actually goes down quite deeply into mathematical territory. In my view, it is related t |
After thinking about the question more, I think the first proof of correctness on Wikipedia is intuitive enough for me.
It argues that $\mathbb{E}[(x_1 - x_2)^2] = 2 \sigma^2$, where $x_1$ and $x_2$ are iid samples from a distribution with variance $\sigma^2$. BUT, when we explicitly sample $n$ such elements, there becomes a $\dfrac{1}{n}$ chance we sample the same element, making $\mathbb{E}_{\text{sample}}[(x_1 - x_2)^2] = \dfrac{n - 1}{n} \mathbb{E}_{\text{population}}[(x_1 - x_2)^2]$, resulting in the need to multiply $\mathbb{E}_{\text{sample}}[(x_1 - x_2)^2]$ by a factor of $\dfrac{n}{n -1}$ (the Bessel correction) to get an unbiased estimator. To my taste, this proof really illuminates how the fact that once you choose an element from the sample of size $n$, there are only $(n - 1)$ other (different) options actually plays a role in Bessel's correction. I was originally confused by this proof because I wasn't sure what we would do given that the population would also have size $N$, but now I understand that it isn't a good idea to think of the population as having "size" at all, just a PDF.
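This argument simulates nicely (a sketch of mine, using standard normal data as an arbitrary example): pairs drawn with replacement from a sample of size $n$ give $\mathbb{E}[(x_1-x_2)^2]/2 \approx \frac{n-1}{n}\sigma^2$, and multiplying by $\frac{n}{n-1}$ recovers $\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 5, 400_000
sigma2 = 1.0                        # population variance (standard normal data)

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

# draw a pair (with replacement!) from each sample of size n
i = rng.integers(0, n, size=reps)
j = rng.integers(0, n, size=reps)
rows = np.arange(reps)
pair_est = 0.5 * np.mean((samples[rows, i] - samples[rows, j]) ** 2)

print(pair_est)                     # ~ (n-1)/n * sigma^2 = 0.8
print(pair_est * n / (n - 1))       # Bessel's correction recovers ~ sigma^2
```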
What is difference between keras embedding layer and word2vec?
Embeddings (in general, not only in Keras) are methods for learning vector representations of categorical data. They are most commonly used for working with textual data. Word2vec and GloVe are two popular frameworks for learning word embeddings. What embeddings do is simply learn to map the one-hot encoded categorical variables to vectors of floating point numbers of smaller dimensionality than the input vectors. For example, a one-hot vector representing a word from a vocabulary of size 50 000 is mapped to a real-valued vector of size 100. Then, the embedding vector is used as features in whatever you want to use it for.
one-hot vector $\to$ real-valued vector $\to$ (additional layers of the network)
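Mechanically, the embedding step in that diagram is just a matrix lookup: multiplying a one-hot vector by the embedding matrix selects a row (a minimal numpy sketch of mine; the sizes reuse the 50 000 / 100 example above):

```python
import numpy as np

vocab_size, embed_dim = 50_000, 100   # sizes from the example above
rng = np.random.default_rng(3)
W = rng.normal(size=(vocab_size, embed_dim))  # the trainable embedding matrix

word_id = 123                          # an arbitrary word index
one_hot = np.zeros(vocab_size)
one_hot[word_id] = 1.0

# multiplying the one-hot vector by W just selects row word_id of W --
# an embedding layer implements this lookup directly, skipping the matmul
print(np.allclose(one_hot @ W, W[word_id]))  # True
```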
The difference is how Word2vec is trained, as compared to the "usual" learned embeddings layers. Word2vec is trained to predict if a word belongs to the context, given other words, e.g. to tell if "milk" is a likely word given the sentence beginning "The cat was drinking...". By doing so, we expect Word2vec to learn something about the language, as in the quote "You shall know a word by the company it keeps" by John Rupert Firth. Using the above example, Word2vec learns that "cat" is something that is likely to appear together with "milk", but also with "house", or "pet", so it is somehow similar to "dog". As a consequence, embeddings created by Word2vec, or similar models, learn to represent words with similar meanings using similar vectors.
On the other hand, with embeddings learned as a layer of a neural network, the network may be trained to predict whatever you want. For example, you can train your network to predict the sentiment of a text. In such a case, the embeddings would learn features that are relevant for this particular problem. As a side effect, they can also learn some general things about the language, but the network is not optimized for such a task. Using the "cat" example, embeddings trained for sentiment analysis may learn that "cat" and "dog" are similar, because people often say nice things about their pets.
In practical terms, you can use the pretrained Word2vec embeddings as features of any neural network (or other algorithm). They can give you an advantage if your data is small, since the pretrained embeddings were trained on large volumes of text. On the other hand, there are examples showing that learning the embeddings from your data, optimized for a particular problem, may be more efficient (Qi et al, 2018).
Qi, Y., Sachan, D. S., Felix, M., Padmanabhan, S. J., & Neubig, G. (2018). When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation? arXiv preprint arXiv:1804.06323.
25,961 | What is difference between keras embedding layer and word2vec? | For a Keras Embedding layer, you are using supervised learning. My guess is that the embedding learned here for the independent variable will map directly to the dependent variable.
However, word2vec or GloVe is an unsupervised learning problem. Here, the embedding learned depends on the data you are feeding to the model.
25,962 | What is difference between keras embedding layer and word2vec?
http://colah.github.io/posts/2014-07-NLP-RNNs-Representations -> this blog post clearly explains how a Keras Embedding layer is trained. Hope this helps.
Word2Vec is a pre-trained embedding model using a specific architecture.
The Embedding layer and Word2Vec are analogous to a CNN layer and ImageNet pre-trained models, respectively.
25,963 | Why is the Fisher information the inverse of the (asymptotic) covariance, and vice versa? | Never mind, I just realized that this question was stupid.
Specifically, we have that by the Multivariate Central Limit Theorem (which doesn't depend on the MLE result in any way, so this is not circular reasoning): $$\sqrt{n}(\hat{\theta}_n - \theta) = V_n \overset{d}{\implies} \mathscr{N}(0, \Sigma) $$ where $\Sigma$ is the asymptotic covariance matrix of $V_n$. Then, by the MLE result, we also have that $$ V_n = \sqrt{n}(\hat{\theta}_n - \theta) \overset{d}{\implies}\mathscr{N}(0, I(\theta)^{-1}) \,.$$
Comparing the equations (and since limits in distribution are unique), it obviously follows that $$\Sigma = I(\theta)^{-1}\, \iff \Sigma^{-1} = I(\theta) \,. $$ So this doesn't actually require the Cramer-Rao Lower bound to hold for $V_n$ (it seems to me).
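As a quick numeric check of $\Sigma^{-1} = I(\theta)$ in a case where both sides are known in closed form (a Bernoulli($p$) model, which is not part of the original post): the MLE $\hat p$ has asymptotic variance $p(1-p)$, and the Fisher information, computed here by finite differences, comes out as $1/(p(1-p))$. The step size $h$ is arbitrary.

```python
import math

def loglik(p, x):
    # Per-observation Bernoulli log-likelihood.
    return x * math.log(p) + (1 - x) * math.log(1 - p)

def fisher_info(p, h=1e-5):
    # I(p) = E[-d^2/dp^2 log f(X; p)] over X ~ Bernoulli(p),
    # with the second derivative taken by a central finite difference.
    info = 0.0
    for x, prob in ((1, p), (0, 1 - p)):
        d2 = (loglik(p + h, x) - 2 * loglik(p, x) + loglik(p - h, x)) / h ** 2
        info += -d2 * prob
    return info

p = 0.3
asymptotic_var = p * (1 - p)   # Sigma for the Bernoulli MLE
assert abs(fisher_info(p) - 1 / asymptotic_var) < 1e-3
```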
25,964 | Reinforcement learning in non stationary environment [closed] | Q1: Are there common or accepted methods for dealing with non stationary environment in Reinforcement learning in general?
Most basic RL agents are online, and online learning can usually deal with non-stationary problems. In addition, update rules for state value and action value estimators in control problems are usually written for non-stationary targets, because the targets already change as the policy improves. This is nothing complicated, simply the use of a learning rate $\alpha$ in updates when estimating values, which is effectively an exponentially weighted moving average (recent targets receive geometrically larger weights) as opposed to averaging over all history in an unweighted fashion.
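A toy sketch of such a constant step-size update tracking a target that shifts halfway through (the numbers are invented):

```python
def update(q, target, alpha=0.1):
    # Q <- Q + alpha * (target - Q): unrolling this shows past targets
    # get geometrically decaying weights, so recent ones dominate.
    return q + alpha * (target - q)

q = 0.0
for t in range(1000):
    target = 1.0 if t < 500 else 5.0   # the "environment" shifts at t=500
    q = update(q, target)

assert abs(q - 5.0) < 1e-3   # the estimate has tracked the new target
```

For comparison, an unweighted average over all history would end up near 3.0, stuck between the two regimes.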
However, this addresses longer-term non-stationarity, such as the problem changing between episodes, or over an even longer time scale. Your description looks more like you wish to change the reward structure based on actions the agent has taken, within a short timescale. That dynamic response to actions is better framed as a different more complex MDP, not as "non-stationarity" within a simpler MDP.
An agent cannot learn changes to the environment that it has not yet sampled, so changing reward structure will not prevent the agent from returning to previously-visited states. Unless you are using something like an RNN in the agent, the agent will not have a "memory" of what happened before in the episode other than whatever is represented in the current state (arguably, using an RNN makes the hidden layer of the RNN part of the state). Across multiple episodes, if you use a tabular Q-learning agent, then the agent will simply learn that certain states have low value; it will not be able to learn that second or third visits to the state cause that effect, because it has no way to represent that knowledge. It will not be able to adjust to the change fast enough to learn online and mid-episode.
Q2: In my gridworld, I have the reward function changing when a state is visited. All I want my agent to learn is "Don't go back unless you really need to", however this makes the environment non-stationary.
If that's all you need the agent to learn, perhaps this can be encouraged by a suitable reward structure. Before you can do that, you need to understand yourself what "really need to" implies, and how tight that has to be logically. You may be OK though just by assigning some penalty for visiting any location that the agent has already or recently visited.
Can/Should this very simple rule be incorporated in the MDP model, and how?
Yes, you should add the information about visited locations into the state. This immediately will make your state model more complex than a simple grid world, increasing the dimensionality of the problem, but it is unavoidable. Most real-world problems very quickly outgrow the toy examples provided to teach RL concepts.
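As a hypothetical sketch of that augmentation (the penalty values are made up), the set of visited cells simply becomes part of the state, so the reward function itself never has to change:

```python
def step(state, move, revisit_penalty=-1.0, step_cost=-0.1):
    # State = (position, frozenset of visited cells). Because the visited
    # set is inside the state, the reward depends only on (state, action)
    # and the process stays Markov.
    pos, visited = state
    new_pos = (pos[0] + move[0], pos[1] + move[1])
    reward = step_cost + (revisit_penalty if new_pos in visited else 0.0)
    return (new_pos, visited | {new_pos}), reward

s = ((0, 0), frozenset({(0, 0)}))
s, r1 = step(s, (1, 0))    # fresh cell: only the step cost
s, r2 = step(s, (-1, 0))   # back to the start: penalized
assert r1 == -0.1 and abs(r2 - (-1.1)) < 1e-9
```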
One alternative is to frame the problem as a Partially Observable Markov Decision Process (POMDP). In that case the "true" state would still include all the necessary history in order to calculate the rewards (and as this is a toy problem on a computer you would still have to represent it somehow), but the agent can attempt to learn from restricted knowledge of the state, just whatever you let it observe. In general this is a much harder approach than expanding the state representation, and I would not recommend it here. However, if you find the idea interesting, you could use your problem to explore POMDPs. Here is a recent paper (from Google's Deep Mind team, 2015) that looks at two RL algorithms combined with RNNs to solve POMDPs.
Q3: I have been looking into Q-learning with experience replay as a solution to dealing with non stationary environments, as it decorrelates successive updates. Is this the correct use of the method or it is more to deal with making learning more data efficient?
Experience replay will not help with non-stationary environments. In fact it could make performance worse in them. However, as already stated, your problem is not really about a non-stationary environment, but about handling more complex state dynamics.
What you may need to do is look into function approximation, if the number of states increases to a large enough number. For instance, if you want to handle any back-tracking and have a complex reward-modifying rule that tracks each visited location, then your state might change from a single location number to a map showing visited locations. So for example it might go from $64$ states for an $8 \times 8$ grid world to a $2^{64}$ state map showing visited squares. This is far too high to track in a value table, so you will typically use a neural network (or a convolutional neural network) to estimate state values instead.
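The $2^{64}$ figure comes from packing the visited-cells map of an $8 \times 8$ grid into one bit per cell; a small sketch of that encoding:

```python
GRID = 8  # an 8x8 grid -> 64 cells -> 2**64 possible visited-maps

def mark_visited(bitmap, row, col):
    # Set the bit for cell (row, col).
    return bitmap | (1 << (row * GRID + col))

def was_visited(bitmap, row, col):
    return bool((bitmap >> (row * GRID + col)) & 1)

b = mark_visited(mark_visited(0, 0, 0), 7, 7)
assert was_visited(b, 0, 0) and was_visited(b, 7, 7)
assert not was_visited(b, 3, 4)
assert b == (1 << 0) | (1 << 63)
```

Each of those $2^{64}$ bitmaps is a distinct state component, which is exactly why a value table is hopeless here and a function approximator is needed.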
With a function estimator, experience replay is very useful, as without it, the learning process is likely to be unstable. The recent DQN approach for playing Atari games uses experience replay for this reason.
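A minimal sketch of such a buffer (the capacity and transition format are arbitrary): old transitions are evicted once the buffer is full, and training batches are drawn uniformly at random, which breaks the correlation between consecutive updates:

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        # A deque with maxlen silently drops the oldest transition
        # once the buffer is full.
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling decorrelates successive training batches.
        return random.sample(list(self.buffer), batch_size)

buf = ReplayBuffer(capacity=100)
for i in range(150):
    buf.push((i, "action", 0.0, i + 1))   # (s, a, r, s') placeholder

batch = buf.sample(8)
assert len(buf.buffer) == 100
assert all(t[0] >= 50 for t in batch)     # only the newest 100 remain
```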
25,965 | Reinforcement learning in non stationary environment [closed] | Q1: Q-learning is an online reinforcement learning algorithm that works well with a stationary environment. It may also be used with a non-stationary model, on the condition that the model (reward function and transition probabilities) does not change quickly.
25,966 | Non-parametric test equivalent to mixed ANOVA? | One of the standard techniques in such situations is due to Brunner and Langer [1]. These non-parametric mixed-effects models can deal with multiple within-subject factors and a between-subject factor. In some fields (e.g. dental medicine), they are very popular. In R, they are implemented in the nparLD package.
[1] Brunner E, Domhof S, Langer F (2002). Nonparametric Analysis of longitudinal Data in Factorial Experiments. John Wiley & Sons, New York.
25,967 | Non-parametric test equivalent to mixed ANOVA? | Considering the categorical variable, the design matrix for the explanatory variables should be adjusted such that the design is estimable.
On the other hand, for the non-normal error distribution, there are options for you before redirecting to nonparametric methods.
1) You can depict and measure the amount of nonnormality of your data. The classical F test is robust to nonnormality to some extent. If the underlying distribution is reasonably symmetric you can feel safe. If you observe significant skewness on the error distribution and do not feel safe with the F result, proceed to the next option.
2) I suggest using robust ANOVA methods. Please read through the following paper. I personally implemented Tiku's MML method which is robust under normality and is also flexible such that you can incorporate nonnormal distributions in the Maximum Likelihood context.
http://www.tandfonline.com/doi/abs/10.1080/03610918508812486?journalCode=lssp20
3) If neither of the above works for you, for a straightforward quick solution, you can try data transformation and nonparametrics selectively.
Good Luck!
25,968 | Non-parametric test equivalent to mixed ANOVA? | The ezPerm function from the ez package provides permutation-based versions of different ANOVAs, including mixed designs. It does not assume any normality.
25,969 | The benefit of "unskewing" skewed data | Nick Cox makes many good points in his comments. Let me put some of them (and some of my own) into answer format:
First, ordinary least squares regression makes no assumptions about the dependent variable being normally distributed; it makes assumptions about the errors being normal, and the errors are estimated by the residuals. However, when the dependent variable is as skewed as yours is, the residuals usually will be too.
Second, the emphasis on transformation for statistical reasons that you find in many introductory books is because the book wants to show how a person can use OLS regression in different situations (and, unfortunately, it's true that some professors in non-statistics courses don't know about alternatives). In older books, it may also be because some of the alternative methods were too computer intensive to be usable.
Third, I think data should be transformed for substantive reasons, not statistical ones. Here, and for price data more generally, it often makes sense to take the log. Two reasons are that 1) People often think about prices in multiplicative terms rather than additive ones - the difference between \$2,000,000 and \$2,001,000 is really small. The difference between \$2,000 and \$2,100 is much bigger. 2) When you take logs, you can't get a negative predicted price.
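Both points can be checked with a line of arithmetic (using the made-up prices above):

```python
import math

# 1) Multiplicative thinking: similar absolute gaps, very different
#    relative (log-scale) gaps.
gap_expensive = math.log(2_001_000) - math.log(2_000_000)  # log(1.0005)
gap_cheap = math.log(2_100) - math.log(2_000)              # log(1.05)
assert gap_cheap > 50 * gap_expensive

# 2) A model fit on log(price) back-transforms through exp(), which is
#    always positive, so predicted prices can never be negative.
assert math.exp(-100.0) > 0
```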
Fourth, if you decide not to transform (for some reason) then there are methods that do not assume that the residuals are normal. Two prominent ones are quantile regression and robust regression.
25,970 | Why is the likelihood in Kalman filter computed using filter results instead of smoother results? | To answer your question: you can use the smoothing density. But you don't have to. Jarle Tufto's answer has the decomposition that you're using. But there are others.
Using the Kalman Recursions
Here you're evaluating the likelihood as
$$
f(y_1, \ldots, y_n) = f(y_1)\prod_{i=2}^nf(y_i|y_1, \ldots, y_{i-1}).
$$
However, means and variances don't always fully define probability distributions in general. The following is the decomposition that you're using to go from filtering distributions $f(x_{i-1}|y_1,\ldots,y_{i-1})$ to conditional likelihoods $f(y_i|y_1,\ldots,y_{i-1})$:
$$
f(y_i|y_1, \ldots, y_{i-1}) = \iint f(y_i|x_i)f(x_i|x_{i-1})f(x_{i-1}|y_1, \ldots, y_{i-1})dx_{i} dx_{i-1} \tag{1}.
$$
Here $f(x_i|x_{i-1})$ is the state transition density...part of the model, and $f(y_i|x_i)$ is the observation density...part of the model again. In your question you write these as $x_{t+1}=Fx_{t}+v_{t+1}$ and $y_{t}=Hx_{t}+Az_{t}+w_{t}$ respectively. It's the same thing.
When you get the one step ahead state prediction distribution, that's computing $\int f(x_i|x_{i-1})f(x_{i-1}|y_1, \ldots, y_{i-1}) dx_{i-1}$. When you integrate again, you obtain (1) completely. You write that density out completely in your question, and it's the same thing.
Here you're only using decompositions of probability distributions, and assumptions about the model. This likelihood calculation is an exact calculation. There isn't anything discretionary that you can use to do this better or worse.
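To see that exactness concretely, here is a sketch for a scalar local-level model (all numbers invented): the filter's prediction-error decomposition and the log-density of the joint Gaussian of $y_{1:n}$, computed directly, agree to machine precision. The direct route needs an $n \times n$ covariance matrix; the filter gets the same number in $O(n)$.

```python
import numpy as np

def kalman_loglik(ys, m0, p0, q, r):
    """Log-likelihood via the prediction-error decomposition for the
    local-level model x_{t+1} = x_t + N(0, q), y_t = x_t + N(0, r)."""
    ll, m, p = 0.0, m0, p0               # prior on x_1
    for y in ys:
        s = p + r                        # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (y - m) ** 2 / s)
        k = p / s                        # Kalman gain
        m, p = m + k * (y - m), (1 - k) * p
        p = p + q                        # predict x_{t+1}
    return ll

def direct_loglik(ys, m0, p0, q, r):
    """Same quantity from the joint Gaussian of y, feasible for small n:
    Cov(y_i, y_j) = p0 + q * (min(i, j) - 1) + r * 1{i == j}."""
    n = len(ys)
    idx = np.arange(1, n + 1)
    cov = p0 + q * (np.minimum.outer(idx, idx) - 1) + r * np.eye(n)
    resid = np.asarray(ys, dtype=float) - m0
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (n * np.log(2 * np.pi) + logdet
                   + resid @ np.linalg.solve(cov, resid))

ys = [0.4, 1.1, 0.9, 1.8, 2.2]
args = (0.0, 1.0, 0.5, 0.3)              # m0, p0, q, r (made-up values)
assert np.isclose(kalman_loglik(ys, *args), direct_loglik(ys, *args))
```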
Using the EM Algorithm
To my knowledge, there is no other way to evaluate the likelihood directly in this kind of state space model. However, you can still do maximum likelihood estimation by evaluating a different function: you can use the EM algorithm. In the Expectation step (E-Step) you would compute
$$
\int f(x_1, \ldots, x_n|y_1,\ldots y_n) \log f(y_1,\ldots,y_n,x_1, \ldots,x_n) dx_{1:n} = E_{smooth}[\log f(y_1,\ldots,y_n,x_1, \ldots,x_n)].
$$
Here $f(y_1,\ldots,y_n,x_1, \ldots,x_n)$ is the "complete data" likelihood, and you're taking the expectation of the log of that with respect to the joint smoothing density. What often happens is that, because you're taking the log of this complete data likelihood, the terms split up into sums, and because of linearity of the expectation operator, you're taking expectations with respect to the marginal smoothing distributions (the ones you mention in your question).
Other things
I've read in places that the EM is a "more stable" way to maximize the likelihood, but I've never really seen this point argued well, nor have I seen this word "stable" defined at all, and I haven't really examined it further. Neither of these algorithms gets around the local/global maxima ordeal. I personally tend to use the Kalman filter more often, just out of habit.
It's true that smoothed estimates of the state have smaller variance typically than filtering, so I guess you're right to have some intuition about this, but you're not really using the states. The likelihood you're trying to maximize is not a function of the states.
Using the Kalman Recursions
Here | Why is the likelihood in Kalman filter computed using filter results instead of smoother results?
To answer your question: you can use the smoothing density. But you don't have to. Jarle Tufto's answer has the decomposition that you're using. But there are others.
Using the Kalman Recursions
Here you're evaluating the likelihood as
$$
f(y_1, \ldots, y_n) = f(y_1)\prod_{i=2}^nf(y_i|y_1, \ldots, y_{i-1}).
$$
However, means and variances don't always fully define probability distributions in general. The following is the decomposition that you're using to go from filtering distributions $f(x_{i-1}|y_1,\ldots,y_{i-1})$ to conditional likelihoods $f(y_i|y_1,\ldots,y_{i-1})$:
$$
f(y_i|y_1, \ldots, y_{i-1}) = \iint f(y_i|x_i)f(x_i|x_{i-1})f(x_{i-1}|y_1, \ldots, y_{i-1})dx_{i} dx_{i-1} \tag{1}.
$$
Here $f(x_i|x_{i-1})$ is the state transition density...part of the model, and $f(y_i|x_i)$ is the observation density...part of the model again. In your question you write these as $x_{t+1}=Fx_{t}+v_{t+1}$ and $y_{t}=Hx_{t}+Az_{t}+w_{t}$ respectively. It's the same thing.
When you get the one step ahead state prediction distribution, that's computing $\int f(x_i|x_{i-1})f(x_{i-1}|y_1, \ldots, y_{i-1}) dx_{i-1}$. When you integrate again, you obtain (1) completely. You write that density out completely in your question, and it's the same thing.
Here you're only using decompositions of probability distributions, and assumptions about the model. This likelihood calculation is an exact calculation. There isn't anything discretionary that you can use to do this better or worse.
Using the EM Algorithm
To my knowledge, there is no other way to evaluate the likelihood directly in this kind of state space model. However, you can still do maximum likelihood estimation by evaluating a different function: you can use the EM algorithm. In the Expectation step (E-Step) you would compute
$$
\int f(x_1, \ldots, x_n|y_1,\ldots y_n) \log f(y_1,\ldots,y_n,x_1, \ldots,x_n) dx_{1:n} = E_{smooth}[\log f(y_1,\ldots,y_n,x_1, \ldots,x_n)].
$$
Here $f(y_1,\ldots,y_n,x_1, \ldots,x_n)$ is the "complete data" likelihood, and you're taking the expectation of the log of that with respect to the joint smoothing density. What often happens is that, because you're taking the log of this complete data likelihood, the terms split up into sums, and because of linearity of the expectation operator, you're taking expectations with respect to the marginal smoothing distributions (the ones you mention in your question).
Other things
I've read in places that the EM is a "more stable" way to maximize the likelihood, but I've never really seen this point argued well, nor have I seen this word "stable" defined at all, but also I haven't really examined this further. Neither of these algorithms get around the local/global maxima ordeal. I personally tend to use the Kalman more often just out of habit.
It's true that smoothed estimates of the state have smaller variance typically than filtering, so I guess you're right to have some intuition about this, but you're not really using the states. The likelihood you're trying to maximize is not a function of the states. | Why is the likelihood in Kalman filter computed using filter results instead of smoother results?
To answer your question: you can use the smoothing density. But you don't have to. Jarle Tufto's answer has the decomposition that you're using. But there are others.
Using the Kalman Recursions
Here |
25,971 | Why is the likelihood in Kalman filter computed using filter results instead of smoother results? | In general, by the product rule, the exact likelihood can be written
$$
f(y_1,\dots,y_n)=f(y_1)\prod_{t=2}^n f(y_t|y_1,\dots,y_{t-1}).
$$
From the assumption of the state space model, it follows that the expectation vector and variance matrix of each $y_t$ conditional on past observations can be expressed as
\begin{align}
E(y_t|y_1,\dots,y_{t-1})
&= E(Hx_{t}+Az_{t}+w_{t}|y_1,\dots,y_{t-1})
\\&= HE(x_{t}|y_1,\dots,y_{t-1})+Az_{t}+E(w_{t})
\\&= H\hat x_{t|t-1}+Az_{t},
\end{align}
since $E(w_t)=0$, and
\begin{align}
\mathrm{Var}(y_t|y_1,\dots,y_{t-1})
&= \mathrm{Var}(Hx_{t}+Az_{t}+w_{t}|y_1,\dots,y_{t-1})
\\&= H\mathrm{Var}(x_{t}|y_1,\dots,y_{t-1})H'+ \mathrm{Var}(w_t)
\\&= HP_{t|t-1}H'+R.
\end{align}
So this gives you the exact likelihood without computing any smoothed estimates.
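As a small numeric illustration of these two conditional moments: all matrix values below are made-up toy numbers, and numpy simply stands in for whatever linear-algebra tooling you use.

```python
import numpy as np

H = np.array([[1.0, 0.0]])                    # observation matrix (1x2)
A = np.array([[0.5]])                         # coefficient on the exogenous z_t
R = np.array([[0.3]])                         # Var(w_t)
x_pred = np.array([2.0, -1.0])                # predicted state mean from the filter
P_pred = np.array([[0.4, 0.1], [0.1, 0.2]])   # predicted state covariance P_{t|t-1}
z = np.array([1.0])

mean = H @ x_pred + A @ z                     # E(y_t | y_1..y_{t-1})
var = H @ P_pred @ H.T + R                    # Var(y_t | y_1..y_{t-1})
```

These one-step-ahead moments define the Gaussian factor entering the likelihood product for that observation.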
While you of course could use the smoothed estimates, which indeed are better estimates of the unknown states, this would not give you the likelihood function. In effect you would be using the observed value of $y_t$ to estimate its own expected value, so it seems likely that this would lead to some bias in the resulting estimates.
25,972 | Why is the likelihood in Kalman filter computed using filter results instead of smoother results? | I think a better answer as to "why" the smoothing distribution is not used (typically) is efficiency. It is in principle straightforward to calculate the (smoothing) marginal likelihood in a leave-one-out sense as follows. Delete observation j, run the Kalman smoother on the remaining data. Then evaluate the likelihood of the unseen y(j). Repeat this for all j. Sum up the log-likelihoods. Faster versions of this work with (randomized) blocks of held-out samples (like k-fold CV). Notice that this scheme requires a more general implementation of the Kalman filter/smoother which can arbitrarily skip measurement updates where required. The backward/smoothing pass does not access the measurements (RTS algorithm anyway) and remains the same.
If the time-series is "long enough" there is likely little useful benefit in doing this since the filtering likelihood "burns off" its initial transient. But if the dataset is short the more expensive smoothing likelihood may be worth it. A fixed-lag smoother could be an in-between solution.
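The leave-one-out scheme described above can be sketched as follows. Note that run_smoother and heldout_logpdf are hypothetical helper functions (a smoother that can skip measurement updates, and the Gaussian predictive density of a held-out observation under the smoothed state), not a real API.

```python
def loo_smoothing_loglik(y, run_smoother, heldout_logpdf):
    """Sum of log predictive densities of each y[j] under a smoother fit without it."""
    total = 0.0
    for j in range(len(y)):
        # smoother pass over all data, skipping the measurement update at index j
        smoothed_states = run_smoother(y, skip={j})
        # evaluate the held-out observation under its smoothed predictive density
        total += heldout_logpdf(y[j], smoothed_states[j])
    return total
```

This runs the smoother once per observation (or once per held-out block in the k-fold variant), which is where the extra cost relative to the single filtering pass comes from.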
25,973 | Do the Determinants of Covariance and Correlation Matrices and/or Their Inverses Have Useful Interpretations? | I was able to cobble together some general principles, use cases and properties of these matrices from a desultory set of sources; few of them address these topics directly, with most merely mentioned in passing. Since determinants represent signed volumes, I expected those pertaining to these four types of matrices would translate into multidimensional association measures of some sort; this turned out to be true to some extent, but a few of them exhibit interesting properties:
Covariance Matrices:
• In the case of a Gaussian distribution, the determinant indirectly measures differential entropy, which can be construed as dispersion of the data points across the volume of the matrix. See tmp's answer at What does Determinant of Covariance Matrix give? for details.
• Alexander Vigodner's answer in the same thread says it also possesses the property of positivity.
• The covariance matrix determinant can be interpreted as generalized variance. See the NIST Statistics Handbook page 6.5.3.2. Determinant and Eigenstructure.
Inverse Covariance Matrices:
• It's equivalent to the inverse of the generalized variance that the covariance matrix determinant represents; maximizing the determinant of the inverse covariance matrix can apparently be used as a substitute for calculating the determinant of the Fisher information matrix, which can be used in optimizing experiment design. See kjetil b halvorsen's answer to the CV thread Determinant of Fisher Information
Correlation Matrices:
• These are much more interesting than covariance matrix determinants, in that the overall amount of correlation increases as the determinant approaches 0 and vanishes as it approaches 1. This is the opposite of ordinary correlation coefficients, in which higher numbers indicate greater positive correlation. "The determinant of the correlation matrix will equal 1.0 only if all correlations equal 0, otherwise the determinant will be less than 1. Remember that the determinant is related to the volume of the space occupied by the swarm of data points represented by standard scores on the measures involved. When the measures are uncorrelated, this space is a sphere with a volume of 1. When the measures are correlated, the space occupied becomes an ellipsoid whose volume is less than 1." See this set of Tulane course notes and this Quora page.
• Another citation for this unexpected behavior: "The determinant of a correlation matrix becomes zero or near zero when some of the variables are perfectly correlated or highly correlated with each other." See Rakesh Pandey's question How to handle the problem of near zero determinant in computing reliability using SPSS?
• A third reference: "Having a very small det(R) only means that you have some variables that are almost linearly dependent."
Carlos Massera Filho's answer at this CrossValidated thread.
• The determinants also follow a scale from 0 to 1, which differs from the -1 to 1 scale that correlation coefficients follow. They also lack the sign that an ordinary determinant may exhibit in expressing the orientation of a volume. Whether or not the correlation determinant still represents some notion of directionality was not addressed in any of the literature I found, though.
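A quick numerical illustration of this inverse relationship, using made-up correlation values:

```python
import numpy as np

def corr2(rho):
    """2x2 correlation matrix with off-diagonal rho; its determinant is 1 - rho**2."""
    return np.array([[1.0, rho], [rho, 1.0]])

# uncorrelated -> determinant 1; near-perfect correlation -> determinant near 0
dets = [np.linalg.det(corr2(r)) for r in (0.0, 0.5, 0.95)]
```

The determinant shrinks monotonically toward 0 as the correlation strengthens, matching the quoted volume interpretation.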
Inverse Correlation Matrices:
• A Google search for the combined terms "inverse correlation matrix" and "determinant" turned up only 50 hits, so apparently they're not commonly applied in statistical reasoning.
• Apparently minimization of the inverse correlation determinant can be useful in some situations, given that a patent for echo cancellation using adaptive filters contains a regularization procedure designed to do just that. See p. 5 in this patent document.
• p. 5 of Robust Technology with Analysis of Interference in Signal Processing (available on Google Books previews) by Telman Aliev seems to suggest that "poor stipulation" of a correlation matrix is related to instability in the determinant of the inverse correlation matrices. In other words, wild changes in its determinant in proportion to small changes in its constituent elements are related to how much information is captured by the correlation matrices.
There may be other properties and use cases of these determinants not listed here; I'll just post these for the sake of completeness and to provide an answer to the question I posed, in case someone else runs into practical uses for these interpretations (as I have with correlation determinants).
25,974 | What is no ' information rate ' algorithm? | Suppose that you have response $y_i$ and covariates $x_i$ for $i = 1, \ldots, n$, and some loss function $\mathcal{L}$. The no information error rate of a model $f$ is the average loss of $f$ over all combinations of $y_i$ and $x_j$:
$${1 \over n^2} \sum_{i=1}^n \sum_{j=1}^n \mathcal{L}\left(y_i, f(x_j)\right)$$
If you have a vector of predictions predicted and a vector of responses response, you can calculate the no info error rate by generating all the combinations of predicted and response and then evaluating some function loss on these resulting vectors.
In R, assuming RMSE loss and using the tidyr library, this looks like:
predicted <- 1:3
response <- 4:6
loss <- function(x, y) sqrt(mean((x - y)^2))
combos <- tidyr::crossing(predicted, response)
loss(combos$predicted, combos$response)
In Python this looks like:
import numpy as np
predicted = np.arange(1, 4)
response = np.arange(4, 7)
# all (predicted, response) pairs: meshgrid, then flatten into two aligned rows
combos = np.array(np.meshgrid(predicted, response)).reshape(2, -1)
def loss(x, y):
return np.sqrt(np.mean((x - y) ** 2))
loss(combos[0], combos[1])
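As a cross-check (added here, not part of the original answer), the vectorized computation above agrees with the double sum in the formula:

```python
import numpy as np

predicted = np.arange(1, 4)
response = np.arange(4, 7)
# direct double loop over all (response, prediction) pairs, as in the formula
sq_errors = [(y - p) ** 2 for y in response for p in predicted]
rmse_all_pairs = float(np.sqrt(np.mean(sq_errors)))
```

Both routes average the loss over all n^2 pairings of responses and predictions.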
25,975 | What is no ' information rate ' algorithm? | The no information error rate is the error rate when the input and output are independent. You can compute it by evaluating the prediction rule on all possible combinations of the target and the features, i.e. as
$$\hat \gamma = \frac{1}{N}\sum_{i=1}^N\sum_{j=1}^NL\left(y_i, \hat f(x_j)\right).$$
25,976 | What is no ' information rate ' algorithm? | The no information rate is the accuracy of a naive classifier, which needs to be exceeded in order to show that the model we created is useful. We calculate the model's accuracy and then compare it with the naive classifier's: the accuracy should be higher than the no information rate (the naive classifier's accuracy) for the model to be considered significant.
25,977 | Why I am getting different predictions for manual polynomial expansion and using the R `poly` function? | As you correctly note, the original difference is because in the first case you use the "raw" polynomials while in the second case you use the orthogonal polynomials. Therefore if the latter lm call were altered into: fit3<-lm(y~ poly(x,degree=2, raw = TRUE) -1) we would get the same results between fit and fit3. The reason why we get the same results in this case is "trivial"; we fit the exact same model as we fitted with fit<-lm(y~.-1,data=x_exp), no surprises there.
One can easily check that the model matrices by the two models are the same all.equal( model.matrix(fit), model.matrix(fit3) , check.attributes= FALSE) # TRUE).
What is more interesting is why you will get the same plots when using an intercept. The first thing to notice is that, when fitting a model with an intercept
In the case of fit2 we simply move the model predictions vertically; the actual shape of the curve is the same.
On the other hand including an intercept in the case of fit results into not only a different line in terms of vertical placement but with a whole different shape overall.
We can easily see that by simply appending the following fits on the existing plot.
fit_b<-lm(y~. ,data=x_exp)
yp=predict(fit_b,xp_exp)
lines(xp,yp, col='green', lwd = 2)
fit2_b<-lm(y~ poly(x,degree=2, raw = FALSE) )
yp=predict(fit2_b,data.frame(x=xp))
lines(xp,yp,col='blue')
OK... Why were the no-intercept fits different while the intercept-including fits are the same? The catch is once again the orthogonality condition.
In the case of fit_b the model matrix used contains non-orthogonal elements, the Gram matrix crossprod( model.matrix(fit_b) ) is far from diagonal; in the case of fit2_b the elements are orthogonal (crossprod( model.matrix(fit2_b) ) is effectively diagonal).
As such, in the case of fit, when we expand it to include an intercept in fit_b, we change the off-diagonal entries of the Gram matrix $X^TX$ and thus the resulting fit is different as a whole (different curvature, intercept, etc.) in comparison with the fit provided by fit. In the case of fit2 though, when we expand it to include an intercept as in fit2_b, we only append a column that is already orthogonal to the columns we had; the orthogonality is against the constant polynomial of degree 0. This simply results in vertically moving our fitted line by the intercept. This is why the plots are different.
The interesting side-question is why fit_b and fit2_b are the same; after all, the model matrices of fit_b and fit2_b are not the same at face value. Here we just need to remember that ultimately fit_b and fit2_b carry the same information. fit2_b is just a linear combination of fit_b, so essentially their resulting fits will be the same. The differences observed in the fitted coefficients reflect the linear recombination of the values of fit_b in order to make them orthogonal. (See G. Grothendieck's answer here too for a different example.)
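The same span argument can be demonstrated outside R as well. Here is a small numpy sketch under stated assumptions: simulated data, and QR-orthogonalized columns standing in for poly's orthogonal basis.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 1 + 2 * x - 3 * x ** 2 + rng.normal(0, 0.1, 50)

X_raw = np.column_stack([np.ones_like(x), x, x ** 2])  # intercept + raw polynomials
Q, _ = np.linalg.qr(X_raw)                             # orthogonal basis, same span

# with the intercept included, both bases span the same space -> identical fits
yhat_raw = X_raw @ np.linalg.lstsq(X_raw, y, rcond=None)[0]
yhat_orth = Q @ np.linalg.lstsq(Q, y, rcond=None)[0]

# dropping the first column changes the span of the orthogonal basis -> different fits
yhat_raw_noint = X_raw[:, 1:] @ np.linalg.lstsq(X_raw[:, 1:], y, rcond=None)[0]
yhat_orth_noint = Q[:, 1:] @ np.linalg.lstsq(Q[:, 1:], y, rcond=None)[0]
```

This mirrors the R behavior: identical predictions with an intercept, divergent predictions without one.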
25,978 | Interpreting TukeyHSD output in R | p adj is the p-value adjusted for multiple comparisons using the R function TukeyHSD(). For more information on why and how the p-value should be adjusted in those cases, see here and here.
Yes you can interpret this like any other p-value, meaning that none of your comparisons are statistically significant. You can also check ?TukeyHSD and then under Value it says:
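To illustrate the general idea of a multiplicity adjustment, here is a sketch using a simple Bonferroni correction for transparency; note this is an assumption-laden stand-in, since TukeyHSD's own adjustment is based on the studentized range distribution and is not shown here.

```python
def bonferroni_adjust(p_values):
    """Multiply each raw p-value by the number of comparisons, capped at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

raw = [0.30, 0.04, 0.01]        # hypothetical raw pairwise p-values
adjusted = bonferroni_adjust(raw)
```

Whatever the specific method, the adjusted p-values are never smaller than the raw ones, which is why an unadjusted "significant" comparison can become non-significant after adjustment.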
A list of class c("multicomp", "TukeyHSD"), with one component for each term requested in which. Each component is a matrix with columns diff giving the difference in the observed means, lwr giving the lower end point of the interval, upr giving the upper end point and p adj giving the p-value after adjustment for the multiple comparisons. | Interpreting TukeyHSD output in R | p adj is the p-value adjusted for multiple comparisons using the R function TukeyHSD(). For more information on why and how the p-value should be adjusted in those cases, see here and here.
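The "p adj" column can be reproduced by hand: Tukey's adjustment evaluates each pairwise studentized range statistic against the studentized range distribution. Although TukeyHSD() is an R function, the same computation can be sketched in Python with SciPy (the group data below are made up for illustration):

```python
import numpy as np
from scipy.stats import studentized_range

# Three equal-sized groups of hypothetical measurements (n = 5 per group).
groups = [
    np.array([5.1, 4.8, 5.6, 5.0, 4.9]),
    np.array([5.3, 5.7, 5.4, 5.9, 5.5]),
    np.array([4.2, 4.6, 4.4, 4.1, 4.7]),
]
k = len(groups)                       # number of groups
n = len(groups[0])                    # per-group sample size
N = k * n
means = np.array([g.mean() for g in groups])
# Pooled within-group variance: the ANOVA mean squared error.
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)
df = N - k

# 'p adj' for each pair: survival function of the studentized range statistic.
p_adj_pairs = {}
for i in range(k):
    for j in range(i + 1, k):
        q = abs(means[i] - means[j]) / np.sqrt(mse / n)
        p_adj_pairs[(i, j)] = studentized_range.sf(q, k, df)

for pair, p in p_adj_pairs.items():
    print(pair, round(p, 4))
```

Each adjusted p-value accounts for all k(k-1)/2 comparisons at once, which is why it can be read like an ordinary p-value, as the answer says.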
25,979 | Interpreting TukeyHSD output in R | The p adj value tells you whether there is a significant difference between comparisons. To know if there is a statistical difference, first and foremost you have to check the p-value from when you ran your ANOVA test. If that p-value is greater than 0.05, then there is no need to run post hoc tests such as Tukey's, because you already know that there are no significant differences. I am sure that in this example the p-value was greater than 0.05 for the ANOVA test, which is why, when you ran the post hoc Tukey test, no significant differences were observed.
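The workflow this answer describes (run the omnibus ANOVA first, and follow up with a post hoc test only if it is significant) can be sketched in Python; `f_oneway` stands in for R's aov(), and the group data are invented:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Three hypothetical groups drawn from the same distribution.
a, b, c = (rng.normal(10.0, 2.0, size=20) for _ in range(3))

f_stat, p_value = f_oneway(a, b, c)   # omnibus one-way ANOVA
if p_value > 0.05:
    print(f"ANOVA p = {p_value:.3f} > 0.05: no post hoc test needed.")
else:
    print(f"ANOVA p = {p_value:.3f} <= 0.05: follow up with TukeyHSD().")
```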
25,980 | Why is isometric log-ratio transformation preferred over the additive(alr) or centered(clr) with compositional data? | Continuing off of marianess's answer, clr is really not suitable due to the collinearity issue. In other words, if you try to make inferences with clr-transformed data, you may fall into the trap of trying to infer increases/decreases of variables, which you can never do with proportions in the first place.
The ilr transformation attempts to resolve this by sticking to ratios of partitions, since ratios are stable quantities. These partitions can be represented as trees, where each internal node in the tree represents the log ratio of the geometric means of its subtrees. These log ratios of subtrees are known as balances.
I'd also recommend checking out these publications, since they all have nice explanations of how to interpret the ilr transform.
http://msystems.asm.org/content/2/1/e00162-16
https://peerj.com/articles/2969/
https://elifesciences.org/content/6/e21887
Here is an IPython notebook that goes into the details of how to calculate balances given a tree.
I also gave a description of how to do this with the modules in scikit-bio here, in case you're curious.
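As a concrete sketch of balances, here is one common ilr basis (so-called pivot coordinates) implemented with plain NumPy; each coordinate is the scaled log ratio of one part against the geometric mean of the remaining parts. This is one valid choice of partition, not the only ilr:

```python
import numpy as np

def ilr_pivot(x):
    """Pivot (ilr) coordinates of a composition x with all parts > 0."""
    x = np.asarray(x, dtype=float)
    D = x.size
    z = np.empty(D - 1)
    for j in range(D - 1):
        # Geometric mean of the parts 'below' x[j] in the partition tree.
        gm = np.exp(np.mean(np.log(x[j + 1:])))
        z[j] = np.sqrt((D - j - 1) / (D - j)) * np.log(x[j] / gm)
    return z

comp = np.array([0.2, 0.3, 0.5])
print(ilr_pivot(comp))        # D-1 = 2 balances from a 3-part composition
print(ilr_pivot(10 * comp))   # identical: ratios ignore the overall scale
```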
25,981 | Why is isometric log-ratio transformation preferred over the additive(alr) or centered(clr) with compositional data? | There is a problem with the clr() transformation. It does preserve the same number of variables after you transform the data, but in the case of clr() you get singular data (in fact, a singular covariance matrix):
$y_1 + \dots + y_D = 0$.
And as you might know, some statistical analyses cannot be performed on singular data.
The ilr() transformation will reduce the number of your variables: if you had a D-dimensional space, after ilr() you will end up with D-1 dimensions. As a result, your transformed data are nothing more than (log-)ratios.
I recommend to read this paper here:
http://is.muni.cz/do/rect/habilitace/1431/Hron/habilitace/15_Filzmoser_et_al__2010_.pdf
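The singularity is easy to demonstrate: clr-transformed rows sum to zero, so the covariance matrix of D clr variables has rank at most D-1. A small NumPy check on random compositions (data generated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
# 50 random 4-part compositions (each row sums to 1).
raw = rng.gamma(2.0, size=(50, 4))
comps = raw / raw.sum(axis=1, keepdims=True)

# clr: log of each part minus the mean log of the row.
logc = np.log(comps)
clr = logc - logc.mean(axis=1, keepdims=True)

row_sums = clr.sum(axis=1)                 # ~0 for every row: y1 + ... + yD = 0
cov = np.cov(clr, rowvar=False)            # 4x4, but only rank 3
print(np.abs(row_sums).max(), np.linalg.matrix_rank(cov))
```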
25,982 | Why is isometric log-ratio transformation preferred over the additive(alr) or centered(clr) with compositional data? | I would go with ALR, as it makes more sense: you use one component as a baseline, or reference, and then see what the others do in relation to that one.
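A minimal NumPy sketch of alr with the last component as the baseline (any component could serve as the reference):

```python
import numpy as np

def alr(x, ref=None):
    """Additive log-ratio: log of every part relative to a baseline part."""
    x = np.asarray(x, dtype=float)
    if ref is None:
        ref = x.size - 1                   # default baseline: last component
    keep = [i for i in range(x.size) if i != ref]
    return np.log(x[keep] / x[ref])

comp = np.array([0.2, 0.3, 0.5])
print(alr(comp))          # [log(0.2/0.5), log(0.3/0.5)]
print(alr(7 * comp))      # unchanged: only the ratios to the baseline matter
```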
25,983 | Contradictory results of ADF and KPSS unit root tests | Please have a look at my answer to the following question. What is the difference between a stationary test and a unit root test? Here is the most important part of the answer:
If you have a time-series data set as it usually appears in econometric time series, I propose you should apply both a unit root test ((Augmented) Dickey-Fuller or Phillips-Perron, depending on the structure of the underlying data) and a KPSS test.
Case 1: Unit root test: you can't reject $H_0$; KPSS test: reject $H_0$. Both imply that the series has a unit root.
Case 2: Unit root test: reject $H_0$; KPSS test: don't reject $H_0$. Both imply that the series is stationary.
Case 3: If we can't reject under both tests: the data do not give enough observations.
Case 4: Reject unit root, reject stationarity: both hypotheses are component hypotheses - heteroskedasticity in the series may make a big difference; if there is a structural break it will affect inference.
Edit:
In case 4 a more profound approach would be to apply a variance ratio test. The variance ratio test gives you a value between 0 and 1 if the data are "between stationarity and a unit root". As the variance ratio test does not only affirm or reject a null hypothesis, but gives you a continuous value, it can capture mixtures in more detail. It may also give you insight when visualising the data.
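The four cases map directly onto a small decision helper. Assuming p-values from an ADF test ($H_0$: unit root) and a KPSS test ($H_0$: stationarity), e.g. as returned by statsmodels' adfuller and kpss, the logic is:

```python
def classify(adf_p, kpss_p, alpha=0.05):
    """Combine ADF (H0: unit root) and KPSS (H0: stationarity) outcomes."""
    adf_rejects = adf_p < alpha       # evidence against a unit root
    kpss_rejects = kpss_p < alpha     # evidence against stationarity
    if not adf_rejects and kpss_rejects:
        return "case 1: unit root"
    if adf_rejects and not kpss_rejects:
        return "case 2: stationary"
    if not adf_rejects and not kpss_rejects:
        return "case 3: not enough observations"
    return "case 4: conflicting - check heteroskedasticity / structural breaks"

print(classify(0.40, 0.01))   # case 1: unit root
print(classify(0.01, 0.40))   # case 2: stationary
```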
25,984 | Contradictory results of ADF and KPSS unit root tests | "Here, the data seems to be accept level stationarity and trend stationarity as the p-values are less than 0.05."
The $H_0$ for KPSS is that the data is stationary. So shouldn't we reject the $H_0$ (p-values < 0.05) and infer non-stationarity?
25,985 | Contradictory results of ADF and KPSS unit root tests | You are not comparing the test statistics (the ADF statistic and the KPSS statistic) against their critical values and are just looking at the p-values. If you check, it is still possible that they give contradictory results. But the result will be that ADF says the series is stationary while KPSS says it is non-stationary. This means that a unit root does not exist and the stationarity is trend stationarity.
25,986 | Contradictory results of ADF and KPSS unit root tests | It would have been good to have a reproducible example to test and understand better.
That said, I'm speaking from the depths of my ignorance as I'm learning these days, and I had exactly the same problem as yours.
I ended up finding that in my time series I was having multiple rows with the same timestamp (e.g. [[2020-05-22, 20], [2020-05-21, 10], [2020-05-12, 5],[2020-05-22, 20], [2020-05-21, 10], [2020-05-12, 5]]).
This was due to the presence of another feature that made those duplicate entries necessary for other calculations.
But it turned out that having those duplicates was creating issues, which I solved by summing up the values before having the new dataframe digested by both KPSS and ADF.
At that point, both of them revealed the non-stationarity.
Perhaps the OP case is the same.
Kudos to the other answer, from which I guess I learnt something too.
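The fix described above (summing the values that share a timestamp before running the tests) is a one-liner in pandas; the frame below mirrors the duplicated rows from the example:

```python
import pandas as pd

# Duplicated timestamps, as in the example above.
df = pd.DataFrame({
    "date": ["2020-05-22", "2020-05-21", "2020-05-12",
             "2020-05-22", "2020-05-21", "2020-05-12"],
    "value": [20, 10, 5, 20, 10, 5],
})
df["date"] = pd.to_datetime(df["date"])

# Sum per timestamp before handing the series to KPSS/ADF.
series = df.groupby("date")["value"].sum().sort_index()
print(series)
```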
25,987 | Self organizing maps vs k-means, when the SOM has a lot of nodes | The idea behind a SOM is that you're mapping high-dimensional vectors onto a smaller dimensional (typically 2-D) space. You can think of it as clustering, like in K-means, with the added difference that vectors that are close in the high-dimensional space also end up being mapped to nodes that are close in 2-D space.
SOMs therefore are said to "preserve the topology" of the original data, because the distances in 2-D space reflect those in the high-dimensional space. K-means also clusters similar data points together, but its final "representation" is hard to visualise because it's not in a convenient 2-D format.
A typical example is with colours, where each of the data points is a 3-D vector that represents an R,G,B colour. When mapped to a 2-D SOM you can see regions of similar colours begin to develop, which is the topology of the colour space. I like this tutorial as an explanation, with added code snippets.
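The mapping itself is just a nearest-codebook lookup that returns a 2-D grid position. A NumPy sketch with a made-up 4x5 grid of 3-D (R,G,B-like) nodes:

```python
import numpy as np

rng = np.random.default_rng(1)
rows, cols, dim = 4, 5, 3
codebook = rng.random((rows, cols, dim))   # one 3-D vector per grid node

def best_matching_unit(v):
    """Return the 2-D grid coordinate of the node closest to input v."""
    d = np.linalg.norm(codebook - v, axis=2)
    return np.unravel_index(np.argmin(d), (rows, cols))

print(best_matching_unit(np.array([0.9, 0.1, 0.1])))  # a reddish input's cell
```

After training, nearby cells hold similar vectors, so similar colours land in neighbouring grid positions, which is the preserved topology.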
25,988 | Self organizing maps vs k-means, when the SOM has a lot of nodes | You've already accepted an answer, but I think that part of the answer needs to be made clearer...
A SOM is always topological in nature. It is essentially embedding a 2D manifold in the higher-dimensional space of your data.
At an intuitive level, both k-means and SOM are moving nodes towards denser areas of your space. With k-means, the nodes move freely, with no direct relationship to each other. And a node that is responsible for zero or one data points is degenerate and the k-means algorithm must avoid this situation.
With SOM, when a node moves towards the data, it pulls neighboring nodes in the 2D manifold along with it. This naturally maintains a topology embedded in the data space. And a node can be responsible for 0 or 1 data points with no problem. (Such nodes are sitting in empty space, pulled by their neighbors in all directions. In some sense, they might be an artifact of a manifold, but in another sense they are interpolating between denser regions of the space.)
So there isn't some kind of phase change where a SOM goes from not topological to topological. Rather, as the number of SOM nodes increases, you get a higher-resolution manifold.
If you fit a 2x3 (6 node) SOM to the Iris data, you'll get something much more like k-means with 6 nodes than if you fit a 10x15 (150 node) SOM. So I think of it this way: a low-resolution SOM looks more like the non-topological k-means that is doing a similar task, but a high-resolution SOM's topological nature will be more visible.
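The "pulling" can be sketched as a single SOM update step: the winner and, to a decaying degree, its grid neighbours all move toward the input. A hedged NumPy sketch (grid size, learning rate and neighbourhood width are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols, dim = 10, 15, 4
codebook = rng.random((rows, cols, dim))
# 2-D grid coordinate of every node, shape (rows, cols, 2).
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                            indexing="ij"), axis=-1)

def som_update(x, lr=0.5, sigma=2.0):
    """One step: the winner moves toward x and drags its grid neighbours."""
    d = np.linalg.norm(codebook - x, axis=2)
    bmu = np.unravel_index(np.argmin(d), (rows, cols))
    # Neighbourhood strength decays with distance on the 2-D grid,
    # not with distance in the input space.
    gd2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-gd2 / (2.0 * sigma ** 2))[..., None]
    codebook[:] = codebook + lr * h * (x - codebook)
    return bmu

x = rng.random(dim)
before = codebook.copy()
bmu = som_update(x)
print("winner distance to x:",
      np.linalg.norm(before[bmu] - x), "->", np.linalg.norm(codebook[bmu] - x))
```

The winner moves halfway to the input; a node far away on the grid barely moves, even if its codebook vector happens to be close to x in input space.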
25,989 | Self organizing maps vs k-means, when the SOM has a lot of nodes | The case for a low number of nodes is similar to K-means because you are forcing every vector to match an existing node, acting as a prototype/centroid, without any margin for divergence.
In the case of a high number of nodes, there is room for slowly transitioning zones, which mimic the space among prototypes/centroids, thus modeling the transformed topological space among the samples. This way, the relative distances are 'in some sense' preserved.
25,990 | Linear Mixed Models and ANOVA | 1) What is the difference between conducting a Linear Mixed Models and an ANOVA?
ANOVA models have the feature of at least one continuous outcome variable and one or more categorical covariates. Linear mixed models are a family of models that also have a continuous outcome variable, one or more random effects and one or more fixed effects (hence the name mixed effects model, or just mixed model).
There are sub-classes of ANOVA models that allow for repeated measures: a mixed ANOVA, which has at least one within-subjects (categorical) covariate and at least one between-subjects (categorical) covariate, and repeated measures ANOVA, which has at least one within-subjects (categorical) covariate.
2) In which circumstances do we conduct a Linear Mixed Models Analysis?
when we have a continuous outcome variable
when data are clustered (for example, repeated observation on participants or students within classes)
when we have sufficient number of clusters to enable estimation of the random effect (variance)
when we are not interested in the "effects" of the clusters themselves.
Additionally, ANOVA cannot be used (though there may be a work-around), and mixed models offer a much better alternative, when
we have missing data, or
the experimental design is unbalanced, or
we have multiple (cross-classified or nested) random effects, or
we would like to allow the effect of covariates to differ among each level of a grouping variable (random coefficients or random slopes), or
when we have an outcome variable that can't be plausibly considered as continuous (such as count data and nominal data) - in which case we would use a generalised linear mixed model.
3) How do we obtain such a graph using the above model (Mixed Model or ANOVA) in SPSS to compare the "Low" and "High" condition of the product?
The figure appears to be a simple plot of means for 4 groups. Since it appears to be purely descriptive, it isn't something to be obtained from a model.
It appears to be typical of the type of data analysed with a two-way ANOVA - that is, a model with a continuous outcome variable, and two categorical covariates.
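For question 3, the plotted points are only the four cell means of the 2x2 layout. A sketch with made-up data (the factor names and ratings are hypothetical, since the original figure isn't shown):

```python
import pandas as pd

# Hypothetical 2x2 design: condition (Low/High) crossed with a second factor.
df = pd.DataFrame({
    "condition": ["Low", "Low", "Low", "High", "High", "High"] * 2,
    "group": ["A"] * 6 + ["B"] * 6,
    "rating": [3.1, 2.9, 3.4, 4.2, 4.5, 4.0,
               2.5, 2.8, 2.6, 3.9, 4.1, 4.4],
})

# The 4 means that the figure displays.
cell_means = df.groupby(["group", "condition"])["rating"].mean()
print(cell_means)
```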
25,991 | Neural network - meaning of weights | Individual weights represent the strength of connections between units. If the weight from unit A to unit B has greater magnitude (all else being equal), it means that A has greater influence over B (i.e. to increase or decrease B's level of activation).
You can also think of the set of incoming weights to a unit as measuring what that unit 'cares about'. This is easiest to see at the first layer. Say we have an image processing network. Early units receive weighted connections from input pixels. The activation of each unit is a weighted sum of pixel intensity values, passed through an activation function. Because the activation function is monotonic, a given unit's activation will be higher when the input pixels are similar to the incoming weights of that unit (in the sense of having a large dot product). So, you can think of the weights as a set of filter coefficients, defining an image feature. For units in higher layers (in a feedforward network), the inputs aren't from pixels anymore, but from units in lower layers. So, the incoming weights are more like 'preferred input patterns'.
Not sure about your original source, but if I were talking about 'weight space', I'd be referring to the set of all possible values of all weights in the network.
25,992 | Neural network - meaning of weights | Well, it depends on the network architecture and the particular layer. In general, NNs are not interpretable; this is their major drawback in commercial data analysis (where your goal is to uncover actionable insights from your model).
But I love convolutional networks, because they are different! Although their upper layers learn very abstract concepts (usable for transfer learning and classification, but not easily understood), their bottom layers learn Gabor filters straight from the raw data (and are thus interpretable as such filters). Take a look at the example from a Le Cun lecture:
In addition, M. Zeiler (pdf) and many other researchers invented a very creative method, dubbed Deconvolutional Networks, to "understand" a convnet and ensure it has learned something useful. They 'trace' the convnet by making a forward pass over input pictures and remembering which neurons had the largest activations for which pictures. This gives stunning introspection like the following (a couple of layers are shown below):
Gray images on the left side are neuron activations (the greater the intensity, the larger the activation) for the color pictures on the right side. We see that these activations are skeletal representations of the real pictures, i.e., the activations are not random. Thus, we have solid hope that our convnet indeed learned something useful and will generalize decently to unseen pictures.
25,993 | Neural network - meaning of weights | I think you are trying too hard to interpret a model that does not have much interpretability. A neural network (NN) is one of the black-box models that will give you better performance, but it is hard to understand what is going on inside. Plus, it is very possible to have thousands or even millions of weights inside an NN.
An NN is a very big non-linear, non-convex function that can have a large number of local minima. If you train it multiple times with different starting points, the weights will be different. You can come up with some ways to visualize the internal weights, but that does not give you much insight either.
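A tiny illustration of the non-convexity point, using a one-dimensional toy loss in place of a real network's loss surface (the function and learning rate are arbitrary choices for this sketch): gradient descent started from different points converges to different "weights" with essentially the same loss.

```python
# A toy non-convex loss with two local minima (at x = -1 and x = +1),
# standing in for a neural network's loss surface.
def loss(x):
    return (x**2 - 1) ** 2

def grad(x):
    return 4 * x * (x**2 - 1)

def gradient_descent(x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Two "training runs" differing only in their starting point land in
# different minima, i.e. different weights with essentially equal loss.
w_a = gradient_descent(-2.0)
w_b = gradient_descent(+2.0)
print(round(w_a, 3), round(w_b, 3))   # roughly -1.0 and 1.0
```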
Here is one example of NN visualization for the MNIST data. The upper-right figure (reproduced below) shows the transformed features after applying the weights.
25,994 | Neural network - meaning of weights | Put simply, the weights act like probabilities: how likely a connection is to give the correct or the wrong answer. Even wrong results in multilayer nets can be useful, telling you that something is not that.
25,995 | Online references giving introduction to OLS | I too am looking for an introduction to OLS. I am looking for online references I can provide to my econometrics class. So far I have found a few sites that provide some general information on OLS. However, the only site that I found that provides both introductory and some advanced material is
https://economictheoryblog.com/ordinary-least-squares-ols
The site provides the usual things such as a derivation of the OLS estimator, a discussion of its assumptions, and even some applied stuff. You can find an overview of the material that the site provides here:
https://economictheoryblog.com/contents-ordinary-least-squares/
Besides the usual textbooks, this site is the only online reference I provided to my class. However, I am constantly looking for additional material!!
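For readers who want the punchline of such a derivation up front, here is a minimal NumPy sketch of the closed-form OLS estimator via the normal equations, $\hat\beta = (X'X)^{-1}X'y$, on simulated data (all numbers are illustrative only):

```python
import numpy as np

# Simulate a simple linear model: y = 2 + 0.5*x + noise
rng = np.random.default_rng(42)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one regressor
beta_true = np.array([2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# OLS via the normal equations: solve (X'X) beta = X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)   # close to [2.0, 0.5]
```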
25,996 | Online references giving introduction to OLS | I started typing as a comment, but it didn't work out...
I was in the same position one year ago, and I learned OLS by signing up for this Coursera course. And, yes, you can still take it for free. Two important caveats:
The course is disorganized in its presentation and you may want to skip the math lectures. Of course the math is the fun part, but they try to avoid linear algebra and even the lecturer seems disappointed. More on this later...
I would recommend using R (the course is in R), and downloading and doing the guided complement of the course in the GitHub swirl repository. This is even better than the course, and I have gone back to it many times.
So, about the math, I did go through the MIT course by Professor Strang, and got his book.
Finally, there is no better learning place than this site. So post questions, and try to answer other people's questions - don't worry about making mistakes, there are many eyes on the posts, and they typically get corrected.
25,997 | Why not use the R squared to measure forecast accuracy? | In-sample $R^2$ is not a suitable measure of forecast accuracy because it does not account for overfitting. It is always possible to build a flexible model that will fit the data perfectly in sample but there are no guarantees such a model would perform decently out of sample.
Out-of-sample $R^2$, i.e. the squared correlation between the forecasts and the actual values, is deficient in that it does not account for bias in forecasts.
For example, consider realized values
$$y_{t+1},\dotsc,y_{t+m}$$
and two competing forecasts:
$$\hat{y}_{t+1},\dotsc,\hat{y}_{t+m}$$
and
$$\tilde{y}_{t+1},\dotsc,\tilde{y}_{t+m}.$$
Now assume that
$$\tilde{y}_{t+i}=c+\hat{y}_{t+i}$$
for every $i$, where $c$ is a constant. That is, the forecasts are the same except that the second one is higher by $c$. These two forecasts will generally have different MSE, MAPE etc. but the $R^2$ will be the same.
Consider an extreme case: the first forecast is perfect, i.e. $\hat{y}_{t+i}=y_{t+i}$ for every $i$. The $R^2$ of this forecast will be 1 (which is very good). However, the $R^2$ of the other forecast will also be 1 even though the forecast is biased by $c$ for every $i$.
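A quick numerical check of this point (with simulated, purely illustrative data): adding a constant bias $c$ to a forecast leaves the squared correlation unchanged while the MSE deteriorates.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=200)     # realized values
f1 = y                       # a perfect forecast
f2 = y + 3.0                 # the same forecast, biased by c = 3

def r2(actual, forecast):
    # "out-of-sample R^2" as the squared correlation
    return np.corrcoef(actual, forecast)[0, 1] ** 2

def mse(actual, forecast):
    return np.mean((actual - forecast) ** 2)

print(r2(y, f1), r2(y, f2))    # both (essentially) 1.0
print(mse(y, f1), mse(y, f2))  # 0.0 versus 9.0
```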
25,998 | Why not use the R squared to measure forecast accuracy? | It’s not clear how $R^2$ should be defined in such a scenario.
In another answer, Richard Hardy accurately points out the issues of squaring the correlation between the predicted and actual values. I give some graphs of that kind of problem in my answer to a related question. Consequently, such a definition of $R^2$ leads to a calculated value that is less helpful than one might hope.
Then there’s my idea to compare the square loss of your model to the square loss of a baseline “must beat” model. However, it is not clear what such a baseline should be in time series forecasting. Do you use the mean of some subset of the data? Do you use the mean of all periods before your forecast? Do you use the mean of all true observations, even though you would not have had access to all of those observations when you had to make your predictions? That is how Python's sklearn.metrics.r2_score would do it. For financial predictions, I could see using a historical model of some index, such as knowing the historical return of the S&P 500 or (probably even better) the return on the S&P 500 over the same period.
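To make the ambiguity concrete, here is a small NumPy sketch (the data and both baselines are hypothetical choices, not a standard): the same forecast errors yield different "$R^2$" values depending on which baseline supplies the denominator.

```python
import numpy as np

rng = np.random.default_rng(7)
actual = 10 + np.cumsum(rng.normal(size=50))          # a drifting series
forecast = actual + rng.normal(scale=0.5, size=50)    # a reasonably good forecast

sse = np.sum((actual - forecast) ** 2)

# Baseline 1: mean of the evaluation-period actuals (sklearn's r2_score
# convention, even though that mean is unknowable at forecast time).
sst_eval_mean = np.sum((actual - actual.mean()) ** 2)

# Baseline 2 (hypothetical): a naive "last known value" forecast, stood in
# for here by the first value of the evaluation window.
sst_naive = np.sum((actual - actual[0]) ** 2)

r2_eval = 1 - sse / sst_eval_mean
r2_naive = 1 - sse / sst_naive
print(r2_eval, r2_naive)   # two different "R^2" values for the same forecasts
```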
Because of this ambiguity in how to define $R^2$ and what would make for a useful calculation, such a metric seems to be avoided: the obvious calculation based on correlation has major issues, and it is not clear what remedy is appropriate.
I do believe there would be value to making such a comparison to some kind of baseline model, however. For instance, a financial advisor might boast to clients about making them a $15\%$ return on their money. That sounds impressive, but if the clients could have invested in the S&P 500 over that same time and made $17\%$, the clients should be disappointed with their advisor.
(There are all kinds of complexities when it comes to a real financial problem, such as older people near retirement not wanting to incur the risks that go along with stock investing, but I think this illustrates why some kind of comparison to a baseline model would be valuable. (An additional complication could be fees paid to an advisor or money manager.))
25,999 | Clarifications regarding reading a nomogram | Well, since your model is linear, with the expected mpg equal to the linear predictor, you can read mpg straight off the linear predictor scale.
For each variable, you find its value on the relevant scale. For example, imagine we wanted to find a predicted mpg for a car with wt=4, am=1, qsec=18:
which gives a predicted mpg of about 18.94. Substituting into the equation gives 18.95, so that's pretty close. (In practice you would probably only work to the nearest whole point -- and so get about 2 figure accuracy - "19 mpg" - out, rather than 3-4 figures as here.)
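The arithmetic the nomogram performs can be sketched in a few lines; the coefficients below are approximately those of lm(mpg ~ wt + am + qsec) fitted to mtcars, so treat the exact values as illustrative.

```python
# Coefficients roughly from lm(mpg ~ wt + am + qsec) on mtcars
b0, b_wt, b_am, b_qsec = 9.6178, -3.9165, 2.9358, 1.2259

def predicted_mpg(wt, am, qsec):
    # The nomogram's "points -> total points -> linear predictor" steps
    # amount to evaluating this linear predictor.
    return b0 + b_wt * wt + b_am * am + b_qsec * qsec

print(round(predicted_mpg(wt=4, am=1, qsec=18), 2))   # ~18.95
```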
One of the chief benefits of such a diagram to my mind is that you instantly see the relative effect of changes in the different predictor variables (IV) on the response (DV). Even when you don't need the diagram for any calculations, it can have great value in terms of simply displaying the relative effects of the variables.
Followup question from comments:
Does it work the same way for non-linear or polynomial regressions?
For cases where $E(Y)$ is nonlinear in some predictors, some minor - and perhaps obvious - modifications are needed. Imagine that we have $\hat{y} = b_0 + b_1 x_1 + f(x_2)$
where either:
(a) $f$ is monotonic; or
(b) $f$ is not monotonic
In either case, the scale for $x_1$ would work exactly as above, but in case:
(a) the scale for $x_2$ won't be linear; e.g. if $f$ is monotonic decreasing but (roughly) quadratic, you might have something like this:
(b) the non-monotonic scale for $x_2$ will "break" at a turning point and flip over. e.g.
-- here the function $f(x)$ has a minimum somewhere around $x=2.23$
It's possible for such functions to have several turning points, where scales would break and flip over multiple times - but the axis line only has two sides.
With points-type nomograms this presents no difficulty, since one may move additional scale-sections up or down (or more generally, orthogonally to the direction of the axis) a little until no overlap occurs.
(More than one turning point can be a problem for alignment-type nomograms; one solution shown in Harrell's book is to offset all the scales slightly from a reference line, on which the value's position is actually taken.)
In the case of GLMs with nonlinear link function, the scales work as above, but the scale of the linear predictor will be marked with a nonlinear scale for $Y$, something like (a) above.
Examples of all of these situations can be found in Harrell's Regression Modeling Strategies.
Just a couple of side notes
I'd much prefer to see two points scales, at the top and bottom of the relevant section; otherwise it's hard to "line up" accurately because you have to guess what 'vertical' is. Something like this:
However, as I note in comments, for the last section of the diagram (total points and linear predictor) perhaps a better alternative to a second points scale would be to simply have a pair of back-to-back scales (total points on one side, linear predictor on the other), like this:
whereupon we avoid the need to know what 'vertical' is.
With only two continuous predictors and a single binary factor, we can quite readily construct a more traditional alignment nomogram:
In this case you simply find the wt and qsec values on their scales and join them with a line; where they cross the mpg axis, we read off the value (while the am variable determines which side of the mpg axis you read). In a simple case like this, these kind of nomograms are faster and simpler to use, but can be less easy to generalize to many predictors, where they can become unwieldy. The points-style nomogram in your question (as implemented in Regression Modeling Strategies and in the rms package in R) can add more variables seamlessly. This can be quite an advantage when dealing with interactions.
26,000 | Why ever use Durbin-Watson instead of testing autocorrelation? | As pointed out before in this and other threads: (1) The Durbin-Watson test is not inconclusive. Only the bounds suggested initially by Durbin and Watson were inconclusive, because the precise distribution depends on the observed regressor matrix; however, this is easy enough to address in statistical/econometric software by now. (2) There are generalizations of the Durbin-Watson test to higher lags. So neither inconclusiveness nor a limitation to one lag is an argument against the Durbin-Watson test.
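For reference, the statistic itself is simple to compute from its definition, $DW = \sum_t (e_t - e_{t-1})^2 / \sum_t e_t^2 \approx 2(1 - r_1)$ where $r_1$ is the lag-1 residual autocorrelation; a short NumPy sketch on simulated residuals (illustrative only):

```python
import numpy as np

def durbin_watson(e):
    # DW = sum (e_t - e_{t-1})^2 / sum e_t^2, roughly 2 * (1 - r1)
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(0)
white = rng.normal(size=5000)          # uncorrelated residuals

phi = 0.7                              # AR(1) residuals, positive autocorrelation
ar = np.empty(5000)
ar[0] = rng.normal()
for t in range(1, 5000):
    ar[t] = phi * ar[t - 1] + rng.normal()

print(round(durbin_watson(white), 2))  # near 2 for uncorrelated residuals
print(round(durbin_watson(ar), 2))     # well below 2, near 2*(1 - 0.7) = 0.6
```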
In comparison to the Wald test of the lagged dependent variable, the Durbin-Watson test can have higher power in certain models. Specifically, if the model contains deterministic trends or seasonal patterns, it can be better to test for autocorrelation in the residuals (as the Durbin-Watson test does) compared to including the lagged response (which isn't yet adjusted for the deterministic patterns). I include a small R simulation below.
One important drawback of the Durbin-Watson test is that it must not be applied to models that already contain autoregressive effects. Thus, you cannot test for remaining residual autocorrelation after partially capturing it in an autoregressive model. In that scenario the power of the Durbin-Watson test can break down completely while for the Breusch-Godfrey test, for example, it does not. Our book "Applied Econometrics with R" has a small simulation study that shows this in the chapter "Programming Your Own Analysis", see http://eeecon.uibk.ac.at/~zeileis/teaching/AER/.
For a data set with trend plus autocorrelated errors the power of the Durbin-Watson test is higher than for the Breusch-Godfrey test, though, and also higher than for the Wald test of autoregressive effect. I illustrate this for a simple small scenario in R. I draw 50 observations from such a model and compute p-values for all three tests:
library("lmtest")  ## provides dwtest(), bgtest(), coeftest()

pvals <- function()
{
## data with trend and autocorrelated error term
d <- data.frame(
x = 1:50,
err = filter(rnorm(50), 0.25, method = "recursive")
)
## response and corresponding lags
d$y <- 1 + 1 * d$x + d$err
d$ylag <- c(NA, d$y[-50])
## OLS regressions with/without lags
m <- lm(y ~ x, data = d)
mlag <- lm(y ~ x + ylag, data = d)
## p-value from Durbin-Watson and Breusch-Godfrey tests
## and the Wald test of the lag coefficient
c(
"DW" = dwtest(m)$p.value,
"BG" = bgtest(m)$p.value,
"Coef-Wald" = coeftest(mlag)[3, 4]
)
}
Then we can simulate 1000 p-values for all three models:
set.seed(1)
p <- t(replicate(1000, pvals()))
The Durbin-Watson test leads to the lowest average p-values
colMeans(p)
## DW BG Coef-Wald
## 0.1220556 0.2812628 0.2892220
and the highest power at 5% significance level:
colMeans(p < 0.05)
## DW BG Coef-Wald
## 0.493 0.256 0.248