Splitting Time Series Data into Train/Test/Validation Sets
I think the most complete way to leverage your time-series data for training/validation/testing/prediction is this:
Is the picture self-explanatory? If not, please comment and I will add more text...
Splitting Time Series Data into Train/Test/Validation Sets
"walk-forward"
In the following, "validation set" has been replaced with "testing set" to align with the naming used in this Q/A.
Instead of creating a single training/testing split, you can create several of them.
The first training set could be, say, six months of data (the first half of 2015), and the testing set would then be the next three months (July-September 2015). The second training set is the combination of the first training and testing sets, and its testing set is the next three months (October-December 2015). And so on.
This walk over time resembles k-fold cross-validation, where each new training set is the union of the previous training and evaluation data. The differences are that here the folds move forward through time to check more than one prediction, and that it is the training and testing sets that get merged, not the training and validation sets. If you walk through time with more than one prediction, you get validation by default: the metrics of the successive predictions can be compared against each other.
This is the walk-forward model (see a comment below). The model image mixes up the testing set with the validation set; normally this naming issue would not matter, but here it conflicts with the naming in the rest of this answer. If you then also carve a k-fold validation out of the testing set, you have the three sets the question asks for. And indeed, you do not need that validation set if you do a "walk-forward" with enough steps.
Thus, even though this scheme only needs a training and a testing set, it can still answer the question that asks for three sets, since the validation set can be seen as replaced by the "walk-forward" itself. It also still allows a small k-fold validation split off the testing set, so you could view it as 3+1 sets in the end.
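As a sketch of these expanding-window splits (my own illustration, not from the original answer; the index-based helper and block sizes are assumptions):

```r
# Expanding-window walk-forward splits: each new training set is the union of
# the previous training and testing sets; the next block in time becomes the test set.
walk_forward <- function(n, initial, horizon) {
  splits <- list()
  train_end <- initial
  while (train_end + horizon <= n) {
    splits[[length(splits) + 1]] <- list(
      train = 1:train_end,                            # everything seen so far
      test  = (train_end + 1):(train_end + horizon)   # the next block in time
    )
    train_end <- train_end + horizon                  # walk forward one block
  }
  splits
}

# e.g. 24 monthly observations: 6 months of initial training, 3-month test blocks
s <- walk_forward(24, initial = 6, horizon = 3)
length(s)       # 6 walk-forward steps
s[[2]]$train    # 1..9: the first training set combined with the first testing set
s[[2]]$test     # 10..12
```

Each step's prediction metrics can then be compared across the six splits, which is the "validation by default" described above.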
What is happening here, when I use squared loss in logistic regression setting?
It seems like you've fixed the issue in your particular example but I think it's still worth a more careful study of the difference between least squares and maximum likelihood logistic regression.
Let's get some notation. Let $L_S(y_i, \hat y_i) = \frac 12(y_i - \hat y_i)^2$ and $L_L(y_i, \hat y_i) = -\left[y_i \log \hat y_i + (1 - y_i) \log(1 - \hat y_i)\right]$, so that minimizing $L_L$ is minimizing the negative log likelihood. If we're doing maximum likelihood (or minimum negative log likelihood, as I'm doing here), we have
$$
\hat \beta_L := \text{argmin}_{b \in \mathbb R^p} -\sum_{i=1}^n y_i \log g^{-1}(x_i^T b) + (1-y_i)\log(1 - g^{-1}(x_i^T b))
$$
with $g$ being our link function.
Alternatively we have
$$
\hat \beta_S := \text{argmin}_{b \in \mathbb R^p} \frac 12 \sum_{i=1}^n (y_i - g^{-1}(x_i^T b))^2
$$
as the least squares solution. Thus $\hat \beta_S$ minimizes $L_S$ and similarly for $L_L$.
Let $f_S$ and $f_L$ be the objective functions corresponding to minimizing $L_S$ and $L_L$ respectively as is done for $\hat \beta_S$ and $\hat \beta_L$. Finally, let $h = g^{-1}$ so $\hat y_i = h(x_i^T b)$. Note that if we're using the canonical link we've got
$$
h(z) = \frac{1}{1+e^{-z}} \implies h'(z) = h(z) (1 - h(z)).
$$
For regular logistic regression we have
$$
\frac{\partial f_L}{\partial b_j} = -\sum_{i=1}^n h'(x_i^T b)x_{ij} \left( \frac{y_i}{h(x_i^T b)} - \frac{1-y_i}{1 - h(x_i^T b)}\right).
$$
Using $h' = h \cdot (1 - h)$ we can simplify this to
$$
\frac{\partial f_L}{\partial b_j} = -\sum_{i=1}^n x_{ij} \left( y_i(1 - \hat y_i) - (1-y_i)\hat y_i\right) = -\sum_{i=1}^n x_{ij}(y_i - \hat y_i)
$$
so
$$
\nabla f_L(b) = -X^T (Y - \hat Y).
$$
Next let's do second derivatives. The Hessian
$$H_L:=
\frac{\partial^2 f_L}{\partial b_j \partial b_k} = \sum_{i=1}^n x_{ij} x_{ik} \hat y_i (1 - \hat y_i).
$$
This means that $H_L = X^T A X$ where $A = \text{diag} \left(\hat Y (1 - \hat Y)\right)$. $H_L$ does depend on the current fitted values $\hat Y$, but $Y$ has dropped out, and $H_L$ is PSD. Thus our optimization problem is convex in $b$.
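As a numerical sanity check (a sketch on simulated data, not part of the original derivation), we can confirm that the closed-form gradient $-X^T(Y - \hat Y)$ matches finite differences of $f_L$, and that $H_L = X^T A X$ has no negative eigenvalues:

```r
set.seed(1)
n <- 50; p <- 3
X <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, 0.5)
b <- rnorm(p)

h <- function(z) 1 / (1 + exp(-z))   # inverse canonical link
f_L <- function(b) {                 # negative log-likelihood
  yhat <- h(X %*% b)
  -sum(y * log(yhat) + (1 - y) * log(1 - yhat))
}

yhat <- as.vector(h(X %*% b))
grad_closed <- -as.vector(t(X) %*% (y - yhat))   # -X'(Y - Yhat)
grad_fd <- sapply(1:p, function(j) {             # central finite differences
  e <- replace(rep(0, p), j, 1e-6)
  (f_L(b + e) - f_L(b - e)) / 2e-6
})
max(abs(grad_closed - grad_fd))                  # ~ 0

H_L <- t(X) %*% diag(yhat * (1 - yhat)) %*% X    # X' A X
min(eigen(H_L, symmetric = TRUE)$values)         # non-negative: H_L is PSD
```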
Let's compare this to least squares.
$$
\frac{\partial f_S}{\partial b_j} = - \sum_{i=1}^n (y_i - \hat y_i) h'(x^T_i b)x_{ij}.
$$
This means we have
$$
\nabla f_S(b) = -X^T A (Y - \hat Y).
$$
This is a vital point: the gradient is almost the same, except that each term is multiplied by $\hat y_i (1 - \hat y_i) \in (0, \tfrac 14]$, so we're flattening the gradient relative to $\nabla f_L$. This will make convergence slower.
For the Hessian we can first write
$$
\frac{\partial f_S}{\partial b_j} = - \sum_{i=1}^n x_{ij}(y_i - \hat y_i) \hat y_i (1 - \hat y_i) = - \sum_{i=1}^n x_{ij}\left( y_i \hat y_i - (1+y_i)\hat y_i^2 + \hat y_i^3\right).
$$
This leads us to
$$
H_S:=\frac{\partial^2 f_S}{\partial b_j \partial b_k} = - \sum_{i=1}^n x_{ij} x_{ik} h'(x_i^T b) \left( y_i - 2(1+y_i)\hat y_i + 3 \hat y_i^2 \right).
$$
Let $B = \text{diag} \left( y_i - 2(1+y_i)\hat y_i + 3 \hat y_i ^2 \right)$. We now have
$$
H_S = -X^T A B X.
$$
Unfortunately for us, the weights in $B$ do not have a consistent sign: if $y_i = 0$ then $y_i - 2(1+y_i)\hat y_i + 3 \hat y_i ^2 = \hat y_i (3 \hat y_i - 2)$, which is positive iff $\hat y_i > \frac 23$. Similarly, if $y_i = 1$ then $y_i - 2(1+y_i)\hat y_i + 3 \hat y_i ^2 = 1-4 \hat y_i + 3 \hat y_i^2$, which is positive when $\hat y_i < \frac 13$ (it's also positive for $\hat y_i > 1$, but that's not possible). Because of the leading minus sign in $H_S = -X^T A B X$, the PSD-friendly case is when the entries of $B$ are non-positive; with entries of mixed sign, $H_S$ is not necessarily PSD. So not only are we squashing our gradients, which will make learning harder, but we've also messed up the convexity of our problem.
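The loss of convexity is visible even with a single observation. In this toy check of mine (not from the original answer), take $y = 0$ and $x = 1$, so that $f_S(b) = h(b)^2$ while $f_L(b) = -\log(1 - h(b))$; the squared loss violates midpoint convexity while the logistic loss does not:

```r
h <- function(z) 1 / (1 + exp(-z))

# single observation with y = 0 and x = 1
f_S <- function(b) (0 - h(b))^2       # squared loss
f_L <- function(b) -log(1 - h(b))     # logistic loss

b1 <- 0; b2 <- 4
# convexity requires f((b1 + b2)/2) <= (f(b1) + f(b2))/2
f_S((b1 + b2) / 2) > (f_S(b1) + f_S(b2)) / 2    # TRUE: midpoint lies above the chord
f_L((b1 + b2) / 2) <= (f_L(b1) + f_L(b2)) / 2   # TRUE: logistic loss stays convex
```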
All in all, it's no surprise that least squares logistic regression struggles sometimes, and in your example you've got enough fitted values close to $0$ or $1$ so that $\hat y_i (1 - \hat y_i)$ can be pretty small and thus the gradient is quite flattened.
Connecting this to neural networks: even though this is but a humble logistic regression, I think that with squared loss you're experiencing something like what Goodfellow, Bengio, and Courville are referring to in their Deep Learning book when they write the following:
One recurring theme throughout neural network design is that the gradient of the cost function must be large and predictable enough to serve as a good guide for the learning algorithm. Functions that saturate (become very flat) undermine this objective because they make the gradient become very small. In many cases this happens because the activation functions used to produce the output of the hidden units or the output units saturate. The negative log-likelihood helps to avoid this problem for many models. Many output units involve an exp function that can saturate when its argument is very negative. The log function in the negative log-likelihood cost function undoes the exp of some output units. We will discuss the interaction between the cost function and the choice of output unit in Sec. 6.2.2.
and, in 6.2.2,
Unfortunately, mean squared error and mean absolute error often lead to poor results when used with gradient-based optimization. Some output units that saturate produce very small gradients when combined with these cost functions. This is one reason that the cross-entropy cost function is more popular than mean squared error or mean absolute error, even when it is not necessary to estimate an entire distribution $p(y|x)$.
(both excerpts are from chapter 6).
What is happening here, when I use squared loss in logistic regression setting?
I would like to thank @whuber and @Chaconne for their help. Especially @Chaconne: this derivation is the one I have wished to have for years.
The problem IS in the optimization part. If we set the random seed to 1, the default BFGS will not converge; but if we change the algorithm and increase the maximum number of iterations, it works again.
As @Chaconne mentioned, the problem is that squared loss for classification is non-convex and harder to optimize. To add to @Chaconne's math, I would like to present some visualizations of the logistic loss and the squared loss.
We will change the demo data from mtcars, since the original toy example has $3$ coefficients including the intercept. Instead we will use another toy data set generated with mlbench; in this data set we fit only $2$ parameters, which is better for visualization.
Here is the demo.
The data are shown in the left figure: we have two classes in two colors, with x and y the two features. In addition, the red line represents the linear classifier from the logistic loss, and the blue line represents the linear classifier from the squared loss.
The middle and right figures show the contours of the logistic loss (red) and the squared loss (blue); here x and y are the two parameters we are fitting. The dot is the optimum found by BFGS.
From the contours we can easily see why optimizing the squared loss is harder: as Chaconne mentioned, it is non-convex.
Here is one more view, from persp3d.
Code
set.seed(0)
# two Gaussian classes in two dimensions; recode the labels to 0/1
d=mlbench::mlbench.2dnormals(50,2,r=1)
x=d$x
y=ifelse(d$classes==1,1,0)
# logistic (negative log-likelihood) loss as a function of the weights w
lg_loss <- function(w){
  p=plogis(x %*% w)
  L=-y*log(p)-(1-y)*log(1-p)
  return(sum(L))
}
# squared loss on the fitted probabilities
sq_loss <- function(w){
  p=plogis(x %*% w)
  L=sum((y-p)^2)
  return(L)
}
# grid of weight values for the contour plots
w_grid_v=seq(-15,15,0.1)
w_grid=expand.grid(w_grid_v,w_grid_v)
# minimize each loss with BFGS and evaluate it over the grid
opt1=optimx::optimx(c(1,1),fn=lg_loss ,method="BFGS")
z1=matrix(apply(w_grid,1,lg_loss),ncol=length(w_grid_v))
opt2=optimx::optimx(c(1,1),fn=sq_loss ,method="BFGS")
z2=matrix(apply(w_grid,1,sq_loss),ncol=length(w_grid_v))
par(mfrow=c(1,3))
# left: the data with both decision boundaries
plot(d,xlim=c(-3,3),ylim=c(-3,3))
abline(0,-opt1$p2/opt1$p1,col='darkred',lwd=2)
abline(0,-opt2$p2/opt2$p1,col='blue',lwd=2)
grid()
# middle: logistic-loss contours and its BFGS optimum
contour(w_grid_v,w_grid_v,z1,col='darkred',lwd=2, nlevels = 8)
points(opt1$p1,opt1$p2,col='darkred',pch=19)
grid()
# right: squared-loss contours and its BFGS optimum
contour(w_grid_v,w_grid_v,z2,col='blue',lwd=2, nlevels = 8)
points(opt2$p1,opt2$p2,col='blue',pch=19)
grid()
# library(rgl)
# persp3d(w_grid_v,w_grid_v,z1,col='darkred')
Computing percentile rank in R [closed]
Given a vector of raw data values, a simple function might look like
perc.rank <- function(x, xo) length(x[x <= xo])/length(x)*100
where xo is the value for which we want the percentile rank, given the vector x, as suggested on R-bloggers.
However, it can easily be vectorized as
perc.rank <- function(x) trunc(rank(x))/length(x)
which has the advantage of not having to pass each value separately. (Note that this version returns a fraction in (0, 1] rather than a percentage, and resolves ties via rank()'s default average ranks.) So, here is an example of use:
my.df <- data.frame(x=rnorm(200))
my.df <- within(my.df, xr <- perc.rank(x))
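As a quick check of my own (on illustrative data), the two versions agree up to the factor of 100 when there are no ties:

```r
perc.rank1 <- function(x, xo) length(x[x <= xo]) / length(x) * 100  # scalar version
perc.rank2 <- function(x) trunc(rank(x)) / length(x)                # vectorized version

x <- c(10, 20, 30, 40)
perc.rank1(x, 30)         # 75
perc.rank2(x)[3] * 100    # 75
```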
Computing percentile rank in R [closed]
If your original data.frame is called dfr and the variable of interest is called myvar, you can use dfr$myrank<-rank(dfr$myvar) for plain ranks, or dfr$myrank<-rank(dfr$myvar)/length(dfr$myvar) for percentile ranks.
Oh well. If you really want it the Excel way (this may not be the simplest solution, but I had some fun using functions that were new to me and avoiding loops):
percentilerank <- function(x){
  rx <- rle(sort(x))                    # run lengths of the sorted, possibly tied values
  smaller <- cumsum(c(0, rx$lengths))[seq(length(rx$lengths))]   # count strictly smaller
  larger <- rev(cumsum(c(0, rev(rx$lengths))))[-1]               # count strictly larger
  rxpr <- smaller/(smaller + larger)    # Excel-style PERCENTRANK, ties excluded
  rxpr[match(x, rx$values)]             # map back to the original order
}
so now you can use dfr$myrank<-percentilerank(dfr$myvar)
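As a worked example (my own check, with illustrative data): in the presence of ties, the function reproduces Excel's PERCENTRANK convention of comparing only strictly smaller against strictly larger values:

```r
percentilerank <- function(x){
  rx <- rle(sort(x))
  smaller <- cumsum(c(0, rx$lengths))[seq(length(rx$lengths))]
  larger <- rev(cumsum(c(0, rev(rx$lengths))))[-1]
  rxpr <- smaller/(smaller + larger)
  rxpr[match(x, rx$values)]
}

# for the tied value 2: one value strictly smaller, one strictly larger -> 1/2
percentilerank(c(1, 2, 2, 3))   # 0.0 0.5 0.5 1.0
```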
HTH.
Computing percentile rank in R [closed]
A problem with the answer presented above is that it does not work properly when you have NAs.
In this case, another possibility (inspired by the function from chl♦) is:
perc.rank <- function(x) trunc(rank(x, na.last = NA))/sum(!is.na(x))
quant <- function (x, p.ile) {
  x <- na.omit(x)
  x[which.min(abs(perc.rank(x) - p.ile/100))]
}
Here, x is the vector of values and p.ile is the desired percentile by rank. The 2.5th percentile by rank of an (arbitrary) coef.mat may be calculated by:
quant(coef.mat[,3], 2.5)
[1] 0.00025
or as a single function:
quant <- function (x, p.ile) {
  perc.rank <- trunc(rank(x, na.last = NA))/sum(!is.na(x))
  x <- na.omit(x)
  x[which.min(abs(perc.rank - p.ile/100))]
}
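As a self-contained sketch of the single-function form, with each rank compared directly to the target percentile, plus a quick check on illustrative data:

```r
quant <- function (x, p.ile) {
  perc.rank <- trunc(rank(x, na.last = NA)) / sum(!is.na(x))  # NAs dropped from ranks
  x <- na.omit(x)                                             # align values with ranks
  x[which.min(abs(perc.rank - p.ile/100))]
}

quant(c(1:100, NA), 50)   # 50: the NA is ignored and the median-ranked value returned
```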
Why do we do matching for causal inference vs regressing on confounders?
As I see it, there are two related reasons to consider matching instead of regression. The first is assumptions about functional form, and the second is about proving to your audience that functional form assumptions do not affect the resulting effect estimate. The first is a statistical matter and the second is epistemic. Consider the tale below that attempts to illustrate how the choice between matching and regression could play out.
We'll assume you have measured a sufficient adjustment set to satisfy the backdoor criterion (i.e., all relevant confounders have been measured) with no measurement error or missing data, and that your goal is to estimate the marginal treatment effect of the treatment on an outcome. We'll also assume the standard assumptions of positivity and SUTVA hold. We'll consider a continuous outcome first, but much of the discussion extends to general outcomes.
Part 1: Regression
You decide to run a regression of the outcome on the treatment and confounders as a way to control for confounding by these variables because that is what linear regression is supposed to do. However, the effect estimate is only unbiased under extremely strict circumstances. First, that the treatment effect is constant across levels of the confounders, and second, that the linear model describes the conditional relationship between the outcome and the confounders. For the first, you might include an interaction between the treatment and each confounder, allowing for heterogeneous treatment effects while estimating the marginal effect. This is equivalent to g-computation (1), which involves using the fitted regression model to generate predicted values under treatment and control for all units and using the difference in the means of these predicted values as the effect estimate.
That still assumes a linear model for the outcomes under treatment and control. Okay, we'll use a flexible machine-learning method like random forests instead. Well, now we can't claim our estimator is unbiased, only possibly consistent, and it still requires the specific machine learning model to approach the truth at a certain rate. Okay, we'll use Superlearner (2), a stacking method that takes on the rate of convergence of the fastest of its included models. Well, now we don't have a way to conduct inference, and the model might still be wrong. Okay, we'll use a semiparametric efficient doubly-robust estimator like augmented inverse probability weighting (AIPW) (3) or targeted minimum loss-based estimation (TMLE) (4). Well, that's only consistent if the true models fall in the Donsker class of models. Okay, we'll use cross-fitting with AIPW or TMLE to relax that requirement (5).
Great. You've taken regression to its extreme, relaxing as many assumptions as possible and landing with a multiply-robust estimator (multiply-robust in the sense that if one of many models are correct, the estimator is consistent) with generally good inference properties (but it can be bootstrapped so getting the variance exactly right isn't a big problem). Have we solved causal inference?
You submit the results of your cross-fit TMLE estimate using Superlearner for the propensity score and potential outcome models with a full library including highly adaptive lasso and many other models, which, under weak assumptions, are all that are required for a truly consistent estimator that converges at a parametric rate.
A reviewer reads the paper and says, "I don't believe the results of this model."
"Why not?" you say. "I used the optimal estimator with the best properties; it is consistent and semiparametric efficient with few, if any, assumptions on the functional forms of the models."
"Your estimator is consistent," says the reviewer, "but not unbiased. That means I can only trust its results in general and as N goes to infinity. How do I know you have successfully eliminated bias in the effect estimate in this dataset?"
"..."
Part 2: Matching to the Rescue
You read about a hot new method called "propensity score matching" (6). It was big in 1983, and, even in 2021, you see it in almost every paper published in specialized medical journals. You come across King and Nielsen's influential paper "Why Propensity Scores Should Not Be Used for Matching" (7) and Noah's answer on CV describing the many drawbacks to using propensity score matching. Okay, you'll use genetic matching instead (8), and minimize the energy distance between the samples (9), including a flexibly estimated propensity score as a covariate to match on. You find that balance can be improved by using substantive knowledge to incorporate exact matching and caliper constraints that prioritize balance on covariates known to be important to the outcome. You decide to use full matching to relax the requirement of 1:1 matching to include more units in the analysis (10).
You estimate the treatment effect using a simple linear regression of the outcome on the treatment and the covariates, including the matching weights in the regression and using a cluster-robust standard error to account for pair membership (11). You resubmit the result of your full matching analysis using exact matching and calipers for prognostically important variables and a distance matrix estimated using genetic matching on the covariates and a flexibly estimated propensity score.
The reviewer reads your new manuscript. "Wow, you've learned a lot. But I still don't believe you've removed bias in the effect estimate."
"Look at the balance tables," you say. "The covariate distributions are almost identical."
"I see low standardized mean differences," says the reviewer, "but imbalances could remain on other features of the covariate distribution."
"Look at the balance tables in the appendix which contain balance statistics for pairwise interactions, polynomials up to the 5th power of each covariate, and Kolmogorov-Smirnov statistics to compare the full covariate distributions. There are no meaningful differences between the samples, and no differences at all on the most highly prognostic covariates because of the exact matching constraints and calipers."
"I see..."
"Also, I used Branson's randomization test (12) with the energy distance as the balance statistic to show that my sample is better balanced not only than a hypothetical randomized trial using the same data, but also a block randomized trial, and even a covariate balance-constrained randomized trial."
"Wow, I guess I don't have much to say..."
"My outcome regression estimator isn't just consistent, it's truly unbiased in this sample. Also, because I incorporated pair membership into the analysis, my standard errors are smaller and more accurate and the resulting estimate is less sensitive to unobserved confounding* (13)."
"I get it!"
Part 3: The criticism
Frank Harrell bursts into the room. "Wait, by discarding so many units in matching, you have thrown away so much useful data and needlessly decimated your precision." Mark van der Laan follows. "Wait, by using substantive 'expertise' you are not letting the analysis method find the true patterns in the data that might have eluded researchers, and your estimator does not converge at a known rate, let alone a parametric one! And there is no guarantee that your inference is valid!" I, your humble narrator, too, join in on the dogpile. "Wait, by using exact matching constraints and calipers, you have shifted your estimand away from the ATE or any a priori describable estimand (14)! Your effect estimate may be unbiased, but unbiased for what?"
You stand there, bewildered, defeated, feeling like you have come nowhere since you asked your simple question on CrossValidated what felt like years ago, no closer to understanding whether you should use matching or regression to estimate causal effects.
The curtains close.
Part 4: Epilogue
In the face of uncertainty and scarcity, we are left with tradeoffs. The choice between a regression-based method and matching to estimate a causal effect depends on how you and your audience choose to manage those tradeoffs and prioritize the advantages and drawbacks of each method.
Standard regression requires strong functional form assumptions, but with advanced methods, those can be relaxed, at the cost of giving up on bias and focusing on consistency and asymptotic inference. Many of these advanced methods work best in large samples, and they still require many choices along the way (e.g., which specific estimator to use, which machine learning methods to include in the Superlearner library, how many folds to use for cross-validation and cross-fitting, etc.). Although the multiply-robust methods may guarantee consistency and fast convergence rates in general data, it is not immediately clear how you can assess how well they eliminated bias in your dataset, potentially leaving one skeptical of their actual performance in your one instance.
Matching methods require few functional form assumptions because no models are required (e.g., when using a distance matrix that doesn't depend solely on the propensity score, like that resulting from genetic matching). You can control confounding by adjusting the specification of the match, focusing more effort on hard-to-balance or prognostically important variables. You can come close to guaranteeing unbiasedness by ensuring you have achieved covariate balance, which can and should be measured extremely broadly with a skeptic in mind. You can use tools for analyzing randomized trials and trials with more powerful and robust designs. This comes at the cost of possibly decimating your precision by discarding huge amounts of data, changing your estimand so that your effect estimate doesn't generalize to a meaningful population and isn't replicable, and relying on ad hoc, "artisanal" methods with no clear path for valid inference.
The advantage matching has over regression, and the reason why I think it is so valuable and why I devoted my graduate training to understanding and improving matching and its use by applied researchers as the author of the R packages cobalt, WeightIt, MatchIt, and others, is an epistemic advantage. With matching, you can more effectively convince a reader that what you have done is trustworthy and that you have accounted for all possible objections to the observed result, and can at least point to specific assumptions and explain how their violation might affect results. This all centers on covariate balance, the similarity between covariate distributions across the treatment groups. By reporting balance broadly and submitting the resulting matched data to a battery of tests and balance measures, you can convince yourself and your readers that the resulting effect estimate is unbiased and therefore trustworthy (given the assumptions mentioned at the beginning, though these may be tenuous, and neither matching nor regression can solve that problem).
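The balance diagnostics that this argument rests on (standardized mean differences, Kolmogorov-Smirnov statistics) can be sketched in a few lines. This is a minimal illustration on simulated data, not part of the original answer; the function name is my own, and a real analysis would use a dedicated package such as cobalt.

```python
import numpy as np
from scipy.stats import ks_2samp

def standardized_mean_difference(x, treat):
    """Standardized mean difference of a covariate between treated and
    control groups, using the pooled standard deviation as denominator."""
    x1, x0 = x[treat == 1], x[treat == 0]
    pooled_sd = np.sqrt((x1.var(ddof=1) + x0.var(ddof=1)) / 2)
    return (x1.mean() - x0.mean()) / pooled_sd

rng = np.random.default_rng(0)
treat = rng.integers(0, 2, size=1_000)
x = rng.normal(size=1_000)  # covariate unrelated to treatment: near-zero SMD

print(standardized_mean_difference(x, treat))        # close to 0
print(ks_2samp(x[treat == 1], x[treat == 0]).statistic)  # small KS statistic
```

The SMD compares only means; the KS statistic compares the full empirical distributions, which is why the reviewer in the dialogue asks for both.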
However, not everyone agrees that this advantage is so important, or more important than consistency and valid asymptotic inference. There can never be consensus on this matter, because consensus requires knowing the truth, and science (including statistics research) is about searching for an inherently unknowable truth (i.e., the true parameters that govern or describe our world). That is, if we knew the true causal effect, we could know the best method to estimate it, but we don't, so we can't. We can only do our best using the knowledge we have and try to manage the inherent constraints and tradeoffs as well as we can as we fumble around in the dark using the pinpoint of light the universe has shown us.
*Only when using a special method of inference for matched samples.
Snowden JM, Rose S, Mortimer KM. Implementation of G-Computation on a Simulated Data Set: Demonstration of a Causal Inference Technique. Am J Epidemiol. 2011;173(7):731–738.
van der Laan MJ, Polley EC, Hubbard AE. Super Learner. Statistical Applications in Genetics and Molecular Biology [electronic article]. 2007;6(1). (https://www.degruyter.com/view/j/sagmb.2007.6.issue-1/sagmb.2007.6.1.1309/sagmb.2007.6.1.1309.xml). (Accessed October 8, 2019)
Daniel RM. Double Robustness. In: Wiley StatsRef: Statistics Reference Online. American Cancer Society; 2018:1–14. (http://onlinelibrary.wiley.com/doi/abs/10.1002/9781118445112.stat08068). (Accessed November 9, 2018)
Gruber S, van der Laan MJ. Targeted Maximum Likelihood Estimation: A Gentle Introduction. 2009;17.
Zivich PN, Breskin A. Machine Learning for Causal Inference: On the Use of Cross-fit Estimators. Epidemiology. 2021;32(3):393–401.
Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70(1):41–55.
King G, Nielsen R. Why Propensity Scores Should Not Be Used for Matching. Polit. Anal. 2019;1–20.
Diamond A, Sekhon JS. Genetic matching for estimating causal effects: A general multivariate matching method for achieving balance in observational studies. Review of Economics and Statistics. 2013;95(3):932–945.
Huling JD, Mak S. Energy Balancing of Covariate Distributions. arXiv:2004.13962 [stat] [electronic article]. 2020;(http://arxiv.org/abs/2004.13962). (Accessed December 22, 2020)
Stuart EA, Green KM. Using full matching to estimate causal effects in nonexperimental studies: Examining the relationship between adolescent marijuana use and adult outcomes. Developmental Psychology. 2008;44(2):395–406.
Abadie A, Spiess J. Robust Post-Matching Inference. Journal of the American Statistical Association. 2020;0(ja):1–37.
Branson Z. Randomization Tests to Assess Covariate Balance When Designing and Analyzing Matched Datasets. Observational Studies. 2021;7:44–80.
Zubizarreta JR, Paredes RD, Rosenbaum PR. Matching for balance, pairing for heterogeneity in an observational study of the effectiveness of for-profit and not-for-profit high schools in Chile. The Annals of Applied Statistics. 2014;8(1):204–231.
Greifer N, Stuart EA. Choosing the Estimand When Matching or Weighting in Observational Studies. arXiv:2106.10577 [stat] [electronic article]. 2021;(http://arxiv.org/abs/2106.10577). (Accessed September 17, 2021)
Distribution that has a range from 0 to 1 and with peak between them?
One possible choice is the beta distribution, but re-parametrized in terms of mean $\mu$ and precision $\phi$, that is, "for fixed $\mu$, the larger the value of $\phi$, the smaller the variance of $y$" (see Ferrari, and Cribari-Neto, 2004). The probability density function is constructed by replacing the standard parameters of beta distribution with $\alpha = \phi\mu$ and $\beta = \phi(1-\mu)$
$$
f(y) = \frac{1}{\mathrm{B}(\phi\mu,\; \phi(1-\mu))}\; y^{\phi\mu-1} (1-y)^{\phi(1-\mu)-1}
$$
where $E(Y) = \mu$ and $\mathrm{Var}(Y) = \frac{\mu(1-\mu)}{1+\phi}$.
Alternatively, you can calculate appropriate $\alpha$ and $\beta$ parameters that would lead to beta distribution with pre-defined mean and variance. However, notice that there are restrictions on possible values of variance that are valid for beta distribution. For me personally, the parametrization using precision is more intuitive (think of $x\,/\,\phi$ proportions in binomially distributed $X$, with sample size $\phi$ and the probability of success $\mu$).
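As a quick sketch of this parametrization (my own illustration, not part of the original answer), the conversion from $(\mu, \phi)$ back to the standard shape parameters, with a sanity check of the resulting moments, looks like this:

```python
import numpy as np

def beta_mean_precision(mu, phi, size, rng=None):
    """Sample a Beta variable parametrized by mean mu and precision phi,
    by converting back to the standard shape parameters alpha and beta."""
    rng = np.random.default_rng(rng)
    alpha = phi * mu
    beta = phi * (1 - mu)
    return rng.beta(alpha, beta, size=size)

draws = beta_mean_precision(mu=0.7, phi=50, size=100_000, rng=0)
print(draws.mean())  # close to mu = 0.7
print(draws.var())   # close to mu*(1-mu)/(1+phi) = 0.21/51
```

Larger $\phi$ concentrates the samples around $\mu$, matching the "precision" interpretation.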
Kumaraswamy distribution is another bounded continuous distribution, but it would be harder to re-parametrize like above.
As others have noticed, it is not normal, since the normal distribution has $(-\infty, \infty)$ support, so at best you could use the truncated normal as an approximation.
Ferrari, S., & Cribari-Neto, F. (2004). Beta regression for modelling rates and proportions. Journal of Applied Statistics, 31(7), 799-815.
Distribution that has a range from 0 to 1 and with peak between them?
I transform to create this kind of variable. Start with a random variable, $x$, which has support on the whole real line (like the normal), and then transform it to make a new random variable $y=\frac{\exp(x)}{1+\exp(x)}$. Presto, you have a random variable distributed on the unit interval. Since this particular transformation is increasing, you can move the mean/median/mode of $y$ around by moving the mean/median/mode of $x$ around. Want to make $y$ more dispersed (in terms of inter-quartile range, say)? Just make $x$ more dispersed.
There is nothing special about the function $\frac{\exp(x)}{1+\exp(x)}$. Any cumulative distribution function works to produce a new random variable defined on the unit interval.
So, any random variable transformed by plugging it into any cdf ($y=F(x)$) does what you want---makes an r.v. distributed on the unit interval whose properties you can conveniently adjust by adjusting the parameters of the untransformed random variable in an intuitive way. As long as $F()$ is strictly monotonic, the transformed variable will, in several ways, look like the untransformed one. For example, you want $y$ to be a unimodal random variable on the unit interval. As long as $F()$ is strictly increasing and $x$ is unimodal, you get that. Increasing the median/mean/mode of $x$ increases the median/mean/mode of $y$. Increasing the interquartile range of $x$ (by moving the 25th percentile down and the 75th percentile up) increases the interquartile range of $y$. Strict monotonicity is a nice thing.
The formula for calculating the mean and sd of $y$ is perhaps not easy to find, but that's what Monte Carlo simulations are for. To get relatively pretty distributions like the ones you draw, you want $x$ and $F()$ to be continuous random variables (cdf of continuous random variables) with support on the real line.
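A minimal Monte Carlo sketch of this construction (my own illustration), pushing a normal variable through the logistic CDF:

```python
import numpy as np

rng = np.random.default_rng(0)

# Start with a normal variable, then push it through the logistic CDF
# y = exp(x) / (1 + exp(x)); the result lives on (0, 1) and is unimodal.
x = rng.normal(loc=1.0, scale=0.8, size=200_000)
y = 1 / (1 + np.exp(-x))  # numerically stable form of exp(x)/(1+exp(x))

# Strict monotonicity means quantiles map exactly:
# median(y) = logistic(median(x))
print(np.median(y))  # close to 1/(1 + exp(-1))
```

Shifting `loc` moves the peak of $y$ around the unit interval; increasing `scale` spreads it out, exactly as described above.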
Distribution that has a range from 0 to 1 and with peak between them?
|
This answer is for you if you do not know what your mean/maximum will be and you want your program to be able to handle any mean and freely set variance.
Disclaimer: I am not a Mathematician, my answer works but the explanations might be incomplete/incorrect and not mathematically specific.
All of the comments and answers above about using the beta distribution reparametrized for standard deviation and mean do not work here: if you set a fixed variance, you can run into trouble because alpha and beta can become less than one, which creates bimodal distributions. Thus, I came up with this.
You can use the beta distribution and by setting alpha (or beta when mean > 0.5) to a number above 1 to ensure that the distribution stays unimodal, and then calculating beta (or alpha when mean > 0.5) from the mean and alpha (or beta).
However, maximum, mean and median will not be the same.
While the maximum as you indicated will be at e.g. 0.9 the mean will not be at 0.9 instead it will be slightly lower.
You can also set the mean to 0.9, but then the maximum will be slightly higher (see graphs at the bottom with mean = 0.8). How much higher/lower it will be depends on how high you set alpha (or beta). However, the variance is limited for means/modes that are very close to 0 or 1: e.g. if you set the mean to 0.9999, you cannot set a variance that will likely return numbers like 0.9, as there is almost no room above 0.9999 to balance such low draws (see last graph). Basically, the maximum variance is tied to how close the mean is to 0 or 1.
If you want to select a certain variance you could for example calibrate it once by setting alpha and beta from mean (0.5) and variance (see here: https://stats.stackexchange.com/a/12239/305461). And then just set the "narrowness" in my code to whatever alpha/beta became.
To calculate this I took the formula for the mean and solved for alpha and beta respectively, using the Wikipedia page on beta distributions: en.wikipedia.org/wiki/Beta_distribution
To set the mode (maximum) I found out it works by just adding 1 to each alpha and beta, after having calculated it with mean. You can probably also do it more elegantly by solving for alpha and beta with the equation for the mode.
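The "add 1" trick can also be checked algebraically: a Beta(a, b) has mean a/(a+b) and, for a, b > 1, mode (a-1)/(a+b-2), so Beta(a+1, b+1) has its mode exactly at a/(a+b). A quick numerical check (a Python sketch mirroring the R logic below; "narrowness" as in that code):

```python
def shape_params_from_mean(mean, narrowness):
    # Mirror of the R code: fix the smaller shape parameter ("narrowness"),
    # solve the other one from mean = alpha / (alpha + beta).
    if mean < 0.5:
        alpha = narrowness
        beta = alpha * (1 - mean) / mean
    else:
        beta = narrowness
        alpha = beta * mean / (1 - mean)
    return alpha, beta

m, k = 0.7, 2.0
a, b = shape_params_from_mean(m, k)
assert abs(a / (a + b) - m) < 1e-12             # mean of Beta(a, b) is m
mode = ((a + 1) - 1) / ((a + 1) + (b + 1) - 2)  # mode of Beta(a+1, b+1)
assert abs(mode - m) < 1e-12                    # ...sits exactly at m
```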
Results
Mean (red), median (purple) mode/maximum (blue) (left to right).
This is how the distribution looks with a set maximum (mode), however mean and median are both lower. As Beta is set to higher numbers the variance will be lower and the more it will look like a symmetrical normal distribution (truncated), this will also mean that mean, median and mode will be closer.
Mean (red), median (purple) mode/maximum (blue).
Median and mode are higher than the mean.
Because someone in one of the answers thought this couldn't work, I added the last plot to showcase that this works for any mean. It works as long as alpha and beta are above 1, and since one of the two is always fixed at the set "narrowness", there are no weird edge cases.
Since I wanted to draw from a distribution where Mode and Mean are the same I asked another question which you can find here: Random number (between 0 & 1; > 5 decimal places) from binomial/beta-like distribution, with set mean (same as mode & median) and set variance
Code for Beta Distribution (in R):
mean <- 0.7
#what I call "narrowness" is an invented parameter; it will become the lower one of the alpha/beta values
narrowness <- 2 #if you set narrowness higher it will narrow the pdf; below 1.5 it might lead to unintuitive output with maximum being super close to 1 or 0
#To calibrate narrowness to your liking.
# mean<-0.5 #leave mean at 0.5
# var<-0.05 #set variance to whatever you want; however, if you go too high, alpha/beta will become less than 1
# narrowness <- ((1 - mean) / var - 1 / mean) * mean ^ 2 #this is how you would calculate alpha/beta if mean is 0.5
if (mean < 0.5) {
alpha <- narrowness
beta <- ((-alpha*mean)+alpha)/mean
} else {
beta <- narrowness
alpha <- (-beta*mean)/(mean-1)
}
print(c(alpha,beta))
numbers_drawn <- 1000000
#if you want the mode/maximum to be e.g. 0.8, set mean to 0.8 and add 1 each to alpha and beta, however your mean is not gonna be 0.8 anymore, see below
distribution <- stats::rbeta(numbers_drawn, alpha+1, beta+1, ncp = 0) #ncp = 0 is default and changing it will push the distribution in towards right/left, I have not tried it out
#if you want the mean to be 0.8 just leave as it is below and comment line above
#distribution<-stats::rbeta(numbers_drawn, alpha, beta, ncp = 0)
#var<-(mean^3-2*mean^2+mean)/(beta-mean+1) #calculate var just from beta and mean
#calculate mode (most common number/maximum of the pdf)
dist <- round(distribution, digits = 2) #if you set really narrow pdfs you need to round to more digits to get an accurate mode
uniqv <- unique(dist) #groups same numbers
mode <- uniqv[which.max(tabulate(match(dist, uniqv)))] #which number occurs most often
print(mode)
cutoff <- 0 #allows you to cutoff for plotting purposes (to "zoom in" to a specific area)
hist(subset(distribution, distribution > cutoff), breaks = seq(cutoff, 1, 0.005), main = paste("Mode =", mode, ", n = 1,000,000, Alpha & Beta =", round(alpha, digits = 2), "&", round(beta, digits = 2)), xlab = "")
#uncomment line below if you want to set mean, and comment line above
#hist(subset(distribution, distribution > cutoff), breaks = seq(cutoff,1,0.005), main = paste("Mean =", mean(distribution), ", n = 1,000,000, Alpha & Beta =", round(alpha, digits = 2), "&", round(beta, digits = 2)), xlab = "")
abline(v = c(mean(distribution), median(distribution), mode ), col = c("red", "purple", "blue"), lwd = 2) #plot vertical lines
##############Drawing just on random number##################
numbers_drawn <- 1
stats::rbeta(numbers_drawn, alpha+1, beta+1, ncp = 0)
#uncomment line below if you want to set mean, and comment line above
#stats::rbeta(numbers_drawn, alpha, beta, ncp = 0)
|
11,912
|
Distribution that has a range from 0 to 1 and with peak between them?
|
If somebody is interested: here is the solution I used in Python for generating a random value close to a given number passed as a parameter. My solution consists of four stages; with each stage, the chance that the generated number is close to the given number increases.
I know the solution is not as elegant as using one distribution, but this was the way I was able to solve my problem:
number_factory.py:
import random
import numpy as np
class NumberFactory:
    def __init__(self):
        self.functions = [self.__linear, self.__exponential_point_four, self.__exponential_point_three, self.__exponential_point_twenty_five]
        self.stage = 0

    def next_stage(self):
        self.stage += 1

    def get_mutated_number(self, number):
        # True if the generated number will be higher than the given number
        # False if the generated number will be lower than the given number
        add = bool(np.random.choice([0, 1], p=[number, 1 - number]))

        # Generate a number between 0 and 1 that will be used to scale the
        # amount by which the given number is increased or decreased.
        # The higher the stage number (0-3), the greater the chance that the
        # mutated number is close to the given number.
        multiply_number_seed = random.uniform(0, 1)
        multiply_number = self.functions[self.stage](multiply_number_seed)

        if add:
            return number + ((1 - number) * multiply_number)
        else:
            return number - (number * multiply_number)

    def __linear(self, x):
        return -x + 1

    def __exponential_point_four(self, x):
        return 0.4 * x ** 2 - 1.4 * x + 1

    def __exponential_point_three(self, x):
        return 0.8 * x ** 2 - 1.8 * x + 1

    def __exponential_point_twenty_five(self, x):
        return x ** 2 - 2 * x + 1

    def get_stage(self):
        return self.stage
main.py:
import matplotlib.pyplot as plt

from number_factory import NumberFactory

factory = NumberFactory()
numbers = []
factory.next_stage()
factory.next_stage()
factory.next_stage()

for _ in range(100000):
    numbers.append(factory.get_mutated_number(0.3))

bins = 100
plt.hist(numbers, bins, density=True)  # 'normed' was removed in newer matplotlib; use density
plt.show()
The result of executing this code is shown in the picture below:
|
11,913
|
Distribution that has a range from 0 to 1 and with peak between them?
|
You might want to take a look at 'Johnson curves'. See N. L. Johnson, "Systems of Frequency Curves Generated by Methods of Translation," Biometrika 36 (1949), pp. 149–176. R has support for fitting them to arbitrary data. In particular his SB (bounded) curves might be useful.
It's 40 years since I used them, but they were very useful to me at the time, and I think they will work for you.
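As a rough illustration of what an SB variable looks like (a Python sketch, not the R fitting routines alluded to above; the parameter values are arbitrary): the SB family is defined by $z = \gamma + \delta \ln\frac{x-\xi}{\xi+\lambda-x}$ with $z \sim N(0,1)$, so sampling is just a matter of inverting that transform.

```python
import numpy as np

rng = np.random.default_rng(1)

def johnson_sb(gamma, delta, xi=0.0, lam=1.0, size=1):
    """Invert z = gamma + delta * log((x - xi) / (xi + lam - x)), z ~ N(0, 1).
    The result is bounded: xi < x < xi + lam."""
    z = rng.standard_normal(size)
    return xi + lam / (1.0 + np.exp(-(z - gamma) / delta))

x = johnson_sb(gamma=-0.5, delta=1.5, size=100_000)  # bounded on (0, 1)
```

With $\xi=0$ and $\lambda=1$ the support is the unit interval; $\gamma$ shifts the peak and $\delta$ controls how concentrated it is. SciPy also ships this family as `scipy.stats.johnsonsb`.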
|
11,914
|
What statistical model or algorithm could be used to solve the John Snow Cholera problem?
|
Not to give a complete or authoritative answer, but just to stimulate ideas, I will report on a quick analysis I made for a lab exercise in a spatial stats course I was teaching ten years ago. The purpose was to see what effect an accurate accounting of likely travel pathways (on foot), compared to using Euclidean distances, would have on a relatively simple exploratory method: a kernel density estimate. Where would the peak (or peaks) of the density be relative to the pump whose handle Snow removed?
Using a fairly high-resolution raster representation (2946 rows by 3160 columns) of Snow's map (properly georeferenced), I digitized each of the hundreds of little black coffins shown on the map (finding 558 of them at 309 addresses), assigning each to the edge of the street corresponding to its address, and summarizing by address into a count at each location.
After some image processing to identify the streets and alleyways, I conducted a simple Gaussian diffusion limited to those areas (using repeated focal means in a GIS). This is the KDE.
The result speaks for itself--it scarcely even needs a legend to explain it. (The map shows many other pumps, but they all lie outside this view, which focuses on the areas of highest density.)
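The "repeated focal means limited to those areas" step can be mimicked in a few lines (a toy Python sketch of the idea only, not the GIS workflow used for the map; the mask and counts here are made up):

```python
import numpy as np

def street_kde(counts, street_mask, n_iter=3):
    """Toy 'repeated focal mean' smoother: each pass replaces every street
    cell by the mean of itself and its street neighbours, so density
    diffuses along streets and never leaks across blocks."""
    rows, cols = counts.shape
    density = counts.astype(float) * street_mask
    for _ in range(n_iter):
        padded = np.pad(density, 1)
        pmask = np.pad(street_mask.astype(float), 1)
        total = np.zeros((rows, cols))
        weight = np.zeros((rows, cols))
        for dr in range(3):          # sum over the 3x3 focal window
            for dc in range(3):
                total += padded[dr:dr + rows, dc:dc + cols]
                weight += pmask[dr:dr + rows, dc:dc + cols]
        density = np.zeros((rows, cols))
        np.divide(total, weight, out=density, where=weight > 0)
        density *= street_mask       # off-street cells carry no density
    return density

# One straight street through a 9x9 block, five deaths at one address:
mask = np.zeros((9, 9)); mask[4, :] = 1.0
deaths = np.zeros((9, 9)); deaths[4, 4] = 5.0
kde = street_kde(deaths, mask, n_iter=3)
```

The point of restricting the diffusion is exactly what the map shows: density spreads along walkable paths rather than as Euclidean circles.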
|
11,915
|
What statistical model or algorithm could be used to solve the John Snow Cholera problem?
|
In [1,§3.2], David Freedman suggests an essentially negative answer to your question. That is, no (mere) statistical model or algorithm could solve John Snow's problem. Snow's problem was to develop a critical argument supporting his theory that cholera is a water-borne infectious disease, against the prevailing miasma theory of his day. (Chapter 3 in [1], titled “Statistical Models and Shoe Leather,” is also available in previously published form [2] here.)
In these few short pages [1, pp.47–53], much of which is an extended quote from John Snow himself, Freedman argues that "what Snow actually did in 1853–54 is even more interesting than the fable [of the Broad Street Pump]." As far as marshalling statistical evidence (other preliminaries such as index case identification, etc., are discussed besides), Snow exploited natural variation to effect a truly remarkable quasi-experiment.
It turns out that at an earlier time, there was a vigorous competition among water supply companies in London, and this resulted in spatial mixing of the water supply that was (in Snow's words) "of the most intimate kind."
The pipes of each Company go down all the streets, and into nearly all the courts and alleys. A few houses are supplied by one Company and a few by the other, according to the decision of the owner or occupier at that time when the Water Companies were in active competition.
...
As there is no difference whatever in the houses or the people receiving the supply of the two Water Companies, or in any of the physical conditions with which they are surrounded, it is obvious that no experiment could have been devised which would more thoroughly test the effect of water supply on the progress of cholera than this, which circumstances placed ready made before the observer.
—John Snow
Another critically important bit of 'natural variation' John Snow exploited in this quasi-experiment was that one water company had its water intake on the Thames downstream of sewage discharges, whereas the other had a few years before relocated its intake upstream. I'll let you guess which was which from John Snow's data table!
                     |  Number of  |  Cholera  |  Deaths per
Company              |  houses     |  deaths   |  10,000 houses
---------------------------------------------------------------
Southwark & Vauxhall |  40,046     |  1263     |  315
Lambeth              |  26,107     |  98       |  37
Rest of London       |  256,423    |  1422     |  59
As Freedman notes witheringly,
As a piece of statistical technology, [the above table] is by no means remarkable. But the story it tells is very persuasive. The force of the argument results from the clarity of the prior reasoning, the bringing together of many different lines of evidence, and the amount of shoe leather Snow was willing to use to get the data. [1, p.51]
One further point of natural variation exploited by Snow occurred in the time dimension: the abovementioned water intake relocation occurred between two epidemics, enabling Snow to compare the same company's water with and without added sewage. (Thanks to Philip B. Stark, one author of [1], for this info via Twitter. See this online lecture of his.)
This matter also provides an instructive study in the contrast between deductivism and inductivism, as discussed in this answer.
Freedman D, Collier D, Sekhon JS, Stark PB. Statistical Models and Causal Inference: A Dialogue with the Social Sciences. Cambridge ; New York: Cambridge University Press; 2010.
Freedman DA. Statistical Models and Shoe Leather. Sociological Methodology. 1991;21:291-313. doi:10.2307/270939. Full text
|
11,916
|
How does bootstrapping in R actually work?
|
There are several "flavours" or forms of the bootstrap (e.g. non-parametric, parametric, residual resampling and many more). The bootstrap in the example is called a non-parametric bootstrap, or case resampling (see here, here, here and here for applications in regression).
The basic idea is that you treat your sample as the population and repeatedly draw new samples from it with replacement. All original observations have equal probability of being drawn into the new sample. You then calculate and store the statistic(s) of interest (say, the mean, the median or regression coefficients) using the newly drawn sample. This is repeated $n$ times. In each iteration, some observations from your original sample are drawn multiple times while some observations may not be drawn at all.
After $n$ iterations, you have $n$ stored bootstrap estimates of the statistic(s) of interest (e.g. if $n=1000$ and the statistic of interest is the mean, you have 1000 bootstrapped estimates of the mean). Lastly, summary statistics such as the mean, median and standard deviation of the $n$ bootstrap estimates are calculated.
Bootstrapping is often used for:
Calculation of confidence intervals (and estimation of the standard errors)
Estimation of the bias of the point estimates
There are several methods for calculating confidence intervals based on the bootstrap samples (this paper provides explanation and guidance). One very simple method for calculating a 95%-confidence interval is just calculating the empirical 2.5th and 97.5th percentiles of the bootstrap samples (this interval is called the bootstrap percentile interval; see code below). The simple percentile interval method is rarely used in practice as there are better methods, such as the bias-corrected and accelerated bootstrap (BCa). BCa intervals adjust for both bias and skewness in the bootstrap distribution.
The bias is simply estimated as the difference between the mean of the $n$ stored bootstrap estimates and the original estimate(s).
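Both uses can be sketched in a few lines before turning to the regression example (a Python sketch; the exponential "data" is simulated only to have something to resample, and the statistic here is just the mean):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=200)  # stand-in for an observed sample
n_boot = 2000

# Case resampling: draw n observations WITH replacement and recompute the
# statistic of interest for each bootstrap sample.
boot = np.array([rng.choice(data, size=data.size, replace=True).mean()
                 for _ in range(n_boot)])

point_estimate = data.mean()
boot_se = boot.std(ddof=1)               # bootstrap standard error
ci = np.percentile(boot, [2.5, 97.5])    # simple percentile 95% CI
bias = boot.mean() - point_estimate      # bootstrap estimate of bias
```

For the sample mean the bias estimate comes out near zero; for statistics such as ratios or correlations it is usually the more interesting quantity.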
Let's replicate the example from the website but using our own loop incorporating the ideas I've outlined above (drawing repeatedly with replacement):
#-----------------------------------------------------------------------------
# Load packages
#-----------------------------------------------------------------------------
require(ggplot2)
require(pscl)
require(MASS)
require(boot)
#-----------------------------------------------------------------------------
# Load data
#-----------------------------------------------------------------------------
zinb <- read.csv("http://www.ats.ucla.edu/stat/data/fish.csv")
zinb <- within(zinb, {
nofish <- factor(nofish)
livebait <- factor(livebait)
camper <- factor(camper)
})
#-----------------------------------------------------------------------------
# Calculate zero-inflated regression
#-----------------------------------------------------------------------------
m1 <- zeroinfl(count ~ child + camper | persons, data = zinb,
dist = "negbin", EM = TRUE)
#-----------------------------------------------------------------------------
# Store the original regression coefficients and standard errors
#-----------------------------------------------------------------------------
original.estimates <- as.vector(t(do.call(rbind, coef(summary(m1)))[, 1:2]))
#-----------------------------------------------------------------------------
# Set the number of replications
#-----------------------------------------------------------------------------
n.sim <- 2000
#-----------------------------------------------------------------------------
# Set up a matrix to store the results
#-----------------------------------------------------------------------------
store.matrix <- matrix(NA, nrow=n.sim, ncol=12)
#-----------------------------------------------------------------------------
# The loop
#-----------------------------------------------------------------------------
set.seed(123)
for(i in 1:n.sim) {
#-----------------------------------------------------------------------------
# Draw the observations WITH replacement
#-----------------------------------------------------------------------------
data.new <- zinb[sample(1:dim(zinb)[1], dim(zinb)[1], replace=TRUE),]
#-----------------------------------------------------------------------------
# Calculate the model with this "new" data
#-----------------------------------------------------------------------------
m <- zeroinfl(count ~ child + camper | persons,
data = data.new, dist = "negbin",
start = list(count = c(1.3711, -1.5152, 0.879),
zero = c(1.6028, -1.6663)))
#-----------------------------------------------------------------------------
# Store the results
#-----------------------------------------------------------------------------
store.matrix[i, ] <- as.vector(t(do.call(rbind, coef(summary(m)))[, 1:2]))
}
#-----------------------------------------------------------------------------
# Save the means, medians and SDs of the bootstrapped statistics
#-----------------------------------------------------------------------------
boot.means <- colMeans(store.matrix, na.rm=T)
boot.medians <- apply(store.matrix,2,median, na.rm=T)
boot.sds <- apply(store.matrix,2,sd, na.rm=T)
#-----------------------------------------------------------------------------
# The bootstrap bias is the difference between the mean bootstrap estimates
# and the original estimates
#-----------------------------------------------------------------------------
boot.bias <- colMeans(store.matrix, na.rm=T) - original.estimates
#-----------------------------------------------------------------------------
# Percentile bootstrap CIs based on the empirical 2.5th/97.5th quantiles
#-----------------------------------------------------------------------------
conf.mat <- matrix(apply(store.matrix, 2, quantile, c(0.025, 0.975), na.rm=T),
           ncol=2, byrow=TRUE)
colnames(conf.mat) <- c("95%-CI Lower", "95%-CI Upper")
And here is our summary table:
#-----------------------------------------------------------------------------
# Set up summary data frame
#-----------------------------------------------------------------------------
summary.frame <- data.frame(mean=boot.means, median=boot.medians,
sd=boot.sds, bias=boot.bias, "CI_lower"=conf.mat[,1], "CI_upper"=conf.mat[,2])
summary.frame
mean median sd bias CI_lower CI_upper
1 1.2998 1.3013 0.39674 -0.0712912 0.51960 2.0605
2 0.2527 0.2486 0.03208 -0.0034461 0.19898 0.3229
3 -1.5662 -1.5572 0.26220 -0.0509239 -2.12900 -1.0920
4 0.2005 0.1986 0.01949 0.0049019 0.16744 0.2418
5 0.9544 0.9252 0.48915 0.0753405 0.03493 1.9025
6 0.2702 0.2688 0.02043 0.0009583 0.23272 0.3137
7 -0.8997 -0.9082 0.22174 0.0856793 -1.30664 -0.4380
8 0.1789 0.1781 0.01667 0.0029513 0.14494 0.2140
9 2.0683 1.7719 1.59102 0.4654898 0.44150 8.0471
10 4.0209 0.8270 13.23434 3.1845710 0.58114 57.6417
11 -2.0969 -1.6717 1.56311 -0.4306844 -8.43440 -1.1156
12 3.8660 0.6435 13.27525 3.1870642 0.33631 57.6062
Some explanations
The difference between the mean of the bootstrap estimates and the original estimates is what is called "bias" in the output of boot
What the output of boot calls "std. error" is the standard deviation of the bootstrapped estimates
Compare it with the output from boot:
#-----------------------------------------------------------------------------
# Compare with boot output and confidence intervals
#-----------------------------------------------------------------------------
set.seed(10)
# 'f' is the statistic function passed to boot in the original UCLA example
res <- boot(zinb, f, R = 2000, parallel = "snow", ncpus = 4)
res
Bootstrap Statistics :
original bias std. error
t1* 1.3710504 -0.076735010 0.39842905
t2* 0.2561136 -0.003127401 0.03172301
t3* -1.5152609 -0.064110745 0.26554358
t4* 0.1955916 0.005819378 0.01933571
t5* 0.8790522 0.083866901 0.49476780
t6* 0.2692734 0.001475496 0.01957823
t7* -0.9853566 0.083186595 0.22384444
t8* 0.1759504 0.002507872 0.01648298
t9* 1.6031354 0.482973831 1.58603356
t10* 0.8365225 3.240981223 13.86307093
t11* -1.6665917 -0.453059768 1.55143344
t12* 0.6793077 3.247826469 13.90167954
perc.cis <- matrix(NA, nrow=dim(res$t)[2], ncol=2)
for( i in 1:dim(res$t)[2] ) {
perc.cis[i,] <- boot.ci(res, conf=0.95, type="perc", index=i)$percent[4:5]
}
colnames(perc.cis) <- c("95%-CI Lower", "95%-CI Upper")
perc.cis
95%-CI Lower 95%-CI Upper
[1,] 0.52240 2.1035
[2,] 0.19984 0.3220
[3,] -2.12820 -1.1012
[4,] 0.16754 0.2430
[5,] 0.04817 1.9084
[6,] 0.23401 0.3124
[7,] -1.29964 -0.4314
[8,] 0.14517 0.2149
[9,] 0.29993 8.0463
[10,] 0.57248 56.6710
[11,] -8.64798 -1.1088
[12,] 0.33048 56.6702
#-----------------------------------------------------------------------------
# Our summary table
#-----------------------------------------------------------------------------
summary.frame
mean median sd bias CI_lower CI_upper
1 1.2998 1.3013 0.39674 -0.0712912 0.51960 2.0605
2 0.2527 0.2486 0.03208 -0.0034461 0.19898 0.3229
3 -1.5662 -1.5572 0.26220 -0.0509239 -2.12900 -1.0920
4 0.2005 0.1986 0.01949 0.0049019 0.16744 0.2418
5 0.9544 0.9252 0.48915 0.0753405 0.03493 1.9025
6 0.2702 0.2688 0.02043 0.0009583 0.23272 0.3137
7 -0.8997 -0.9082 0.22174 0.0856793 -1.30664 -0.4380
8 0.1789 0.1781 0.01667 0.0029513 0.14494 0.2140
9 2.0683 1.7719 1.59102 0.4654898 0.44150 8.0471
10 4.0209 0.8270 13.23434 3.1845710 0.58114 57.6417
11 -2.0969 -1.6717 1.56311 -0.4306844 -8.43440 -1.1156
12 3.8660 0.6435 13.27525 3.1870642 0.33631 57.6062
Compare the "bias" columns and the "std. error" with the "sd" column of our own summary table. Our 95%-confidence intervals are very similar to the confidence intervals calculated by boot.ci using the percentile method (not all though: look at the lower limit of parameter with index 9).
|
How does bootstrapping in R actually work?
|
You should focus on the function that is passed to boot as the "statistic" parameter and notice how it is constructed.
f <- function(data, i) {
require(pscl)
m <- zeroinfl(count ~ child + camper | persons,
data = data[i, ], dist = "negbin",
start = list(count = c(1.3711, -1.5152, 0.879), zero = c(1.6028, -1.6663)))
as.vector(t(do.call(rbind, coef(summary(m)))[, 1:2]))
}
The "data" argument is going to receive an entire data frame, but the "i" argument is going to receive a sample of row indices generated by the "boot" and taken from 1:NROW(data). As you can see from that code, "i" is then used to create a neo-sample which is passed to zeroinl and then only selected portions of it's results are returned.
Let's imagine that "i" is {1,2,3,3,3,6,7,7,10}. The "[" function will return just those rows, with 3 copies of row 3 and 2 copies of row 7. That would be the basis for a single zeroinfl() calculation, and then the coefficients will be returned to boot as the result from that replicate of the process. The number of such replicates is controlled by the "R" parameter.
Since only the regression coefficients are returned from the statistic function in this case, the boot function will return these accumulated coefficients as the value of "t". Further comparisons can be performed by other boot-package functions.
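The index mechanism is easier to see in isolation with a toy statistic; the data and function names below are made up for illustration:

```r
library(boot)

dat <- data.frame(v = c(2, 4, 6, 8, 10))

f.mean <- function(data, i) {
  # 'i' is a vector of row indices (drawn with replacement by boot);
  # 'data[i, ]' is the resampled data set for this replicate
  mean(data[i, "v"])
}

set.seed(1)
res <- boot(dat, f.mean, R = 1000)
res$t0       # the statistic computed on the original rows
head(res$t)  # bootstrap replicates of the statistic
```

The zeroinfl() version works the same way; it just returns twelve numbers per replicate instead of one.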
|
PCA of non-Gaussian data
|
You have a couple of good answers here already (+1 to both @Cam.Davidson.Pilon & @MichaelChernick). Let me throw out a couple of points that help me to think about this issue.
First, PCA operates over the correlation matrix. Thus, it seems to me the important question is whether it makes sense to use a correlation matrix to help you think about your data. For example, the Pearson product-moment correlation assesses the linear relationship between two variables; if your variables are related, but not linearly, the correlation is not an ideal metric to index the strength of the relationship. (Here is a nice discussion on CV about correlation and non-normal data.)
Second, I think the easiest way to understand what is going on with PCA is that you are simply rotating your axes. You can do more things, of course, and unfortunately PCA gets confused with factor analysis (which definitely does have more going on). Nevertheless, plain old PCA with no bells and whistles, can be thought of as follows:
you have some points plotted in two dimensions on a sheet of graph paper;
you have a transparency with orthogonal axes drawn on it, and a pinhole at the origin;
you center the origin of the transparency (i.e., the pinhole) over $(\bar x, \bar y)$ and put the tip of your pencil through the pinhole to hold it in place;
then you rotate the transparency until the points (when indexed according to the transparency's axes instead of the original ones) are uncorrelated.
This isn't a perfect metaphor for PCA (e.g., we didn't rescale the variances to 1), but it does give people the basic idea. The point now is to use that image to think about what the result would look like if the data weren't Gaussian to begin with; that will help you decide whether this process was worth doing. Hope that helps.
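The rotation picture can be checked numerically: the prcomp scores on the new axes are uncorrelated even when the original variables are strongly correlated (simulated data, for illustration only):

```r
set.seed(42)
x <- rnorm(200)
y <- 0.7 * x + rnorm(200, sd = 0.5)    # a correlated pair of variables
pca <- prcomp(cbind(x, y), center = TRUE)
cor(x, y)              # clearly non-zero
round(cor(pca$x), 10)  # off-diagonal correlations of the scores are 0
```

Whether that decorrelated view is a *useful* summary of non-Gaussian data is exactly the judgment call discussed above.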
|
PCA of non-Gaussian data
|
I can give a partial solution and answer the third question of your second paragraph, relating to whether the new data is correlated. The short answer is no, the data in the new space is not correlated. To see this, consider $w_1$ and $w_2$ as two distinct principal components. Then $Xw_1$ and $Xw_2$ are two dimensions in the new space of the data, $X$.
$$ {\rm Cov}( Xw_1, Xw_2 ) = E[ (Xw_1)^T(Xw_2) ] - E[Xw_1]^TE[Xw_2] $$
As $w_i$ are constant, the second term is 0 (as you said we demean $X$ prior). The first term can be rewritten as
$$ w_1^TE[X^TX]w_2 = {\rm Var}(X)w_1^Tw_2 = 0$$ as $w_i$ are orthonormal to each other, so the whole term is zero, assuming $Var(X)$ is finite. This was all independent of any assumption about normality.
I think the reliance on normality boils down to the whole debate over variance. Here's an intuitive argument: First, note that variance is a really good measure of "spread" for symmetric distributions, but it can fail when we consider skewed or asymmetric distributions. Now recall that PCA tries to maximize the variance in the projected dimension. If $X$ is normal, then $Xw$ is still normal, i.e. still symmetric, and variance works well. But if $X$ is not normal, like Poisson, the variance of $Xw$ need not be very descriptive.
To give an example where variance (and standard deviation) break down, consider the Pareto distribution. The variance drops quickly as $\alpha$ grows, but only because the data start to group around the small mean. Yet we know that we can easily see large swings with the Pareto distribution, something that a small variance does not describe well.
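This can be illustrated by simulation; base R has no Pareto generator, so a small inverse-transform sampler is defined here just for the purpose:

```r
set.seed(7)
# Pareto(alpha, x_m = 1) via inverse-transform sampling
rpareto <- function(n, alpha, xm = 1) xm / runif(n)^(1 / alpha)

x <- rpareto(1e5, alpha = 2.5)  # alpha > 2, so the variance is finite
sd(x)   # a modest standard deviation...
max(x)  # ...despite occasional draws far out in the tail
```

The standard deviation stays small while single draws can land one or two orders of magnitude above the mean, which is the sense in which variance fails to describe the spread.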
|
PCA of non-Gaussian data
|
There is no linearity or normality assumed in PCA. The idea is just decomposing the variation in a p-dimensional dataset into orthogonal components that are ordered according to amount of variance explained.
|
PCA of non-Gaussian data
|
Reading page 7 here:
http://www.cs.princeton.edu/picasso/mats/PCA-Tutorial-Intuition_jp.pdf
they note that PCA assumes that the distribution of whatever we are explaining can be described by a mean (of zero) and variance alone, which they say can only be the Normal distribution.
(Basically in addition to Cam's answer, but I don't have enough reputation to comment : )
|
PCA of non-Gaussian data
|
As far as I know, PCA doesn't assume normality of the data. But if the data are normally distributed (or, more generally, symmetrically distributed), then the result is more robust. As others have said, the key is that PCA is based on the Pearson correlation coefficient matrix, whose estimation is affected by outliers and skewed distributions. So in analyses that involve statistical tests or p-values, you should care more about whether normality is satisfied; but in other applications, like exploratory analysis, you can use it, only take care when making interpretations.
|
PCA of non-Gaussian data
|
Agreed with others who said the data should be normally distributed. Many distributions can be brought close to normal by a suitable transformation. If your distribution is not normal, the results you get will be inferior compared to the case when it is normal, as stated by others here.
You can transform your distribution if you need to.
Alternatively, you can skip PCA and use Independent Component Analysis (ICA) instead.
If you read the reference in the first answer, the Appendix states that the assumption is a normal distribution.
|
PCA of non-Gaussian data
|
Agreed with others who said data should be "Normally" distributed. Any distribution will overlap with a normal distribution if you transform it. If your distribution is not normal, the results you wil
|
PCA of non-Gaussian data
Agreed with others who said the data should be normally distributed. Many distributions can be brought close to normal by a suitable transformation. If your distribution is not normal, the results you get will be inferior compared to the case when it is normal, as stated by others here.
You can transform your distribution if you need to.
Alternatively, you can skip PCA and use Independent Component Analysis (ICA) instead.
If you read the reference in the first answer, the Appendix states that the assumption is a normal distribution.
|
PCA of non-Gaussian data
Agreed with others who said data should be "Normally" distributed. Any distribution will overlap with a normal distribution if you transform it. If your distribution is not normal, the results you wil
|
11,924
|
Calculating statistical power
|
This isn't an answer you are going to want to hear, I am afraid, but I am going to say it anyway: try to resist the temptation of online calculators (and save your money before purchasing proprietary calculators).
Here are some of the reasons why: 1) online calculators all use different notation and are often poorly documented. It is a waste of your time. 2) SPSS does offer a power calculator but I've never even tried it because it was too expensive for my department to afford! 3) Phrases like "medium effect size" are at best misleading and at worst just plain wrong for all but the simplest research designs. There are too many parameters and too much interplay to be able to distill effect size down to a single number in [0,1]. Even if you could put it into a single number, there's no guarantee that Cohen's 0.5 corresponds to "medium" in the context of the problem.
Believe me - it is better in the long run to bite the bullet and teach yourself how to use simulation to your benefit (and the benefit of the person(s) you're consulting). Sit down with them and complete the following steps:
1) Decide on a model that is appropriate in the context of the problem (sounds like you've already worked on this part).
2) Consult with them to decide what the null parameters should be, the behavior of the control group, whatever this means in context of the problem.
3) Consult with them to determine what the parameters should be in order for the difference to be practically meaningful. If there are sample size limitations then this should be identified here, as well.
4) Simulate data according to the two models in 2) and 3), and run your test. You can do this with software galore - pick your favorite and go for it. See if you rejected or not.
5) Repeat 4) thousands of times, say, $n$. Keep track of how many times you rejected, and the sample proportion $\hat{p}$ of rejections is an estimate of power. This estimate has standard error approximately $\sqrt{\hat{p}(1 - \hat{p})/n}$.
If you do your power analysis this way, you are going to find several things: A) there were a lot more parameters running around than you ever anticipated. It will make you wonder how in the world it's possible to collapse all of them into a single number like "medium" - and you will see that it isn't possible, at least not in any straightforward way. B) your power is going to be a lot smaller than a lot of the other calculators advertise. C) you can increase power by increasing sample size, but watch out! You may find as I have that in order to detect a difference that's "practically meaningful" you need a sample size that's prohibitively large.
If you have trouble with any of the above steps you could collect your thoughts, well-formulate a question for CrossValidated, and the people here will help you.
EDIT: In the case you find that you absolutely must use an online calculator, the best one I've found is Russ Lenth's Power and Sample Size page. It's been around for a long time, it has relatively complete documentation, it doesn't depend on canned effect sizes, and has links to other papers which are relevant and important.
ANOTHER EDIT: Coincidentally, when this question came up I was right in the middle of writing a blog post to flesh out some of these ideas (otherwise, I might not have answered so quickly). Anyway, I finished it last weekend and you can find it here. It is not written with SPSS in mind, but I'd bet if a person were clever they might be able to translate portions of it to SPSS syntax.
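The five simulation steps above can be sketched in a few lines. Here is a minimal Python version for a two-sample z-test with known standard deviation; the choice of test, effect size, and replication count are illustrative assumptions, not part of the original answer:

```python
import random
from statistics import NormalDist, mean

def simulated_power(n, effect, sd=1.0, alpha=0.05, reps=2000, seed=1):
    """Estimate power by simulation: steps 4) and 5) for a two-sample z-test."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    se_diff = sd * (2.0 / n) ** 0.5          # SE of the difference in means, sd known
    rejections = 0
    for _ in range(reps):
        control = [rng.gauss(0.0, sd) for _ in range(n)]     # null model, step 2)
        treated = [rng.gauss(effect, sd) for _ in range(n)]  # meaningful difference, step 3)
        z = (mean(treated) - mean(control)) / se_diff
        rejections += abs(z) > z_crit                        # reject or not, step 4)
    p_hat = rejections / reps                                # estimated power, step 5)
    se = (p_hat * (1 - p_hat) / reps) ** 0.5                 # its standard error
    return p_hat, se
```

Calling, say, `simulated_power(30, 0.5)` and varying `n` or `effect` shows immediately how power trades off against sample size and the practically meaningful difference.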
|
Calculating statistical power
|
This isn't an answer you are going to want to hear, I am afraid, but I am going to say it anyway: try to resist the temptation of online calculators (and save your money before purchasing proprietary
|
Calculating statistical power
This isn't an answer you are going to want to hear, I am afraid, but I am going to say it anyway: try to resist the temptation of online calculators (and save your money before purchasing proprietary calculators).
Here are some of the reasons why: 1) online calculators all use different notation and are often poorly documented. It is a waste of your time. 2) SPSS does offer a power calculator but I've never even tried it because it was too expensive for my department to afford! 3) Phrases like "medium effect size" are at best misleading and at worst just plain wrong for all but the simplest research designs. There are too many parameters and too much interplay to be able to distill effect size down to a single number in [0,1]. Even if you could put it into a single number, there's no guarantee that Cohen's 0.5 corresponds to "medium" in the context of the problem.
Believe me - it is better in the long run to bite the bullet and teach yourself how to use simulation to your benefit (and the benefit of the person(s) you're consulting). Sit down with them and complete the following steps:
1) Decide on a model that is appropriate in the context of the problem (sounds like you've already worked on this part).
2) Consult with them to decide what the null parameters should be, the behavior of the control group, whatever this means in context of the problem.
3) Consult with them to determine what the parameters should be in order for the difference to be practically meaningful. If there are sample size limitations then this should be identified here, as well.
4) Simulate data according to the two models in 2) and 3), and run your test. You can do this with software galore - pick your favorite and go for it. See if you rejected or not.
5) Repeat 4) thousands of times, say, $n$. Keep track of how many times you rejected, and the sample proportion $\hat{p}$ of rejections is an estimate of power. This estimate has standard error approximately $\sqrt{\hat{p}(1 - \hat{p})/n}$.
If you do your power analysis this way, you are going to find several things: A) there were a lot more parameters running around than you ever anticipated. It will make you wonder how in the world it's possible to collapse all of them into a single number like "medium" - and you will see that it isn't possible, at least not in any straightforward way. B) your power is going to be a lot smaller than a lot of the other calculators advertise. C) you can increase power by increasing sample size, but watch out! You may find as I have that in order to detect a difference that's "practically meaningful" you need a sample size that's prohibitively large.
If you have trouble with any of the above steps you could collect your thoughts, well-formulate a question for CrossValidated, and the people here will help you.
EDIT: In the case you find that you absolutely must use an online calculator, the best one I've found is Russ Lenth's Power and Sample Size page. It's been around for a long time, it has relatively complete documentation, it doesn't depend on canned effect sizes, and has links to other papers which are relevant and important.
ANOTHER EDIT: Coincidentally, when this question came up I was right in the middle of writing a blog post to flesh out some of these ideas (otherwise, I might not have answered so quickly). Anyway, I finished it last weekend and you can find it here. It is not written with SPSS in mind, but I'd bet if a person were clever they might be able to translate portions of it to SPSS syntax.
|
Calculating statistical power
This isn't an answer you are going to want to hear, I am afraid, but I am going to say it anyway: try to resist the temptation of online calculators (and save your money before purchasing proprietary
|
11,925
|
Introduction to measure theory
|
For a really short introduction (seven-page PDF), there's also this, intended to allow you to follow papers that use a bit of measure theory:
A Measure Theory Tutorial (Measure Theory for Dummies). Maya R. Gupta.
Dept of Electrical Engineering, University of Washington, 2006. (archive.org copy)
The author gives some refs at the end and says "one of the friendliest books is Resnick’s, which teaches measure theoretic graduate level probability with the assumption that you do not have a B.A. in mathematics."
S. I. Resnick, A probability path, Birkhäuser, 1999. 453 pages.
|
Introduction to measure theory
|
For a really short introduction (seven page pdf), there's also this, intended to allow you to follow papers that use a bit of measure theory :
A Measure Theory Tutorial (Measure Theory for Dummies). M
|
Introduction to measure theory
For a really short introduction (seven-page PDF), there's also this, intended to allow you to follow papers that use a bit of measure theory:
A Measure Theory Tutorial (Measure Theory for Dummies). Maya R. Gupta.
Dept of Electrical Engineering, University of Washington, 2006. (archive.org copy)
The author gives some refs at the end and says "one of the friendliest books is Resnick’s, which teaches measure theoretic graduate level probability with the assumption that you do not have a B.A. in mathematics."
S. I. Resnick, A probability path, Birkhäuser, 1999. 453 pages.
|
Introduction to measure theory
For a really short introduction (seven page pdf), there's also this, intended to allow you to follow papers that use a bit of measure theory :
A Measure Theory Tutorial (Measure Theory for Dummies). M
|
11,926
|
Introduction to measure theory
|
After some research, I ended up buying this when I thought I needed to know something about measure-theoretic probability:
Jeffrey Rosenthal. A First Look at Rigorous Probability Theory. World Scientific 2007. ISBN 9789812703712.
I haven't read much of it, however, as my personal experience is in accord with Stephen Senn's quip.
|
Introduction to measure theory
|
After some research, I ended up buying this when I thought I needed to know something about measure-theoretic probability:
Jeffrey Rosenthal. A First Look at Rigorous Probability Theory. World Scien
|
Introduction to measure theory
After some research, I ended up buying this when I thought I needed to know something about measure-theoretic probability:
Jeffrey Rosenthal. A First Look at Rigorous Probability Theory. World Scientific 2007. ISBN 9789812703712.
I haven't read much of it, however, as my personal experience is in accord with Stephen Senn's quip.
|
Introduction to measure theory
After some research, I ended up buying this when I thought I needed to know something about measure-theoretic probability:
Jeffrey Rosenthal. A First Look at Rigorous Probability Theory. World Scien
|
11,927
|
Introduction to measure theory
|
Personally, I've found Kolmogorov's original Foundations of the Theory of Probability to be fairly readable, at least compared to most measure theory texts. Although it obviously doesn't contain any later work, it does give you an idea of most of the important concepts (sets of measure zero, conditional expectation, etc.). It is also mercifully brief, at only 84 pages.
|
Introduction to measure theory
|
Personally, I've found Kolmogorov's original Foundations of the Theory of Probability to be fairly readable, at least compared to most measure theory texts. Although it obviously doesn't contain any l
|
Introduction to measure theory
Personally, I've found Kolmogorov's original Foundations of the Theory of Probability to be fairly readable, at least compared to most measure theory texts. Although it obviously doesn't contain any later work, it does give you an idea of most of the important concepts (sets of measure zero, conditional expectation, etc.). It is also mercifully brief, at only 84 pages.
|
Introduction to measure theory
Personally, I've found Kolmogorov's original Foundations of the Theory of Probability to be fairly readable, at least compared to most measure theory texts. Although it obviously doesn't contain any l
|
11,928
|
Introduction to measure theory
|
Outline of Lebesgue Theory: A Heuristic Introduction by Robert E. Wernikoff. For engineers this is easily the best introduction.
|
Introduction to measure theory
|
Outline of Lebesgue Theory: A Heuristic Introduction by Robert E. Wernikoff. For engineers this is easily the best introduction.
|
Introduction to measure theory
Outline of Lebesgue Theory: A Heuristic Introduction by Robert E. Wernikoff. For engineers this is easily the best introduction.
|
Introduction to measure theory
Outline of Lebesgue Theory: A Heuristic Introduction by Robert E. Wernikoff. For engineers this is easily the best introduction.
|
11,929
|
Introduction to measure theory
|
Jumping straight into non-parametric Bayesian analysis is quite a big first leap! Maybe get a bit of parametric Bayes under your belt first?
Three books which you may find useful from the Bayesian part of things are:
1) Probability Theory: The Logic of Science by E. T. Jaynes, Edited by G. L. Bretthorst (2003)
2) Bayesian Theory by Bernardo, J. M. and Smith, A. F. M. (1st ed 1994, 2nd ed 2007).
3) Bayesian Decision Theory J. O. Berger (1985)
A good place to see recent applications of Bayesian statistics is the FREE journal called Bayesian Analysis, with articles from 2006 to present.
|
Introduction to measure theory
|
Jumping straight into non-parametric Bayesian analysis is quite a big first leap! Maybe get a bit of parametric Bayes under your belt first?
Three books which you may find useful from the Bayesian pa
|
Introduction to measure theory
Jumping straight into non-parametric Bayesian analysis is quite a big first leap! Maybe get a bit of parametric Bayes under your belt first?
Three books which you may find useful from the Bayesian part of things are:
1) Probability Theory: The Logic of Science by E. T. Jaynes, Edited by G. L. Bretthorst (2003)
2) Bayesian Theory by Bernardo, J. M. and Smith, A. F. M. (1st ed 1994, 2nd ed 2007).
3) Bayesian Decision Theory J. O. Berger (1985)
A good place to see recent applications of Bayesian statistics is the FREE journal called Bayesian Analysis, with articles from 2006 to present.
|
Introduction to measure theory
Jumping straight into non-parametric Bayesian analysis is quite a big first leap! Maybe get a bit of parametric Bayes under your belt first?
Three books which you may find useful from the Bayesian pa
|
11,930
|
Algorithms to compute the running median?
|
Edit:
As @Hunaphu points out (and @whuber below in his answer), the original answer I gave to the OP (below) is wrong. It is indeed quicker to first sort the initial batch and then keep updating the median up or down (depending on whether a new data point falls to the left or to the right of the current median).
It's bad form to sort an array to compute a median. Medians (and other quantiles) are typically computed using the quickselect algorithm, with $O(n)$ complexity.
You may also want to look at my answer to a recent related question here.
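For reference, quickselect can be sketched in a few lines. This is a plain batch version, not the streaming setting of the question, and the recursive list-partition style is chosen for clarity over in-place speed:

```python
import random

def quickselect(values, k):
    """Return the k-th smallest element (0-based) in expected O(n) time."""
    pivot = random.choice(values)
    lt = [x for x in values if x < pivot]   # strictly smaller than the pivot
    eq = [x for x in values if x == pivot]
    gt = [x for x in values if x > pivot]
    if k < len(lt):
        return quickselect(lt, k)
    if k < len(lt) + len(eq):
        return pivot
    return quickselect(gt, k - len(lt) - len(eq))

def median(values):
    """Lower median via quickselect, without fully sorting the array."""
    values = list(values)
    return quickselect(values, (len(values) - 1) // 2)
```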
|
Algorithms to compute the running median?
|
#Edit:
As @Hunaphu's points out (and @whuber below in his answer) the original answer I gave to the OP (below) is wrong. It is indeed quicker to first sort the initial batch and then keep updating the
|
Algorithms to compute the running median?
Edit:
As @Hunaphu points out (and @whuber below in his answer), the original answer I gave to the OP (below) is wrong. It is indeed quicker to first sort the initial batch and then keep updating the median up or down (depending on whether a new data point falls to the left or to the right of the current median).
It's bad form to sort an array to compute a median. Medians (and other quantiles) are typically computed using the quickselect algorithm, with $O(n)$ complexity.
You may also want to look at my answer to a recent related question here.
|
Algorithms to compute the running median?
#Edit:
As @Hunaphu's points out (and @whuber below in his answer) the original answer I gave to the OP (below) is wrong. It is indeed quicker to first sort the initial batch and then keep updating the
|
11,931
|
Algorithms to compute the running median?
|
If you're willing to tolerate an approximation, there are other methods. For example, one approximation is a value whose rank is within some (user specified) distance from the true median. For example, the median has (normalized) rank 0.5, and if you specify an error term of 10%, you'd want an answer that has rank between 0.45 and 0.55.
If such an answer is appropriate, then there are many solutions that can work on sliding windows of data. The basic idea is to maintain a sample of the data of a certain size (roughly 1/error term) and compute the median on this sample. It can be shown that with high probability, regardless of the nature of the input, the resulting median satisfies the properties I mentioned above.
Thus, the main question is how to maintain a running sample of the data of a certain size, and there are many approaches for that, including the technique known as reservoir sampling. For example, this paper: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.7136
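The sampling step can be sketched with the classic Algorithm R; the sample size, the fixed seed, and the helper names here are illustrative, not from the cited paper:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of size k from a stream (Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, x in enumerate(stream):
        if i < k:
            reservoir.append(x)           # fill the reservoir first
        else:
            j = rng.randint(0, i)         # item i survives with probability k/(i+1)
            if j < k:
                reservoir[j] = x
    return reservoir

def approx_median(stream, k=101, seed=0):
    """The median of the sample approximates the median of the stream."""
    sample = sorted(reservoir_sample(stream, k, seed))
    return sample[len(sample) // 2]
```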
|
Algorithms to compute the running median?
|
If you're willing to tolerate an approximation, there are other methods. For example, one approximation is a value whose rank is within some (user specified) distance from the true median. For example
|
Algorithms to compute the running median?
If you're willing to tolerate an approximation, there are other methods. For example, one approximation is a value whose rank is within some (user specified) distance from the true median. For example, the median has (normalized) rank 0.5, and if you specify an error term of 10%, you'd want an answer that has rank between 0.45 and 0.55.
If such an answer is appropriate, then there are many solutions that can work on sliding windows of data. The basic idea is to maintain a sample of the data of a certain size (roughly 1/error term) and compute the median on this sample. It can be shown that with high probability, regardless of the nature of the input, the resulting median satisfies the properties I mentioned above.
Thus, the main question is how to maintain a running sample of the data of a certain size, and there are many approaches for that, including the technique known as reservoir sampling. For example, this paper: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.7136
|
Algorithms to compute the running median?
If you're willing to tolerate an approximation, there are other methods. For example, one approximation is a value whose rank is within some (user specified) distance from the true median. For example
|
11,932
|
Algorithms to compute the running median?
|
Here is an article describing one possible algorithm. Source code included and a quite serious application (gravitational wave detection based on laser interferometry), so you can expect it to be well tested.
|
Algorithms to compute the running median?
|
Here is an article describing one possible algorithm. Source code included and a quite serious application (gravitational wave detection based on laser interferometry), so you can expect it to be well
|
Algorithms to compute the running median?
Here is an article describing one possible algorithm. Source code included and a quite serious application (gravitational wave detection based on laser interferometry), so you can expect it to be well tested.
|
Algorithms to compute the running median?
Here is an article describing one possible algorithm. Source code included and a quite serious application (gravitational wave detection based on laser interferometry), so you can expect it to be well
|
11,933
|
Algorithms to compute the running median?
|
If you maintain a length-k window of data as a sorted doubly linked list then, by means of a binary search (to insert each new element as it gets shifted into the window) and a circular array of pointers (to immediately locate elements that need to be deleted), each shift of the window requires O(log(k)) effort for inserting one element, only O(1) effort for deleting the element shifted out of the window, and only O(1) effort to find the median (because every time one element is inserted or deleted into the list you can update a pointer to the median in O(1) time). The total effort for processing an array of length N therefore is O((n-k)log(k)) <= O(n log(k)). This is better than any of the other methods proposed so far and it is not an approximation, it is exact.
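The same idea can be sketched in Python with a sorted list and `bisect`. A flat list replaces the linked list described above, so deletion costs O(k) from element shifting rather than the O(1) the pointer scheme achieves; the search-and-insert logic is otherwise the same:

```python
from bisect import insort, bisect_left

def running_median(data, k):
    """Lower median of every length-k window, keeping the window sorted."""
    window = sorted(data[:k])
    out = [window[(k - 1) // 2]]
    for i in range(k, len(data)):
        window.pop(bisect_left(window, data[i - k]))  # drop the element leaving the window
        insort(window, data[i])                       # binary search, then insert
        out.append(window[(k - 1) // 2])
    return out
```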
|
Algorithms to compute the running median?
|
If you maintain a length-k window of data as a sorted doubly linked list then, by means of a binary search (to insert each new element as it gets shifted into the window) and a circular array of point
|
Algorithms to compute the running median?
If you maintain a length-k window of data as a sorted doubly linked list then, by means of a binary search (to insert each new element as it gets shifted into the window) and a circular array of pointers (to immediately locate elements that need to be deleted), each shift of the window requires O(log(k)) effort for inserting one element, only O(1) effort for deleting the element shifted out of the window, and only O(1) effort to find the median (because every time one element is inserted or deleted into the list you can update a pointer to the median in O(1) time). The total effort for processing an array of length N therefore is O((n-k)log(k)) <= O(n log(k)). This is better than any of the other methods proposed so far and it is not an approximation, it is exact.
|
Algorithms to compute the running median?
If you maintain a length-k window of data as a sorted doubly linked list then, by means of a binary search (to insert each new element as it gets shifted into the window) and a circular array of point
|
11,934
|
Algorithms to compute the running median?
|
Here is a solution with O(1) time for finding the current median and O(log n) for adding a new number:
http://www.dsalgo.com/RunningMedian.php
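The linked page uses the standard two-heap construction: a max-heap holding the lower half and a min-heap holding the upper half. A sketch of that scheme (for a growing stream, as on that page, not a sliding window):

```python
import heapq

class RunningMedian:
    """O(log n) insert, O(1) median, via a max-heap (low half) and min-heap (high half)."""
    def __init__(self):
        self.low = []   # max-heap, stored as negated values
        self.high = []  # min-heap

    def add(self, x):
        if self.low and x > -self.low[0]:
            heapq.heappush(self.high, x)
        else:
            heapq.heappush(self.low, -x)
        # rebalance so len(low) == len(high) or len(low) == len(high) + 1
        if len(self.low) > len(self.high) + 1:
            heapq.heappush(self.high, -heapq.heappop(self.low))
        elif len(self.high) > len(self.low):
            heapq.heappush(self.low, -heapq.heappop(self.high))

    def median(self):
        if len(self.low) > len(self.high):
            return -self.low[0]
        return (-self.low[0] + self.high[0]) / 2
```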
|
Algorithms to compute the running median?
|
Here is a solution with O(1) time for finding the current median and O(log n) for adding a new number:
http://www.dsalgo.com/RunningMedian.php
|
Algorithms to compute the running median?
Here is a solution with O(1) time for finding the current median and O(log n) for adding a new number:
http://www.dsalgo.com/RunningMedian.php
|
Algorithms to compute the running median?
Here is a solution with O(1) time for finding the current median and O(log n) for adding a new number:
http://www.dsalgo.com/RunningMedian.php
|
11,935
|
Algorithms to compute the running median?
|
As you mentioned, sorting would be O(n·log n) for a window of length n. Doing this for every window adds another factor l = vector length, making the total cost O(l·n·log n).
The simplest way to improve this is to keep an ordered list of the last n elements in memory while moving from one window to the next. Since removing/inserting one element from/into an ordered list are both O(n), this results in a cost of O(l·n).
Pseudocode:
l = length(input)
aidvector = sort(input(1:n))
output(n) = aidvector(n/2)
for i = n+1:l
    remove input(i-n) from aidvector
    insert input(i) into aidvector
    output(i) = aidvector(n/2)
|
Algorithms to compute the running median?
|
As you mentioned sorting would be O(n·log n) for a window of length n. Doing this moving adds another l=vectorlength making the total cost O(l·n·log n).
The simplest way to push this is by keeping an
|
Algorithms to compute the running median?
As you mentioned, sorting would be O(n·log n) for a window of length n. Doing this for every window adds another factor l = vector length, making the total cost O(l·n·log n).
The simplest way to improve this is to keep an ordered list of the last n elements in memory while moving from one window to the next. Since removing/inserting one element from/into an ordered list are both O(n), this results in a cost of O(l·n).
Pseudocode:
l = length(input)
aidvector = sort(input(1:n))
output(n) = aidvector(n/2)
for i = n+1:l
    remove input(i-n) from aidvector
    insert input(i) into aidvector
    output(i) = aidvector(n/2)
|
Algorithms to compute the running median?
As you mentioned sorting would be O(n·log n) for a window of length n. Doing this moving adds another l=vectorlength making the total cost O(l·n·log n).
The simplest way to push this is by keeping an
|
11,936
|
Algorithms to compute the running median?
|
If you can live with an estimate instead of the true median, the Remedian Algorithm (PDF) is one-pass with low storage requirements and well defined accuracy.
The remedian with base b proceeds by computing medians of groups of b observations, and then medians of these medians, until only a single estimate remains. This method merely needs k arrays of size b (where n = b^k)...
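A batch sketch of that recursion follows. The true remedian streams through the data keeping only k arrays of size b; this version keeps everything in memory but shows the same medians-of-medians idea:

```python
from statistics import median_low

def remedian(data, b):
    """Medians of groups of b, then medians of those medians, until one value remains."""
    level = list(data)
    while len(level) > 1:
        # group the current level into chunks of b and take each chunk's median
        level = [median_low(level[i:i + b]) for i in range(0, len(level), b)]
    return level[0]
```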
|
Algorithms to compute the running median?
|
If you can live with an estimate instead of the true median, the Remedian Algorithm (PDF) is one-pass with low storage requirements and well defined accuracy.
The remedian with base b proceeds by com
|
Algorithms to compute the running median?
If you can live with an estimate instead of the true median, the Remedian Algorithm (PDF) is one-pass with low storage requirements and well defined accuracy.
The remedian with base b proceeds by computing medians of groups of b observations, and then medians of these medians, until only a single estimate remains. This method merely needs k arrays of size b (where n = b^k)...
|
Algorithms to compute the running median?
If you can live with an estimate instead of the true median, the Remedian Algorithm (PDF) is one-pass with low storage requirements and well defined accuracy.
The remedian with base b proceeds by com
|
11,937
|
Algorithms to compute the running median?
|
I used this RunningStats C++ library in an embedded application. It is the simplest running-stats library I have found so far.
From the link:
The code is an extension of the method of Knuth and Welford for computing standard deviation in one pass through the data. It computes skewness and kurtosis as well with a similar interface. In addition to only requiring one pass through the data, the algorithm is numerically stable and accurate.
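The one-pass update the quote refers to, restricted to mean and variance, can be written as follows (skewness and kurtosis follow the same pattern with higher-order moments):

```python
class Welford:
    """One-pass, numerically stable running mean and variance (Welford's method)."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0    # running sum of squared deviations from the mean

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)   # note: uses the updated mean

    def variance(self):
        """Sample variance; 0.0 until at least two values have been pushed."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```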
|
Algorithms to compute the running median?
|
I used this RunningStats C++ Library in an embedded application. It is the most simple running stats library I have found yet.
From the link:
The code is an extension of the method of Knuth and Welfo
|
Algorithms to compute the running median?
I used this RunningStats C++ library in an embedded application. It is the simplest running-stats library I have found so far.
From the link:
The code is an extension of the method of Knuth and Welford for computing standard deviation in one pass through the data. It computes skewness and kurtosis as well with a similar interface. In addition to only requiring one pass through the data, the algorithm is numerically stable and accurate.
|
Algorithms to compute the running median?
I used this RunningStats C++ Library in an embedded application. It is the most simple running stats library I have found yet.
From the link:
The code is an extension of the method of Knuth and Welfo
|
11,938
|
Zero correlation of all functions of random variables implying independence
|
Using indicator functions of measurable sets, $$f(x)=\mathbb I_A(x),\quad g(x)=\mathbb I_B(x),$$ leads to $$\text{cov}(f(X),g(Y))=\mathbb P(X\in A,Y\in B)-\mathbb P(X\in A)\mathbb P(Y\in B),$$ so zero covariance for all such pairs implies independence. As shown in the following snapshot of A. Dembo's probability course, proving the result for indicator functions is enough.
This is due to this monotone class theorem:
|
Zero correlation of all functions of random variables implying independence
|
Using indicator functions of measurable sets like$$f(x)=\mathbb I_A(x)\quad g(x)=\mathbb I_B(x)$$leads to$$\text{cov}(f(X),g(Y))=\mathbb P(X\in A,Y\in B)-\mathbb P(X\in A)\mathbb P(Y\in B)$$therefore
|
Zero correlation of all functions of random variables implying independence
Using indicator functions of measurable sets, $$f(x)=\mathbb I_A(x),\quad g(x)=\mathbb I_B(x),$$ leads to $$\text{cov}(f(X),g(Y))=\mathbb P(X\in A,Y\in B)-\mathbb P(X\in A)\mathbb P(Y\in B),$$ so zero covariance for all such pairs implies independence. As shown in the following snapshot of A. Dembo's probability course, proving the result for indicator functions is enough.
This is due to this monotone class theorem:
|
Zero correlation of all functions of random variables implying independence
Using indicator functions of measurable sets like$$f(x)=\mathbb I_A(x)\quad g(x)=\mathbb I_B(x)$$leads to$$\text{cov}(f(X),g(Y))=\mathbb P(X\in A,Y\in B)-\mathbb P(X\in A)\mathbb P(Y\in B)$$therefore
|
11,939
|
Zero correlation of all functions of random variables implying independence
|
@Xi'an gives probably the simplest set of functions $f,\,g$ that will work. Here's a more general argument:
It is sufficient to show that the characteristic function $E[\exp(itX+isY)]$ factors into $E[\exp(itX)]E[\exp(isY)]$, because characteristic functions determine distributions.
Therefore, it is sufficient to show zero correlation
- when $f,\,g$ are of the form $f_t(x)=\exp(itx)$ and $f_s(y)=\exp(isy)$
- so $\sin(tx)$ and $\cos(sy)$ are also sufficient
- by the Weierstrass approximation theorem, the sines and cosines can be approximated by polynomials, which also suffice
- more generally, by the Stone-Weierstrass theorem, any other set of continuous functions closed under addition and multiplication, containing the constants, and separating points will also do ['separates points' means for any $x_1$ and $x_2$ you can find $f$ so that $f(x_1)\neq f(x_2)$, and similarly for $y$ and $g$]
- the construction of integrals from simple functions shows you can also use indicator functions, as @Xi'an does
- and, like, wavelets or whatever
It might occasionally be useful to note that you don't have to use the same set of functions for $f$ as for $g$. For example, you could use indicator functions for $f$ and polynomials for $g$ if that somehow made your life easier
|
11,940
|
Zero correlation of all functions of random variables implying independence
|
Any continuous random variable can be mapped to a uniform [0,1] random variable using its cumulative distribution function. If the variables are independent, then the joint distribution on the 1x1 square is the product of the two uniform margins, and so is uniform too. If the variables are dependent, the joint distribution is not equal to the product, and therefore not uniform: the 1x1 square has bumps and dips in it. We can then apply a permutation of intervals/blocks along each axis to rearrange those bumps along the diagonal and the dips far away from it - like permuting the rows and columns of a matrix with the Cuthill-McKee algorithm. This makes the correlation non-zero. Thus, zero correlation for all functions of continuous random variables implies independence.
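The first step of this argument - mapping a continuous variable to uniform via its CDF - is the probability integral transform, and it is easy to check numerically. A minimal sketch (the distribution, seed, and tolerances are my own choices, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=100_000)

# apply the CDF of Exp(scale=2), F(x) = 1 - exp(-x/2), to its own samples
u = 1.0 - np.exp(-x / 2.0)

# u should now look Uniform[0, 1]: mean ~ 1/2, variance ~ 1/12
print(u.mean(), u.var())
```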
|
11,941
|
Zero correlation of all functions of random variables implying independence
|
If $\text{Corr}\left(f(X),g(Y)\right)=0$ for all possible functions $f(\cdot)$ and $g(\cdot)$, then $X$ and $Y$ are independent.
In the reference I have, the opposite direction is affirmed: if $X$ and $Y$ are independent we have
$E[f(X)]E[g(Y)]-E[f(X)g(Y)]=0$ (and then $\text{corr}[f(X),g(Y)]=0$)
for any $f()$ and $g()$.
In words, we have no chance of finding dependencies: if any exist, they must be revealed by some functional relation. See: Econometrics – Verbeek; 5th edition, p. 463. But some conditions on the distributions/moments/functions seem to me implicit there.
The question asks about the opposite direction, which is also permitted: from $\text{Corr}\left(f(X),g(Y)\right)=0$ for all $f$ and $g$, independence is implied.
However, it can be useful to note that the condition $\text{Corr}\left(f(X),g(Y)\right)=0$
implies some restrictions on the distributions/functions/moments. In some cases the condition cannot even be stated: for example, if $X$ and $Y$ are independent Cauchy r.v.s, $\text{Corr}\left(f(X),g(Y)\right)$ does not exist for some $f()$ and $g()$ (for instance the identities), because the required moments do not exist. So the condition in question and independence are not completely equivalent.
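The Cauchy caveat can be illustrated numerically (my own choice of seed and sample size, purely a sketch): the raw variables have no finite moments, so a sample "correlation" of the identities is meaningless, but a bounded transform such as $\arctan$ always has finite moments and gives a well-behaved, near-zero correlation for independent variables.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_cauchy(200_000)
y = rng.standard_cauchy(200_000)

# E[X] does not exist for a Cauchy variable, so corr(X, Y) itself is not
# well defined; bounded functions f, g sidestep the moment problem.
r = np.corrcoef(np.arctan(x), np.arctan(y))[0, 1]
```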
|
11,942
|
Zero correlation of all functions of random variables implying independence
|
Two variables being dependent means that there is some value(s) of one variable that makes some value(s) of the other variable more likely (the general statement is that it changes the probability, but WLOG we can assume that it increases the probability). If that is the case, then clearly there is positive correlation between the first variable taking the value(s) in question and the second variable taking the value(s) in question. This correlation can be reflected in correlation between functions by taking functions whose outputs differ depending on whether the variables take on the value(s) in question.
As a practical matter, this isn't generally a good method of proving independence. Given any countable set of functions, it's possible to construct two dependent variables for which all those functions are uncorrelated. So you have to prove that an uncountable set of functions are uncorrelated, at which point it's probably easier to just prove independence directly.
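A quick simulation of the first paragraph's logic (the construction, probabilities, and seed are my own, purely illustrative): when one value of $X$ makes one value of $Y$ more likely, the indicator functions of those values are positively correlated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x = rng.integers(0, 2, n)
noise = rng.integers(0, 2, n)
y = np.where(rng.random(n) < 0.8, x, noise)  # y copies x 80% of the time

p_y1 = y.mean()                  # marginal P(Y = 1)
p_y1_given_x1 = y[x == 1].mean() # conditional P(Y = 1 | X = 1), clearly larger

f = (x == 1).astype(float)       # indicator of the "value in question" for x
g = (y == 1).astype(float)       # indicator of the "value in question" for y
r = np.corrcoef(f, g)[0, 1]      # positive, reflecting the dependence
```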
|
11,943
|
Zero correlation of all functions of random variables implying independence
|
Correlation catches only the linear dependence between two variables.
$A$ and $B$ are dependent but uncorrelated if, for example, $B$ is symmetric about zero and $A = B^2$.
Independence here means stochastic independence: the occurrence of one event does not affect the probability of occurrence of the other. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other (copied from the wiki).
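The $A=B^2$ example is easy to verify by simulation (the seed and sample size are my own choices): with $B$ symmetric about zero, the Pearson correlation of $A$ and $B$ is essentially zero even though $A$ is completely determined by $B$.

```python
import numpy as np

rng = np.random.default_rng(4)
b = rng.normal(size=100_000)  # symmetric about zero
a = b**2                      # a is a deterministic function of b

r = np.corrcoef(a, b)[0, 1]   # yet the linear correlation is ~0
```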
|
11,944
|
Log-linear regression vs. logistic regression
|
The name is a bit of a misnomer. Log-linear models were traditionally used for the analysis of data in a contingency table format. While "count data" need not necessarily follow a Poisson distribution, the log-linear model is actually just a Poisson regression model. Hence the "log" name (Poisson regression models contain a "log" link function).
A "log transformed outcome variable" in a linear regression model is not a log-linear model, (neither is an exponentiated outcome variable, as "log-linear" would suggest). Both log-linear models and logistic regressions are examples of generalized linear models, in which the relationship between a linear predictor (such as log-odds or log-rates) is linear in the model variables. They are not "simple linear regression models" (or models using the usual $E[Y|X] = a + bX$ format).
Despite all that, it's possible to obtain equivalent inference on associations between categorical variables using logistic regression and poisson regression. It's just that in the poisson model, the outcome variables are treated like covariates. Interestingly, you can set up some models that borrow information across groups in a way much similar to a proportional odds model, but this is not well understood and rarely used.
Examples of obtaining equivalent inference in logistic and poisson regression models using R illustrated below:
y <- c(0, 1, 0, 1)
x <- c(0, 0, 1, 1)
w <- c(10, 20, 30, 40)
## odds ratio for relationship between x and y from logistic regression
glm(y ~ x, family=binomial, weights=w)
## the odds ratio is the same interaction parameter between contingency table frequencies
glm(w ~ y * x, family=poisson)
Interestingly, lack of association between $y$ and $x$ means the odds ratio is 1 in the logistic regression model and, likewise, the interaction term is 0 in the log-linear model. This gives you an idea of how we measure conditional independence in contingency table data.
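The equivalence the R snippet demonstrates can also be checked by hand: for a 2x2 table, the interaction parameter of the saturated log-linear model has a closed form that is exactly the log odds ratio. A Python sketch using the same counts as above (the closed-form identity is standard; the code itself is mine):

```python
import numpy as np

# counts from the example above, indexed by (y, x)
n = {(0, 0): 10, (1, 0): 20, (0, 1): 30, (1, 1): 40}

# odds ratio for the 2x2 table (what the logistic regression estimates)
odds_ratio = (n[(1, 1)] * n[(0, 0)]) / (n[(1, 0)] * n[(0, 1)])

# saturated log-linear model: log n_yx = b0 + b1*y + b2*x + b3*y*x;
# solving the four cell equations gives the interaction in closed form
b3 = (np.log(n[(1, 1)]) - np.log(n[(0, 1)])
      - np.log(n[(1, 0)]) + np.log(n[(0, 0)]))
```

Here `b3` equals `log(odds_ratio)` exactly, mirroring the agreement between the two `glm()` fits.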
|
11,945
|
Log-linear regression vs. logistic regression
|
I don't think I would call either of them a "simple linear regression model". Although it is possible to use the log or the logit transformations as the link function for a number of different models, these are typically understood to refer to specific models. For example, "logistic regression" is understood to be a generalized linear model (GLiM) for situations where the response variable is distributed as a binomial. In addition, "log-linear regression" is usually understood to be a Poisson GLiM applied to multi-way contingency tables. In other words, beyond the fact that they are both regression models / GLiMs, I don't see them as necessarily being very similar (there are some connections between them, as @AdamO points out, but the typical usages are fairly distinct). The biggest difference would be that logistic regression assumes the response is distributed as a binomial and log-linear regression assumes the response is distributed as Poisson. In fact, log-linear regression is rather different from most regression models in that the response variable isn't really one of your variables at all (in the usual sense), but rather the set of frequency counts associated with the combinations of your variables in the multi-way contingency table.
|
11,946
|
Log-linear regression vs. logistic regression
|
To clarify, a "binary" logistic regression has a dependent variable with two outcomes. My understanding is that there is also the option of using a "multinomial" logistic regression if your dependent, outcome variable has more than 2 categories. See here.
|
11,947
|
What exactly is a hypothesis space in machine learning?
|
Let's say you have an unknown target function $f:X \rightarrow Y$ that you are trying to capture by learning. In order to capture the target function you have to come up with some hypotheses, or candidate models, denoted $h_1,\dots,h_n$ where each $h_i \in H$. Here $H$, the set of all candidate models, is called the hypothesis class, hypothesis space, or hypothesis set.
For more information, browse Abu-Mostafa's presentation slides: https://work.caltech.edu/textbook.html
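For a small finite case the hypothesis space can be enumerated directly. A toy sketch (my own illustration, not from the slides): with two binary inputs there are $2^2=4$ possible inputs, and therefore $2^{2^2}=16$ distinct boolean functions, i.e. 16 candidate hypotheses $h \in H$.

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))                # the 2^2 = 4 possible inputs
hypotheses = list(product([0, 1], repeat=len(inputs)))  # one output bit per input

print(len(inputs), len(hypotheses))  # → 4 16
```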
|
11,948
|
What exactly is a hypothesis space in machine learning?
|
Suppose an example with four binary features and one binary output variable. Below is a set of observations:
x1 x2 x3 x4 | y
---------------
0 0 0 1 | 0
0 1 0 1 | 0
1 1 0 0 | 1
0 0 1 0 | 1
This set of observations can be used by a machine learning (ML) algorithm to learn a function f that is able to predict a value y for any input from the input space.
We are searching for the ground truth f(x) = y that explains the relation between x and y for all possible inputs in the correct way.
The function f has to be chosen from the hypothesis space.
To get a better idea: in the example above, the input space has size $2^4=16$ - the number of possible inputs. The hypothesis space has size $2^{2^4}=65536$, because for each of the $2^4$ possible inputs two outcomes (0 and 1) are possible.
The ML algorithm helps us to find one function, sometimes also referred as hypothesis, from the relatively large hypothesis space.
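The $2^{2^4}=65536$ figure can be checked by brute force, and it is instructive to also count how many hypotheses remain consistent with the four observations above: each observation pins down the output on one of the 16 inputs, leaving $2^{12}=4096$ candidates. The enumeration code below is my own sketch of this idea:

```python
from itertools import product

features = list(product([0, 1], repeat=4))  # the 2^4 = 16 possible inputs
data = {(0, 0, 0, 1): 0, (0, 1, 0, 1): 0,   # the observations from the table
        (1, 1, 0, 0): 1, (0, 0, 1, 0): 1}

total = consistent = 0
for outputs in product([0, 1], repeat=len(features)):  # one hypothesis per tuple
    h = dict(zip(features, outputs))
    total += 1
    if all(h[x] == y for x, y in data.items()):
        consistent += 1

print(total, consistent)  # → 65536 4096
```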
References
A Few Useful Things to Know About ML
|
11,949
|
What exactly is a hypothesis space in machine learning?
|
The hypothesis space is very relevant to the topic of the so-called bias-variance tradeoff in maximum likelihood: if the number of parameters in the model (hypothesis function) is too small for the model to fit the data (indicating underfitting and that the hypothesis space is too limited), the bias is high; if the model you choose contains more parameters than needed to fit the data, the variance is high (indicating overfitting and that the hypothesis space is too expressive).
As stated in So S's answer, if the parameters are discrete we can easily and concretely calculate how many possibilities are in the hypothesis space (i.e., how large it is), but normally in real-life circumstances the parameters are continuous, so the hypothesis space is generally uncountable.
Here is an example I borrowed and modified from the related part of the classical machine learning textbook Pattern Recognition and Machine Learning to fit this question:
We are selecting a hypothesis function for an unknown function hiding in the training data given by a third person, named CoolGuy, living on an extragalactic planet. Let's say CoolGuy knows what the function is, because he provided the data cases and generated them with that function. Let's call it the ground truth function (we only have the limited data, while CoolGuy has both unlimited data and the function generating it) and denote it by $y(x, w)$.
The green curve is $y(x,w)$, and the little blue circles are the cases we have (they are not exactly the true data cases transmitted by CoolGuy, because they would be contaminated by some transmission noise, for example by macula or other things).
We thought the hidden function would be very simple, so we make an attempt at a linear model (a hypothesis with a very limited space): $g_1(x, w)=w_0 + w_1 x$, with only two parameters $w_0$ and $w_1$. We train the model using our data and obtain this:
We can see that no matter how much data we use to fit the hypothesis, it just doesn't work, because it is not expressive enough.
So we try a much more expressive hypothesis: $g_9(x,w)=\sum_{j=0}^9 w_j x^j$, with ten adaptive parameters $w_0, w_1,\cdots, w_9$. We also train that model and then get:
We can see that it is just too expressive and fits all the data cases. A much larger hypothesis space (since $g_1$ can be expressed by $g_9$ by setting $w_2, w_3, \cdots, w_9$ all to 0) is more powerful than a simple hypothesis, but the generalization is also bad. That is, if we receive more data from CoolGuy and do inference, the trained model will most likely fail on those unseen cases.
Then how large a hypothesis space is large enough for the training dataset? We can find an answer in the textbook mentioned above:
One rough heuristic that is sometimes advocated is that the number of
data points should be no less than some multiple (say 5 or 10) of the
number of adaptive parameters in the model.
And you'll see from the textbook that if we use 4 parameters, $g_3(x,w)=w_0+w_1 x + w_2 x^2 + w_3 x^3$, the trained function is expressive enough for the underlying function $y=\sin(2\pi x)$. It's kind of a black art to find the number 3 (the appropriate hypothesis space) in this case.
Then we can roughly say that the hypothesis space measures how expressive your model is for fitting the training data. A hypothesis that is expressive enough for the training data is a good hypothesis with an expressive hypothesis space. To test whether a hypothesis is good or bad, we do cross-validation to see whether it performs well on the validation dataset. If it is neither underfitting (too limited) nor overfitting (too expressive), the space is enough (according to Occam's razor a simpler one is preferable, but I digress).
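The PRML example can be sketched numerically (the seed, noise level, and sample size below are my own choices; only the shape of the result matters): training error falls as the hypothesis space grows, and with ten parameters the degree-9 polynomial can pass through all ten training points.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=10)  # noisy samples of the truth

def train_rmse(degree):
    coef = np.polyfit(x, y, degree)  # least-squares polynomial fit
    return np.sqrt(np.mean((np.polyval(coef, x) - y) ** 2))

errs = {d: train_rmse(d) for d in (1, 3, 9)}
# degree 1 underfits, degree 3 is about right, degree 9 interpolates the data
```

Low training error alone does not distinguish degree 3 from degree 9; that is exactly what the cross-validation step above is for.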
|
11,950
|
Why does lrtest() not match anova(test="LRT")
|
The test statistic is derived differently. anova.lmlist uses the scaled difference of the residual sum of squares:
anova(base, full, test="LRT")
# Res.Df RSS Df Sum of Sq Pr(>Chi)
#1 995 330.29
#2 994 330.20 1 0.08786 0.6071
vals <- (sum(residuals(base)^2) - sum(residuals(full)^2))/sum(residuals(full)^2) * full$df.residual
df.diff <- base$df.residual - full$df.residual  # difference in residual df, here 1
pchisq(vals, df.diff, lower.tail = FALSE)
#[1] 0.6070549
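As a quick numerical check outside R (in Python, plugging in the rounded numbers printed in the table above), the scaled RSS difference reproduces the reported p-value; for one degree of freedom the chi-square upper tail can be written with `erfc`:

```python
import math

# numbers from the anova() table above: RSS of the full model,
# the RSS difference, and full$df.residual
rss_full, rss_diff, df_resid = 330.20, 0.08786, 994

stat = rss_diff / rss_full * df_resid            # the scaled RSS difference
# chi-square(df = 1) upper tail: P(X > v) = erfc(sqrt(v / 2))
p = math.erfc(math.sqrt(stat / 2))
print(round(p, 4))   # close to the 0.6071 reported by anova()
```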
|
Why does lrtest() not match anova(test="LRT")
|
The test statistic is derived differently. anova.lmlist uses the scaled difference of the residual sum of squares:
anova(base, full, test="LRT")
# Res.Df RSS Df Sum of Sq Pr(>Chi)
#1 995 330.2
|
Why does lrtest() not match anova(test="LRT")
The test statistic is derived differently. anova.lmlist uses the scaled difference of the residual sum of squares:
anova(base, full, test="LRT")
# Res.Df RSS Df Sum of Sq Pr(>Chi)
#1 995 330.29
#2 994 330.20 1 0.08786 0.6071
vals <- (sum(residuals(base)^2) - sum(residuals(full)^2))/sum(residuals(full)^2) * full$df.residual
df.diff <- base$df.residual - full$df.residual  # difference in residual df, here 1
pchisq(vals, df.diff, lower.tail = FALSE)
#[1] 0.6070549
|
Why does lrtest() not match anova(test="LRT")
The test statistic is derived differently. anova.lmlist uses the scaled difference of the residual sum of squares:
anova(base, full, test="LRT")
# Res.Df RSS Df Sum of Sq Pr(>Chi)
#1 995 330.2
|
11,951
|
Why does lrtest() not match anova(test="LRT")
|
As mentioned in the previous answer, the difference comes down to a difference in scaling, i.e., different estimators for the standard deviation of the errors. Sources for the difference are (1) scaling by $n-k$ (the unbiased OLS estimator) vs. scaling by $n$ (the biased ML estimator), and (2) using the estimator under the null hypothesis or alternative.
The likelihood ratio test implemented in lrtest() uses the ML estimator for each model separately while anova(..., test = "LRT") uses the OLS estimator under the alternative.
sd_ols <- function(object) sqrt(sum(residuals(object)^2)/df.residual(object))
sd_mle <- function(object) sqrt(mean(residuals(object)^2))
Then the statistic that lrtest() computes is
ll <- function(object, sd) sum(dnorm(model.response(model.frame(object)),
mean = fitted(object), sd = sd, log = TRUE))
-2 * (ll(base, sd_mle(base)) - ll(full, sd_mle(full)))
## [1] 0.266047
anova(..., test = "LRT") on the other hand uses
-2 * (ll(base, sd_ols(full)) - ll(full, sd_ols(full)))
## [1] 0.2644859
Under the null hypothesis both are asymptotically equivalent, of course, but in finite samples there is a small difference.
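The difference can be reproduced outside R as well. Here is a pure-Python sketch on hypothetical simulated data (a simple regression standing in for the base/full models above, not the original models), computing both statistics from the same two fits:

```python
import math, random

random.seed(1)
n = 50
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.1 * xi + random.gauss(0, 1) for xi in x]  # weak true slope

mean_y = sum(y) / n
fit_base = [mean_y] * n                          # base: intercept-only model
mx = sum(x) / n
b = sum((xi - mx) * (yi - mean_y) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
a = mean_y - b * mx
fit_full = [a + b * xi for xi in x]              # full: y ~ x by OLS

def rss(fit):
    return sum((yi - fi) ** 2 for yi, fi in zip(y, fit))

def ll(fit, sd):
    """Gaussian log-likelihood of y around the fitted values, given sd."""
    return sum(-0.5 * math.log(2 * math.pi * sd * sd)
               - (yi - fi) ** 2 / (2 * sd * sd)
               for yi, fi in zip(y, fit))

sd_mle_base = math.sqrt(rss(fit_base) / n)       # ML estimator, per model
sd_mle_full = math.sqrt(rss(fit_full) / n)
sd_ols_full = math.sqrt(rss(fit_full) / (n - 2)) # OLS estimator, full model

stat_lrtest = -2 * (ll(fit_base, sd_mle_base) - ll(fit_full, sd_mle_full))
stat_anova  = -2 * (ll(fit_base, sd_ols_full) - ll(fit_full, sd_ols_full))
```

Both statistics are nonnegative and close to each other, but not identical, which is the small finite-sample difference described above.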
|
Why does lrtest() not match anova(test="LRT")
|
As mentioned in the previous answer, the difference comes down to a difference in scaling, i.e., different estimators for the standard deviation of the errors. Sources for the difference are (1) scali
|
Why does lrtest() not match anova(test="LRT")
As mentioned in the previous answer, the difference comes down to a difference in scaling, i.e., different estimators for the standard deviation of the errors. Sources for the difference are (1) scaling by $n-k$ (the unbiased OLS estimator) vs. scaling by $n$ (the biased ML estimator), and (2) using the estimator under the null hypothesis or alternative.
The likelihood ratio test implemented in lrtest() uses the ML estimator for each model separately while anova(..., test = "LRT") uses the OLS estimator under the alternative.
sd_ols <- function(object) sqrt(sum(residuals(object)^2)/df.residual(object))
sd_mle <- function(object) sqrt(mean(residuals(object)^2))
Then the statistic that lrtest() computes is
ll <- function(object, sd) sum(dnorm(model.response(model.frame(object)),
mean = fitted(object), sd = sd, log = TRUE))
-2 * (ll(base, sd_mle(base)) - ll(full, sd_mle(full)))
## [1] 0.266047
anova(..., test = "LRT") on the other hand uses
-2 * (ll(base, sd_ols(full)) - ll(full, sd_ols(full)))
## [1] 0.2644859
Under the null hypothesis both are asymptotically equivalent, of course, but in finite samples there is a small difference.
|
Why does lrtest() not match anova(test="LRT")
As mentioned in the previous answer, the difference comes down to a difference in scaling, i.e., different estimators for the standard deviation of the errors. Sources for the difference are (1) scali
|
11,952
|
Test model coefficient (regression slope) against some value
|
Here's a broader solution that will work with any package, or even if you only have the regression output (such as from a paper).
Take the coefficient and its standard error.
Compute $t=\frac{\hat{\beta}-\beta_{H_0}}{\text{s.e.}(\hat{\beta})}$. The d.f. for the $t$ are the same as they would be for a test with $H_0: \beta=0$.
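The computation is trivial in any language; a minimal Python sketch with hypothetical numbers (estimate 0.5, standard error 0.1, tested against $\beta_{H_0}=0.3$):

```python
def t_stat(beta_hat, se, beta0):
    """t statistic for H0: beta = beta0; compare against t(df.residual),
    exactly as one would for the usual H0: beta = 0."""
    return (beta_hat - beta0) / se

t = t_stat(0.5, 0.1, 0.3)   # -> 2.0
```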
|
Test model coefficient (regression slope) against some value
|
Here's a broader solution that will work with any package, or even if you only have the regression output (such as from a paper).
Take the coefficient and its standard error.
Compute $t=\frac{\hat{\be
|
Test model coefficient (regression slope) against some value
Here's a broader solution that will work with any package, or even if you only have the regression output (such as from a paper).
Take the coefficient and its standard error.
Compute $t=\frac{\hat{\beta}-\beta_{H_0}}{\text{s.e.}(\hat{\beta})}$. The d.f. for the $t$ are the same as they would be for a test with $H_0: \beta=0$.
|
Test model coefficient (regression slope) against some value
Here's a broader solution that will work with any package, or even if you only have the regression output (such as from a paper).
Take the coefficient and its standard error.
Compute $t=\frac{\hat{\be
|
11,953
|
Test model coefficient (regression slope) against some value
|
You can use either a simple t-test as proposed by Glen_b, or a more general Wald test.
The Wald test allows testing multiple hypotheses on multiple parameters. It is formulated as $R\beta=q$, where R selects (a combination of) coefficients, and q indicates the value to be tested against, $\beta$ being the standard regression coefficients.
In your example, where you have just one hypothesis on one parameter, R is a row vector, with a value of one for the parameter in question and zero elsewhere, and q is a scalar with the restriction to test.
In R, you can run a Wald test with the function linearHypothesis() from package car. Let us say you want to check if the second coefficient (indicated by argument hypothesis.matrix) is different from 0.1 (argument rhs):
reg <- lm(freeny)
coef(reg)
# wald test for lag.quarterly.revenue =0.1
>library(car)
>linearHypothesis(reg, hypothesis.matrix = c(0, 1, rep(0,3)), rhs=0.1)
#skip some result, look at last value on last row, of Pr(>F)
Res.Df RSS Df Sum of Sq F Pr(>F)
1 35 0.0073811
2 34 0.0073750 1 6.0936e-06 0.0281 0.8679
For the t-test, this function implements the t-test shown by Glen_b:
ttest <- function(reg, coefnum, val){
co <- coef(summary(reg))
tstat <- (co[coefnum,1]-val)/co[coefnum,2]
2 * pt(abs(tstat), reg$df.residual, lower.tail = FALSE)
}
> ttest(reg, 2,0.1)
[1] 0.8678848
Let us make sure we got the right procedure by comparing the Wald, our t-test, and R default t-test, for the standard hypothesis that the second coefficient is zero:
> linearHypothesis(reg, hypothesis.matrix = c(0, 1, rep(0,3)), rhs=0)[["Pr(>F)"]][2]
[1] 0.3904361
> ttest(reg, 2,0)
[1] 0.3904361
## The 'right' answer from R:
> coef(summary(reg))[2,4]
[1] 0.3904361
You should get the same result with the three procedures.
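The agreement is no coincidence: with a single restriction, $R$ is a selector vector and the Wald statistic collapses to the squared t statistic. A small pure-Python sketch with hypothetical numbers (coefficient 0.05, standard error 0.3, tested against q = 0.1):

```python
def t_stat(beta_hat, se, q):
    return (beta_hat - q) / se

def wald_stat(beta_hat, se, q):
    """Wald statistic for the single restriction beta = q, i.e.
    (R b - q)' (R V R')^(-1) (R b - q) with R selecting one coefficient."""
    return (beta_hat - q) ** 2 / se ** 2

b, se, q = 0.05, 0.3, 0.1
w = wald_stat(b, se, q)          # equals t_stat(b, se, q) ** 2
```

The Wald statistic is then compared to an F(1, df) distribution, and $F(1,\text{df}) = t(\text{df})^2$, which is why the three procedures agree.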
|
Test model coefficient (regression slope) against some value
|
You can use either a simple t-test as proposed by Glen_b, or a more general Wald test.
The Wald test allows to test multiple hypotheses on multiple parameters. It is formulated as: $R\beta=q$ where R
|
Test model coefficient (regression slope) against some value
You can use either a simple t-test as proposed by Glen_b, or a more general Wald test.
The Wald test allows testing multiple hypotheses on multiple parameters. It is formulated as $R\beta=q$, where R selects (a combination of) coefficients, and q indicates the value to be tested against, $\beta$ being the standard regression coefficients.
In your example, where you have just one hypothesis on one parameter, R is a row vector, with a value of one for the parameter in question and zero elsewhere, and q is a scalar with the restriction to test.
In R, you can run a Wald test with the function linearHypothesis() from package car. Let us say you want to check if the second coefficient (indicated by argument hypothesis.matrix) is different from 0.1 (argument rhs):
reg <- lm(freeny)
coef(reg)
# wald test for lag.quarterly.revenue =0.1
>library(car)
>linearHypothesis(reg, hypothesis.matrix = c(0, 1, rep(0,3)), rhs=0.1)
#skip some result, look at last value on last row, of Pr(>F)
Res.Df RSS Df Sum of Sq F Pr(>F)
1 35 0.0073811
2 34 0.0073750 1 6.0936e-06 0.0281 0.8679
For the t-test, this function implements the t-test shown by Glen_b:
ttest <- function(reg, coefnum, val){
co <- coef(summary(reg))
tstat <- (co[coefnum,1]-val)/co[coefnum,2]
2 * pt(abs(tstat), reg$df.residual, lower.tail = FALSE)
}
> ttest(reg, 2,0.1)
[1] 0.8678848
Let us make sure we got the right procedure by comparing the Wald, our t-test, and R default t-test, for the standard hypothesis that the second coefficient is zero:
> linearHypothesis(reg, hypothesis.matrix = c(0, 1, rep(0,3)), rhs=0)[["Pr(>F)"]][2]
[1] 0.3904361
> ttest(reg, 2,0)
[1] 0.3904361
## The 'right' answer from R:
> coef(summary(reg))[2,4]
[1] 0.3904361
You should get the same result with the three procedures.
|
Test model coefficient (regression slope) against some value
You can use either a simple t-test as proposed by Glen_b, or a more general Wald test.
The Wald test allows to test multiple hypotheses on multiple parameters. It is formulated as: $R\beta=q$ where R
|
11,954
|
Test model coefficient (regression slope) against some value
|
In the end, by far the easiest solution was to do the reparametrization:
gls(I(y - T*x) ~ x, ...)
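A quick sanity check of why this works (a pure-Python simulation with hypothetical numbers, not gls()): regressing $y - Tx$ on $x$ gives slope $\beta - T$, so the default test of a zero slope becomes a test of $\beta = T$.

```python
import random

random.seed(3)
T = 2.0                                           # hypothesised slope
x = [random.gauss(0, 1) for _ in range(200)]
y = [T * xi + random.gauss(0, 0.5) for xi in x]   # true slope equals T

# reparametrize: fit (y - T*x) ~ x; its slope estimates beta - T
z = [yi - T * xi for xi, yi in zip(x, y)]
mx, mz = sum(x) / len(x), sum(z) / len(z)
slope = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z)) / \
        sum((xi - mx) ** 2 for xi in x)
# slope is near 0, so the ordinary "slope = 0" test now tests beta = T
```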
|
Test model coefficient (regression slope) against some value
|
In the end, by far the easiest solution was to do the reparametrization:
gls(I(y - T*x) ~ x, ...)
|
Test model coefficient (regression slope) against some value
In the end, by far the easiest solution was to do the reparametrization:
gls(I(y - T*x) ~ x, ...)
|
Test model coefficient (regression slope) against some value
In the end, by far the easiest solution was to do the reparametrization:
gls(I(y - T*x) ~ x, ...)
|
11,955
|
Motivation of Expectation Maximization algorithm
|
Likelihood vs. log-likelihood
As has already been said, the $\log$ is introduced in maximum likelihood simply because it is generally easier to optimize sums than products. The reason we don't consider other monotonic functions is that the logarithm is (up to a constant factor) the unique function with the property of turning products into sums.
Another way to motivate the logarithm is the following: Instead of maximizing the probability of the data under our model, we could equivalently try to minimize the Kullback-Leibler divergence between the data distribution, $p_\text{data}(x)$,
and the model distribution, $p(x \mid \theta)$,
$$
\begin{align}
D_\text{KL}[p_\text{data}(x) \mid\mid p(x \mid \theta)]
&= \int p_\text{data}(x) \log \frac{p_\text{data}(x)}{p(x \mid \theta)} \, dx \\
&= \mathrm{const} - \int p_\text{data}(x)\log p(x \mid \theta) \, dx.
\end{align}
$$
The first term on the right-hand side is constant in the parameters. If we have $N$ samples from the data distribution (our data points), we can approximate the second term with the average log-likelihood of the data,
$$\int p_\text{data}(x)\log p(x \mid \theta) \, dx \approx \frac{1}{N} \sum_n \log p(x_n \mid \theta).$$
An alternative view of EM
I am not sure this is going to be the kind of explanation you are looking for, but I found the following view of expectation maximization much more enlightening than its motivation via Jensen's inequality (you can find a detailed description in Neal & Hinton (1998) or in Chris Bishop's PRML book, Chapter 9.3).
It is not difficult to show that
$$\log p(x \mid \theta)
= \int q(z \mid x) \log \frac{p(x, z \mid \theta)}{q(z \mid x)} \, dz
+ D_\text{KL}[q(z \mid x) \mid\mid p(z \mid x, \theta)]$$
for any $q(z \mid x)$. If we call the first term on the right-hand side $F(q, \theta)$, this implies that
$$F(q, \theta)
= \int q(z \mid x) \log \frac{p(x, z \mid \theta)}{q(z \mid x)} \, dz
= \log p(x \mid \theta) - D_\text{KL}[q(z \mid x) \mid\mid p(z \mid x, \theta)].$$
Because the KL divergence is always nonnegative, $F(q, \theta)$ is a lower bound on the log-likelihood for every fixed $q$. Now, EM can be viewed as alternately maximizing $F$ with respect to $q$ and $\theta$. In particular, by setting $q(z \mid x) = p(z \mid x, \theta)$ in the E-step, we minimize the KL divergence on the right-hand side and thus maximize $F$.
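The bound is easy to verify numerically. A toy pure-Python check (hypothetical model: an equal-weight mixture of $N(0,1)$ and $N(3,1)$, with $z$ the component label and a single observation $x$):

```python
import math

def npdf(x, mu):
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

x = 1.2
joint = [0.5 * npdf(x, 0.0), 0.5 * npdf(x, 3.0)]  # p(x, z | theta) for z = 0, 1
log_px = math.log(sum(joint))                     # log p(x | theta)

def F(q):
    """Lower bound F(q, theta) = sum_z q(z) log( p(x, z | theta) / q(z) )."""
    return sum(qz * math.log(pz / qz) for qz, pz in zip(q, joint) if qz > 0)

posterior = [j / sum(joint) for j in joint]       # E-step choice q = p(z | x, theta)
```

`F(posterior)` equals `log_px` exactly (the KL term vanishes at the posterior), while any other choice, e.g. the uniform `[0.5, 0.5]`, gives a strictly smaller value.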
|
Motivation of Expectation Maximization algorithm
|
Likelihood vs. log-likelihood
As has already been said, the $\log$ is introduced in maximum likelihood simply because it is generally easier to optimize sums than products. The reason we don't conside
|
Motivation of Expectation Maximization algorithm
Likelihood vs. log-likelihood
As has already been said, the $\log$ is introduced in maximum likelihood simply because it is generally easier to optimize sums than products. The reason we don't consider other monotonic functions is that the logarithm is (up to a constant factor) the unique function with the property of turning products into sums.
Another way to motivate the logarithm is the following: Instead of maximizing the probability of the data under our model, we could equivalently try to minimize the Kullback-Leibler divergence between the data distribution, $p_\text{data}(x)$,
and the model distribution, $p(x \mid \theta)$,
$$
\begin{align}
D_\text{KL}[p_\text{data}(x) \mid\mid p(x \mid \theta)]
&= \int p_\text{data}(x) \log \frac{p_\text{data}(x)}{p(x \mid \theta)} \, dx \\
&= \mathrm{const} - \int p_\text{data}(x)\log p(x \mid \theta) \, dx.
\end{align}
$$
The first term on the right-hand side is constant in the parameters. If we have $N$ samples from the data distribution (our data points), we can approximate the second term with the average log-likelihood of the data,
$$\int p_\text{data}(x)\log p(x \mid \theta) \, dx \approx \frac{1}{N} \sum_n \log p(x_n \mid \theta).$$
An alternative view of EM
I am not sure this is going to be the kind of explanation you are looking for, but I found the following view of expectation maximization much more enlightening than its motivation via Jensen's inequality (you can find a detailed description in Neal & Hinton (1998) or in Chris Bishop's PRML book, Chapter 9.3).
It is not difficult to show that
$$\log p(x \mid \theta)
= \int q(z \mid x) \log \frac{p(x, z \mid \theta)}{q(z \mid x)} \, dz
+ D_\text{KL}[q(z \mid x) \mid\mid p(z \mid x, \theta)]$$
for any $q(z \mid x)$. If we call the first term on the right-hand side $F(q, \theta)$, this implies that
$$F(q, \theta)
= \int q(z \mid x) \log \frac{p(x, z \mid \theta)}{q(z \mid x)} \, dz
= \log p(x \mid \theta) - D_\text{KL}[q(z \mid x) \mid\mid p(z \mid x, \theta)].$$
Because the KL divergence is always nonnegative, $F(q, \theta)$ is a lower bound on the log-likelihood for every fixed $q$. Now, EM can be viewed as alternately maximizing $F$ with respect to $q$ and $\theta$. In particular, by setting $q(z \mid x) = p(z \mid x, \theta)$ in the E-step, we minimize the KL divergence on the right-hand side and thus maximize $F$.
|
Motivation of Expectation Maximization algorithm
Likelihood vs. log-likelihood
As has already been said, the $\log$ is introduced in maximum likelihood simply because it is generally easier to optimize sums than products. The reason we don't conside
|
11,956
|
Motivation of Expectation Maximization algorithm
|
The EM algorithm has different interpretations and can arise in different forms in different applications.
It all starts with the likelihood function $p(x \vert \theta)$, or equivalently, the log-likelihood function $\log p(x \vert \theta)$ we would like to maximize. (We generally use the logarithm as it simplifies the calculation: it is strictly monotone, concave, and $\log(ab) = \log a + \log b$.) In an ideal world, the value of $p$ depends only on the model parameter $\theta$, so we can search through the space of $\theta$ and find one that maximizes $p$.
However, in many interesting real-world applications things are more complicated, because not all the variables are observed. Yes, we might directly observe $x$, but some other variables $z$ are unobserved. Because of the missing variables $z$, we are in a kind of chicken-and-egg situation: without $z$ we cannot estimate the parameter $\theta$, and without $\theta$ we cannot infer what the value of $z$ may be.
This is where the EM algorithm comes into play. We start with an initial guess of the model parameters $\theta$ and derive the expected values of the missing variables $z$ (i.e., the E step). When we have the values of $z$, we can maximize the likelihood w.r.t. the parameters $\theta$ (i.e., the M step, corresponding to the $\arg \max$ equation in the problem statement). With this $\theta$ we can derive the new expected values of $z$ (another E step), and so on and so forth. In other words, in each step we assume that one of the two, $z$ and $\theta$, is known. We repeat this iterative process until the likelihood cannot be increased anymore.
This is the EM algorithm in a nutshell. It is well known that the likelihood will never decrease during this iterative EM process. But keep in mind that the EM algorithm doesn't guarantee a global optimum. That is, it might end up at a local optimum of the likelihood function.
The appearance of $\log$ in the equation of $\theta^{(k+1)}$ is inevitable, because here the function you would like to maximize is written as a log-likelihood.
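The E/M iteration described above can be sketched for a toy two-component Gaussian mixture with known unit variances and equal weights (a minimal pure-Python illustration on hypothetical simulated data, not a general EM implementation):

```python
import math, random

random.seed(0)
# observations x: a mix of two unit-variance Gaussians with means 0 and 4
data = [random.gauss(0, 1) for _ in range(100)] + \
       [random.gauss(4, 1) for _ in range(100)]

def npdf(x, mu):
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

def log_lik(mu):
    return sum(math.log(0.5 * npdf(x, mu[0]) + 0.5 * npdf(x, mu[1]))
               for x in data)

mu = [-1.0, 1.0]                     # initial guess of the parameters theta
lls = [log_lik(mu)]
for _ in range(20):
    # E step: expected values of the hidden indicators z (responsibilities)
    resp = []
    for x in data:
        a, b = 0.5 * npdf(x, mu[0]), 0.5 * npdf(x, mu[1])
        resp.append((a / (a + b), b / (a + b)))
    # M step: maximize w.r.t. theta -> responsibility-weighted means
    mu = [sum(r[k] * x for r, x in zip(resp, data)) / sum(r[k] for r in resp)
          for k in range(2)]
    lls.append(log_lik(mu))
```

As claimed above, the log-likelihood `lls` never decreases from one iteration to the next, and the means separate toward the two clusters.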
|
Motivation of Expectation Maximization algorithm
|
The EM algorithm has different interpretations and can arise in different forms in different applications.
It all starts with the likelihood function $p(x \vert \theta)$, or equivalently, the log-like
|
Motivation of Expectation Maximization algorithm
The EM algorithm has different interpretations and can arise in different forms in different applications.
It all starts with the likelihood function $p(x \vert \theta)$, or equivalently, the log-likelihood function $\log p(x \vert \theta)$ we would like to maximize. (We generally use logarithm as it simplifies the calculation: It is strictly monotone, concave, and $\log(ab) = \log a + \log b$.) In an ideal world, the value of $p$ depends only on the model parameter $\theta$, so we can search through the space of $\theta$ and find one that maximizes $p$.
However, in many interesting real-world applications things are more complicated, because not all the variables are observed. Yes, we might directly observe $x$, but some other variables $z$ are unobserved. Because of the missing variables $z$, we are in a kind of chicken-and-egg situation: without $z$ we cannot estimate the parameter $\theta$, and without $\theta$ we cannot infer what the value of $z$ may be.
This is where the EM algorithm comes into play. We start with an initial guess of the model parameters $\theta$ and derive the expected values of the missing variables $z$ (i.e., the E step). When we have the values of $z$, we can maximize the likelihood w.r.t. the parameters $\theta$ (i.e., the M step, corresponding to the $\arg \max$ equation in the problem statement). With this $\theta$ we can derive the new expected values of $z$ (another E step), and so on and so forth. In other words, in each step we assume that one of the two, $z$ and $\theta$, is known. We repeat this iterative process until the likelihood cannot be increased anymore.
This is the EM algorithm in a nutshell. It is well known that the likelihood will never decrease during this iterative EM process. But keep in mind that the EM algorithm doesn't guarantee a global optimum. That is, it might end up at a local optimum of the likelihood function.
The appearance of $\log$ in the equation of $\theta^{(k+1)}$ is inevitable, because here the function you would like to maximize is written as a log-likelihood.
|
Motivation of Expectation Maximization algorithm
The EM algorithm has different interpretations and can arise in different forms in different applications.
It all starts with the likelihood function $p(x \vert \theta)$, or equivalently, the log-like
|
11,957
|
Motivation of Expectation Maximization algorithm
|
The paper that I found clarifying with respect to expectation-maximization is Bayesian K-Means as a "Maximization-Expectation" Algorithm (pdf) by Welling and Kurihara.
Suppose we have a probabilistic model $p(x,z,\theta)$ with $x$ observations, $z$ hidden random variables, and a total of $\theta$ parameters. We are given a dataset $D$ and are forced (by higher powers) to establish $p(z,\theta|D)$.
1. Gibbs sampling
We can approximate $p(z,\theta|D)$ by sampling. Gibbs sampling gives $p(z,\theta|D)$ by alternating:
$$
\theta \sim p(\theta|z,D) \\
z \sim p(z|\theta,D)
$$
2. Variational Bayes
Instead, we can try to establish a distribution $q(\theta)$ and $q(z)$ and minimize the difference with the distribution we are after $p(\theta,z|D)$. The difference between distributions has a convenient fancy name, the KL-divergence. To minimize $KL[q(\theta)q(z)||p(\theta,z|D)]$ we update:
$$
q(\theta) \propto \exp (E [\log p(\theta,z,D) ]_{q(z)} ) \\
q(z) \propto \exp (E [\log p(\theta,z,D) ]_{q(\theta)} )
$$
3. Expectation-Maximization
To come up with full-fledged probability distributions for both $z$ and $\theta$ might be considered extreme. Why don't we instead consider a point estimate for one of these and keep the other nice and nuanced? In EM the parameter $\theta$ is established as the one being unworthy of a full distribution, and set to its MAP (Maximum A Posteriori) value, $\theta^*$.
$$
\theta^* = \underset{\theta}{\operatorname{argmax}} E [\log p(\theta,z,D) ]_{q(z)} \\
q(z) = p(z|\theta^*,D)
$$
Here $\theta^* \in \operatorname{argmax}$ would actually be a better notation: the argmax operator can return multiple values. But let's not nitpick. Compared to variational Bayes you see that correcting for the $\log$ by $\exp$ doesn't change the result, so that is not necessary anymore.
4. Maximization-Expectation
There is no reason to treat $z$ as a spoiled child. We can just as well use point estimates $z^*$ for our hidden variables and give the parameters $\theta$ the luxury of a full distribution.
$$
z^* = \underset{z}{\operatorname{argmax}} E [\log p(\theta,z,D) ]_{q(\theta)} \\
q(\theta) = p(\theta|z^*,D)
$$
If our hidden variables $z$ are indicator variables, we suddenly have a computationally cheap method to perform inference on the number of clusters. This is, in other words, model selection (or automatic relevance detection, or imagine another fancy name).
5. Iterated conditional modes
Of course, the poster child of approximate inference is to use point estimates for both the parameters $\theta$ and the hidden variables $z$.
$$
\theta^* = \underset{\theta}{\operatorname{argmax}} p(\theta,z^*,D) \\
z^* = \underset{z}{\operatorname{argmax}} p(\theta^*,z,D) \\
$$
To see how Maximization-Expectation plays out I highly recommend the article. In my opinion, the strength of this article is however not the application to a $k$-means alternative, but this lucid and concise exposition of approximation.
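For intuition on the last scheme: iterated conditional modes with hard assignments $z^*$ and mean updates $\theta^*$ is exactly the k-means update rule. A toy pure-Python sketch (hypothetical simulated data, two components with unit variances and equal weights assumed):

```python
import random

random.seed(2)
data = [random.gauss(0, 1) for _ in range(50)] + \
       [random.gauss(5, 1) for _ in range(50)]

# iterated conditional modes: alternate point estimates of z (hard
# assignments to the nearest mean) and theta (the cluster means)
mu = [min(data), max(data)]
for _ in range(10):
    z = [0 if abs(x - mu[0]) <= abs(x - mu[1]) else 1 for x in data]  # z* step
    for k in range(2):                                                # theta* step
        members = [x for x, zi in zip(data, z) if zi == k]
        if members:
            mu[k] = sum(members) / len(members)
```

The means converge near the true cluster centers 0 and 5, which is the familiar k-means behaviour.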
|
Motivation of Expectation Maximization algorithm
|
The paper that I found clarifying with respect to expectation-maximization is Bayesian K-Means as a "Maximization-Expectation" Algorithm (pdf) by Welling and Kurihara.
Suppose we have a probabilistic
|
Motivation of Expectation Maximization algorithm
The paper that I found clarifying with respect to expectation-maximization is Bayesian K-Means as a "Maximization-Expectation" Algorithm (pdf) by Welling and Kurihara.
Suppose we have a probabilistic model $p(x,z,\theta)$ with $x$ observations, $z$ hidden random variables, and a total of $\theta$ parameters. We are given a dataset $D$ and are forced (by higher powers) to establish $p(z,\theta|D)$.
1. Gibbs sampling
We can approximate $p(z,\theta|D)$ by sampling. Gibbs sampling gives $p(z,\theta|D)$ by alternating:
$$
\theta \sim p(\theta|z,D) \\
z \sim p(z|\theta,D)
$$
2. Variational Bayes
Instead, we can try to establish a distribution $q(\theta)$ and $q(z)$ and minimize the difference with the distribution we are after $p(\theta,z|D)$. The difference between distributions has a convenient fancy name, the KL-divergence. To minimize $KL[q(\theta)q(z)||p(\theta,z|D)]$ we update:
$$
q(\theta) \propto \exp (E [\log p(\theta,z,D) ]_{q(z)} ) \\
q(z) \propto \exp (E [\log p(\theta,z,D) ]_{q(\theta)} )
$$
3. Expectation-Maximization
To come up with full-fledged probability distributions for both $z$ and $\theta$ might be considered extreme. Why don't we instead consider a point estimate for one of these and keep the other nice and nuanced? In EM the parameter $\theta$ is established as the one being unworthy of a full distribution, and set to its MAP (Maximum A Posteriori) value, $\theta^*$.
$$
\theta^* = \underset{\theta}{\operatorname{argmax}} E [\log p(\theta,z,D) ]_{q(z)} \\
q(z) = p(z|\theta^*,D)
$$
Here $\theta^* \in \operatorname{argmax}$ would actually be a better notation: the argmax operator can return multiple values. But let's not nitpick. Compared to variational Bayes you see that correcting for the $\log$ by $\exp$ doesn't change the result, so that is not necessary anymore.
4. Maximization-Expectation
There is no reason to treat $z$ as a spoiled child. We can just as well use point estimates $z^*$ for our hidden variables and give the parameters $\theta$ the luxury of a full distribution.
$$
z^* = \underset{z}{\operatorname{argmax}} E [\log p(\theta,z,D) ]_{q(\theta)} \\
q(\theta) = p(\theta|z^*,D)
$$
If our hidden variables $z$ are indicator variables, we suddenly have a computationally cheap method to perform inference on the number of clusters. This is, in other words, model selection (or automatic relevance detection, or imagine another fancy name).
5. Iterated conditional modes
Of course, the poster child of approximate inference is to use point estimates for both the parameters $\theta$ and the hidden variables $z$.
$$
\theta^* = \underset{\theta}{\operatorname{argmax}} p(\theta,z^*,D) \\
z^* = \underset{z}{\operatorname{argmax}} p(\theta^*,z,D) \\
$$
To see how Maximization-Expectation plays out I highly recommend the article. In my opinion, the strength of this article is however not the application to a $k$-means alternative, but this lucid and concise exposition of approximation.
|
Motivation of Expectation Maximization algorithm
The paper that I found clarifying with respect to expectation-maximization is Bayesian K-Means as a "Maximization-Expectation" Algorithm (pdf) by Welling and Kurihara.
Suppose we have a probabilistic
|
11,958
|
Motivation of Expectation Maximization algorithm
|
There is a useful optimisation technique underlying the EM algorithm. However, it's usually expressed in the language of probability theory so it's hard to see that at the core is a method that has nothing to do with probability and expectation.
Consider the problem of maximising $$g(x)=\sum_i\exp(f_i(x))$$ (or equivalently $\log g(x)$) with respect to $x$. If you write down an expression for $g'(x)$ and set it equal to zero you will often end up with a transcendental equation to solve. These can be nasty.
Now suppose that the $f_i$ play well together in the sense that linear combinations of them give you something easy to optimise. For example, if all of the $f_i(x)$ are quadratic in $x$ then a linear combination of the $f_i(x)$ will also be quadratic, and hence easy to optimise.
Given this supposition, it'd be cool if, in order to optimise $\log g(x)=\log \sum_i\exp(f_i(x))$ we could somehow shuffle the $\log$ past the $\sum$ so it could meet the $\exp$s and eliminate them. Then the $f_i$ could play together. But we can't do that.
Let's do the next best thing. We'll make another function $h$ that is similar to $g$. And we'll make it out of linear combinations of the $f_i$.
Let's say $x_0$ is a guess for an optimal value. We'd like to improve this. Let's find another function $h$ that matches $g$ and its derivative at $x_0$, i.e. $g(x_0)=h(x_0)$ and $g'(x_0)=h'(x_0)$. If you plot a graph of $h$ in a small neighbourhood of $x_0$ it's going to look similar to $g$.
You can show that $$g'(x)=\sum_i f_i'(x)\exp(f_i(x)).$$ We want something that matches this at $x_0$. There's a natural choice: $$h(x)=\mbox{constant}+\sum_i f_i(x)\exp(f_i(x_0)).$$ You can see they match at $x=x_0$. We get $$h'(x)=\sum_i f_i'(x)\exp(f_i(x_0)).$$ As $x_0$ is a constant we have a simple linear combination of the $f_i$ whose derivative matches $g$. We just have to choose the constant in $h$ to make $g(x_0)=h(x_0)$.
So starting with $x_0$, we form $h(x)$ and optimise that. Because it's similar to $g(x)$ in the neighbourhood of $x_0$ we hope the optimum of $h$ is similar to the optimum of $g$. Once you have a new estimate, construct the next $h$ and repeat.
I hope this has motivated the choice of $h$. This is exactly the procedure that takes place in EM.
But there's one more important point. Using Jensen's inequality you can show that $h(x)\le g(x)$. This means that when you optimise $h(x)$ you always get an $x$ that makes $g$ bigger compared to $g(x_0)$. So even though $h$ was motivated by its local similarity to $g$, it's safe to globally maximise $h$ at each iteration. The hope I mentioned above isn't required.
This also gives a clue to when to use EM: when linear combinations of the arguments to the $\exp$ function are easier to optimise. For example when they're quadratic - as happens when working with mixtures of Gaussians. This is particularly relevant to statistics where many of the standard distributions are from exponential families.
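The construction is easy to check numerically. A toy pure-Python sketch (two hypothetical quadratic $f_i$, so $g(x) = e^{f_1(x)} + e^{f_2(x)}$):

```python
import math

# toy case: two quadratic f_i
f = [lambda x: -(x - 1) ** 2, lambda x: -0.5 * (x + 2) ** 2]

def g(x):
    return sum(math.exp(fi(x)) for fi in f)

x0 = 0.5                                  # current guess
w = [math.exp(fi(x0)) for fi in f]        # frozen weights exp(f_i(x0))
const = g(x0) - sum(wi * fi(x0) for wi, fi in zip(w, f))

def h(x):
    """Surrogate: a linear combination of the f_i, matched to g at x0."""
    return const + sum(wi * fi(x) for wi, fi in zip(w, f))
```

Evaluating on a grid confirms that `h` touches `g` at `x0` and stays below it everywhere (the pointwise form $e^t \ge 1 + t$ of the bound), so maximizing `h` can only increase `g`.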
|
Motivation of Expectation Maximization algorithm
|
There is a useful optimisation technique underlying the EM algorithm. However, it's usually expressed in the language of probability theory so it's hard to see that at the core is a method that has no
|
Motivation of Expectation Maximization algorithm
There is a useful optimisation technique underlying the EM algorithm. However, it's usually expressed in the language of probability theory so it's hard to see that at the core is a method that has nothing to do with probability and expectation.
Consider the problem of maximising $$g(x)=\sum_i\exp(f_i(x))$$ (or equivalently $\log g(x)$) with respect to $x$. If you write down an expression for $g'(x)$ and set it equal to zero you will often end up with a transcendental equation to solve. These can be nasty.
Now suppose that the $f_i$ play well together in the sense that linear combinations of them give you something easy to optimise. For example, if all of the $f_i(x)$ are quadratic in $x$ then a linear combination of the $f_i(x)$ will also be quadratic, and hence easy to optimise.
Given this supposition, it'd be cool if, in order to optimise $\log g(x)=\log \sum_i\exp(f_i(x))$ we could somehow shuffle the $\log$ past the $\sum$ so it could meet the $\exp$s and eliminate them. Then the $f_i$ could play together. But we can't do that.
Let's do the next best thing. We'll make another function $h$ that is similar to $g$. And we'll make it out of linear combinations of the $f_i$.
Let's say $x_0$ is a guess for an optimal value. We'd like to improve this. Let's find another function $h$ that matches $g$ and its derivative at $x_0$, i.e. $g(x_0)=h(x_0)$ and $g'(x_0)=h'(x_0)$. If you plot a graph of $h$ in a small neighbourhood of $x_0$ it's going to look similar to $g$.
You can show that $$g'(x)=\sum_i f_i'(x)\exp(f_i(x)).$$ We want something that matches this at $x_0$. There's a natural choice: $$h(x)=\mbox{constant}+\sum_i f_i(x)\exp(f_i(x_0)).$$ You can see they match at $x=x_0$. We get $$h'(x)=\sum_i f_i'(x)\exp(f_i(x_0)).$$ As $x_0$ is a constant we have a simple linear combination of the $f_i$ whose derivative at $x_0$ matches $g'(x_0)$. We just have to choose the constant in $h$ to make $g(x_0)=h(x_0)$.
So starting with $x_0$, we form $h(x)$ and optimise that. Because it's similar to $g(x)$ in the neighbourhood of $x_0$ we hope the optimum of $h$ is similar to the optimum of $g$. Once you have a new estimate, construct the next $h$ and repeat.
I hope this has motivated the choice of $h$. This is exactly the procedure that takes place in EM.
But there's one more important point. Using Jensen's inequality you can show that $h(x)\le g(x)$. This means that when you optimise $h(x)$ you always get an $x$ that makes $g$ bigger compared to $g(x_0)$. So even though $h$ was motivated by its local similarity to $g$, it's safe to globally maximise $h$ at each iteration. The hope I mentioned above isn't required.
This also gives a clue to when to use EM: when linear combinations of the arguments to the $\exp$ function are easier to optimise. For example when they're quadratic - as happens when working with mixtures of Gaussians. This is particularly relevant to statistics where many of the standard distributions are from exponential families.
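To see the scheme in action, here is a tiny numeric sketch (my own illustration, not part of the original answer) with made-up quadratic $f_i(x)=-(x-m_i)^2/2$: the surrogate $h$ built at the current guess is a weighted sum of quadratics, so maximising it reduces to taking a weighted average of the $m_i$.

```python
import math

# Toy instance of the surrogate scheme above, with made-up quadratic
# f_i(x) = -(x - m_i)^2 / 2.  The surrogate h(x) = const + sum_i exp(f_i(x0)) f_i(x)
# is a weighted sum of quadratics, so its maximiser is a weighted average of the m_i.

def g(x, ms):
    return sum(math.exp(-(x - m) ** 2 / 2) for m in ms)

def surrogate_step(x0, ms):
    w = [math.exp(-(x0 - m) ** 2 / 2) for m in ms]   # weights exp(f_i(x0))
    return sum(wi * mi for wi, mi in zip(w, ms)) / sum(w)

ms = [0.0, 1.0, 5.0]   # made-up example
x = 0.9                # initial guess x0
for _ in range(50):
    x_new = surrogate_step(x, ms)
    assert g(x_new, ms) >= g(x, ms) - 1e-12   # each step never decreases g
    x = x_new
```

Each surrogate maximisation provably never decreases $g$, which is the Jensen's-inequality guarantee described above.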
|
11,959
|
Motivation of Expectation Maximization algorithm
|
As you said, I will not go into technical details. There are quite a few very nice tutorials. One of my favourites is Andrew Ng's lecture notes. Take a look also at the references here.
EM is naturally motivated in mixture models and models with hidden factors in general. Take for example the case of Gaussian mixture models (GMM). Here we model the density of the observations as a weighted sum of $K$ gaussians:
$$p(x) = \sum_{i=1}^{K}\pi_{i} \mathcal{N}(x|\mu_{i}, \Sigma_{i})$$
where $\pi_{i}$ is the probability that the sample $x$ was caused/generated by the ith component, $\mu_{i}$ is the mean of the distribution, and $\Sigma_{i}$ is the covariance matrix.
The way to understand this expression is the following: each data sample has been generated/caused by one component, but we do not know which one. The approach is then to express the uncertainty in terms of probability ($\pi_{i}$ represents the chances that the ith component can account for that sample), and take the weighted sum.
As a concrete example, imagine you want to cluster text documents. The idea is to assume that each document belongs to a topic (science, sports, ...) which you do not know beforehand! The possible topics are hidden variables. Then you are given a bunch of documents, and by counting n-grams or whatever features you extract, you want to find those clusters and see to which cluster each document belongs.
EM is a procedure which attacks this problem step-wise: in the expectation step it attempts to improve the assignments of the samples it has achieved so far; in the maximization step it improves the parameters of the mixture, in other words, the form of the clusters.
The point is not using monotonic functions but convex functions. And the reason is Jensen's inequality, which ensures that the estimates of the EM algorithm will improve at every step.
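To make the two steps concrete, here is a minimal numeric sketch (my own illustration, not from the original answer) of EM for a 1-D, two-component Gaussian mixture; all values are made up, and a real implementation would work in the log domain and add convergence checks.

```python
import numpy as np

# Minimal EM for a 1-D, two-component Gaussian mixture on simulated data.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

pi = np.array([0.5, 0.5])     # mixture weights pi_i
mu = np.array([-1.0, 1.0])    # initial means
var = np.array([1.0, 1.0])    # initial variances

def normal_pdf(x, m, v):
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

for _ in range(100):
    # E-step: responsibility of each component for each sample
    dens = pi * normal_pdf(x[:, None], mu, var)       # shape (n, 2)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances from responsibilities
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
```

With these well-separated clusters the estimated means land near the simulated $-2$ and $3$.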
|
11,960
|
What is this type of circular-link visualization called?
|
Take a look at Circos:
Circos is a software package for visualizing data and information. It visualizes data in a circular layout — this makes Circos ideal for exploring relationships between objects or positions.
The flowing data blog also had a post on this that you might find interesting:
Visual Representation of Tabular Information – How to Fix the Uncommunicative Table
|
11,961
|
What is this type of circular-link visualization called?
|
I found that the dependency graph in Flare is also similar to what I want:
http://flare.prefuse.org/apps/dependency_graph
|
11,962
|
What is this type of circular-link visualization called?
|
It is called a Chord diagram.
Now that you know its name you can search for the tool that best suits you. I don't think it is nice to advertise tools.
|
11,963
|
What is this type of circular-link visualization called?
|
I would just add:
As you point out, Flare has the dependency graph, which Aleks Jakulin argued was similar but better. This was based originally on "Hierarchical Edge Bundles: Visualization of Adjacency Relations in Hierarchical Data" (Holten 2006).
I personally prefer to use Protovis to Flare directly, and you can look at Mike Bostock's example of the same graphic. Here is also an example of an Arc Diagram in Protovis, which is very similar but laid out linearly.
|
11,964
|
What is this type of circular-link visualization called?
|
For the #Rstats crowd there are two other options.
circlize library (package, vignette):
This package aims to implement circos layout in R.
RCircos library (CRAN):
RCircos package provides a simple and flexible way to generate Circos
2D track plot images for genomic data visualization.
|
11,965
|
Loss function autoencoder vs variational-autoencoder or MSE-loss vs binary-cross-entropy-loss
|
I don't believe there's some kind of deep, meaningful rationale at play here - it's a showcase example running on MNIST, it's pretty error-tolerant.
Optimizing for MSE means your generated output intensities are symmetrically close to the input intensities. A higher-than-training intensity is penalized by the same amount as an equally valued lower intensity.
Cross-entropy loss is asymmetrical.
If your true intensity is high, e.g. 0.8, generating a pixel with the intensity of 0.9 is penalized more than generating a pixel with intensity of 0.7.
Conversely if it's low, e.g. 0.3, predicting an intensity of 0.4 is penalized less than a predicted intensity of 0.2.
You might have guessed by now - cross-entropy loss is biased towards 0.5 whenever the ground truth is not binary. For a ground truth of 0.5, the per-pixel zero-normalized loss is equal to 2*MSE.
This is quite obviously wrong! The end result is that you're training the network to always generate images that are blurrier than the inputs. You're actively penalizing any result that would enhance the output sharpness more than those that make it worse!
MSE is not immune to this behavior either, but at least it's just unbiased and not biased in the completely wrong direction.
However, before you run off to write a loss function with the opposite bias - just keep in mind that pushing outputs away from 0.5 will in turn mean the decoded images will have very hard, pixelated edges.
That is - or at least I very strongly suspect is - why adversarial methods yield better results - the adversarial component is essentially a trainable, 'smart' loss function for the (possibly variational) autoencoder.
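A quick numeric check of the asymmetry described above (my own illustration, mirroring the 0.8/0.9/0.7 example): with a non-binary target, binary cross-entropy penalizes over- and under-shooting by different amounts, while squared error is symmetric.

```python
import math

def bce(t, p):
    # per-pixel binary cross-entropy with soft target t and prediction p
    return -(t * math.log(p) + (1 - t) * math.log(1 - p))

def mse(t, p):
    return (t - p) ** 2

t = 0.8
over, under = bce(t, 0.9), bce(t, 0.7)
assert over > under                              # predicting 0.9 costs more than 0.7
assert abs(mse(t, 0.9) - mse(t, 0.7)) < 1e-12    # squared error treats both the same
```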
|
11,966
|
Loss function autoencoder vs variational-autoencoder or MSE-loss vs binary-cross-entropy-loss
|
This discussion suggests that binary cross entropy is used in the VAE case mainly for its better optimization behavior. Another reason it works well is that the MNIST dataset roughly follows a multivariate Bernoulli distribution - the pixel values are close to either zero or one and binarization does not change it much. For a more in-depth explanation of this, see Using a Bernoulli VAE on real-valued observations.
|
11,967
|
Loss function autoencoder vs variational-autoencoder or MSE-loss vs binary-cross-entropy-loss
|
It depends on how you assume the model for the likelihood. In other words, in variational autoencoders you seek to maximize the ELBO (evidence lower bound), which contains a $KL(q||p)$ term managed by the encoder and a second term known as the reconstruction error, $E_{q}[\log p(x|z)]$, managed by the decoder and requiring sampling. Here is where the choice of the model for $p(x|z)$ comes into play. If you assume it follows a normal distribution you will end up with MSE minimization, since $p(x|z)$ can be reformulated as $p(x|\hat{x}) \sim \mathcal{N}(\hat{x},\sigma)$; if you assume a multinoulli distribution you will use cross entropy.
Just a side note taken from Goodfellow's book:
Many authors use the term “cross-entropy” to identify specifically the negative
log-likelihood of a Bernoulli or softmax distribution, but that is a misnomer.
Any loss consisting of a negative log-likelihood is a cross-entropy between the
empirical distribution defined by the training set and the probability
distribution defined by the model. For example, mean squared error is the
cross-entropy between the empirical distribution and a Gaussian model.
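A small sanity check of the quote's last sentence (my own illustration, with arbitrary $(x,\hat{x})$ pairs): with a fixed-variance Gaussian model, the negative log-likelihood is squared error up to scale plus an additive constant.

```python
import math

def gauss_nll(x, xhat, sigma=1.0):
    # -log N(x | xhat, sigma^2)
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (x - xhat) ** 2 / (2 * sigma ** 2)

const = 0.5 * math.log(2 * math.pi)
for x, xhat in [(0.2, 0.5), (1.0, -1.0), (3.0, 3.0)]:
    # NLL == constant + (1/2) * squared error when sigma = 1
    assert abs(gauss_nll(x, xhat) - (const + 0.5 * (x - xhat) ** 2)) < 1e-12
```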
|
11,968
|
Zero inflated distributions, what are they really?
|
fit a logistic regression first calculate the probability of zeroes, and then I could remove all the zeroes, and then fit a regular regression using my choice of distribution (poisson e.g.)
You're absolutely right. This is one way to fit a zero-inflated model (or as Achim Zeileis points out in the comments, this is strictly a "hurdle model", which one could view as a special case of a zero-inflated model).
The difference between the procedure you described and an "all-in-one" zero-inflated model is error propagation. Like all other two-step procedures in statistics, the overall uncertainty of your predictions in step 2 won't take into account the uncertainty as to whether the prediction should be 0 or not.
Sometimes this is a necessary evil. Fortunately, it's not necessary in this case. In R, you can use pscl::hurdle() or fitdistrplus::fitdist().
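As a rough sketch of the two-step idea (my own illustration, with no covariates and made-up simulation values; a real hurdle regression would use something like pscl::hurdle in R): step 1 estimates $P(Y=0)$ directly, and step 2 fits a zero-truncated Poisson rate to the positive counts by matching the truncated mean $m=\lambda/(1-e^{-\lambda})$.

```python
import math
import random

random.seed(0)
lam_true, p0_true = 2.0, 0.4   # made-up simulation values

def draw():
    if random.random() < p0_true:
        return 0
    while True:                      # reject zeros -> zero-truncated Poisson
        k, p, u = 0, math.exp(-lam_true), random.random()
        while u > p:                 # sequential-search Poisson sampler
            k += 1
            u -= p
            p *= lam_true / k
        if k > 0:
            return k

data = [draw() for _ in range(20000)]

p0_hat = sum(d == 0 for d in data) / len(data)   # step 1: P(Y = 0)
pos = [d for d in data if d > 0]
m = sum(pos) / len(pos)
lam = m                                          # step 2: solve lam = m * (1 - exp(-lam))
for _ in range(100):
    lam = m * (1 - math.exp(-lam))
```

With these settings `p0_hat` and `lam` should land close to the simulated 0.4 and 2.0 - but, as the answer notes, this two-step fit does not propagate step-1 uncertainty into step 2.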
|
11,969
|
Zero inflated distributions, what are they really?
|
The basic idea you describe is a valid approach and it is often called a hurdle model (or two-part model) rather than a zero-inflated model.
However, it is crucial that the model for the non-zero data accounts for having the zeros removed. If you fit a Poisson model to the data without zeros this will almost certainly produce a poor fit because the Poisson distribution always has a positive probability for zero. The natural alternative is to use a zero-truncated Poisson distribution which is the classic approach to hurdle regression for count data.
The main difference between zero-inflated models and hurdle models is which probability is modeled in the binary part of the regression. For hurdle models it is simply the probability of zero vs. non-zero. In zero-inflated models it is the probability to have an excess zero, i.e., the probability of a zero that is not caused by the un-inflated distribution (e.g., Poisson).
For a discussion of both hurdle and zero-inflation models for count data in R, see our manuscript published in JSS and also shipped as a vignette to the pscl package: http://dx.doi.org/10.18637/jss.v027.i08
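The distinction can be made concrete with plain pmfs (my own illustration; $\pi$, $p_0$ and $\lambda$ are arbitrary example values): in a hurdle model $P(0)$ is modeled directly and positives follow a zero-truncated Poisson, whereas in a zero-inflated model the zero mass is $\pi$ plus whatever zeros the un-inflated Poisson produces anyway.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam, pi, p0 = 2.0, 0.3, 0.3   # arbitrary example values

def hurdle_pmf(k):
    if k == 0:
        return p0
    # positives: zero-truncated Poisson, renormalized over k >= 1
    return (1 - p0) * poisson_pmf(k, lam) / (1 - poisson_pmf(0, lam))

def zip_pmf(k):
    # zero-inflated: excess zero mass pi on top of the Poisson's own zeros
    return (pi if k == 0 else 0.0) + (1 - pi) * poisson_pmf(k, lam)

assert abs(sum(hurdle_pmf(k) for k in range(60)) - 1) < 1e-9   # proper pmf
assert abs(sum(zip_pmf(k) for k in range(60)) - 1) < 1e-9      # proper pmf
assert zip_pmf(0) > pi   # total zero mass exceeds the inflation probability alone
```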
|
11,970
|
Zero inflated distributions, what are they really?
|
What ssdecontrol said is very correct. But I'd like to add a few cents to the discussion.
I just watched the lecture on Zero Inflated models for count data by Richard McElreath on YouTube.
It makes sense to estimate p while controlling for the variables that are explaining the rate of the pure Poisson model, especially if you consider that the chance of an observed zero being originated from the Poisson distribution is not 100%.
It also makes sense when you consider the parameters of the model, since you end up with two variables to estimate, p and the rate of the Poisson model, and two equations: the case when the count is zero and the case when it is different from zero.
Image source : Statistical Rethinking - A Bayesian Course with Examples in R and Stan by Richard McElreath
|
11,971
|
Can someone explain the importance of mean stationarity in time series?
|
In the case of time series forecasting, first of all, you need to understand that stationarity is important mostly in the context of ARMA and related models (AR: Auto-Regressive, MA: Moving Average). There are other types of time series forecasting models where stationarity is not a requirement, such as Holt-Winters or Facebook Prophet.
Here are two intuitive, if not entirely mathematically rigorous, explanations of why mean stationarity is important in the ARMA case:
The AR component of ARMA models treats time series modeling as a supervised learning problem, $Y_t = a_1Y_{t-1}+...+a_nY_{t-n}+c+\sigma(t)$. A common rule of thumb in supervised learning is that the distribution of the training data and the distribution of the test data should be the same, otherwise your model will perform poorly on out-of-sample tests and on production data. Since for time series data, your train set is the past, and your test set is the future, the stationarity requirement is simply ensuring that the distribution stays the same over time. This way you avoid the problems that come with training your model on data that has a different distribution than the test/production distribution. And mean stationarity in particular is just saying that the mean of the train set and the mean of the test set should stay the same.
An even simpler consideration: take the most basic ARMA model possible, an $AR(1)$ model: $$Y_t = aY_{t-1}+c+ \sigma$$ so the recursive relationship for estimating one step based on the previous one is: $$\hat{Y}_t = a\hat{Y}_{t-1}+c$$ $$\hat{Y}_t - c = a\hat{Y}_{t-1}$$ Taking the expected value: $$E(\hat{Y}_t) - c = aE(\hat{Y}_{t-1})$$ meaning that: $$a = \frac{E(\hat{Y}_t) - c}{E(\hat{Y}_{t-1})}$$ so if we want $a$ to stay constant over time, which is the starting assumption of an $AR(1)$ model since we want it to be similar to a linear regression, then $E(\hat{Y}_t)$ has to stay the same for all $t$, i.e. your series has to be mean stationary.
The above considerations are applicable as well to the general ARMA case, with $AR(p)$ and $MA(q)$ terms, although the math is somewhat more complicated than what I describe, but intuitively, the idea is still the same. The 'I' in ARIMA stands for "Integrated" which refers to the differencing process that allows one to transform a more general time series into one that is stationary and can be modeled using ARMA processes.
I disagree with @Alexis's characterization that "that time series are stationary is more or less embodying the worldview that the past does not matter" - if anything it is the other way around: transforming a time series into a stationary one for modeling purposes is exactly about seeing whether there are any causal/deterministic structures in the time series beyond just trend and seasonality. I.e. does the past impact the present or the future in more subtle ways than just the large-scale variations? (But I might simply be misinterpreting what she is trying to say.)
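As a small illustration of the $AR(1)$ point (my own sketch; $a$ and $c$ are made-up values with $|a|<1$), simulating many paths shows the cross-sectional mean settling at the constant stationary value $c/(1-a)$, which is exactly what mean stationarity buys the model.

```python
import numpy as np

# Simulate many AR(1) paths Y_t = a*Y_{t-1} + c + noise and watch the
# cross-sectional mean settle at the stationary value c / (1 - a).
rng = np.random.default_rng(1)
a, c = 0.5, 1.0
paths, steps = 2000, 1000

y = np.zeros(paths)
for _ in range(steps):
    y = a * y + c + rng.normal(size=paths)

assert abs(y.mean() - c / (1 - a)) < 0.2   # stationary mean is c/(1-a) = 2 here
```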
|
11,972
|
Can someone explain the importance of mean stationarity in time series?
|
Stationarity is important because it is a mathematically strong assumption that's still much weaker than independence or finite-range dependence.
In some settings, it's important primarily for the mathematical tractability: it's easier to first find out what is true for stationary time series, then you can work on how to relax the assumptions. Perhaps you only need weak-sense stationarity, or mean stationarity plus some tail condition, or whatever. Or perhaps you need stationarity for a result to hold exactly, but it holds approximately under weaker assumptions.
In other settings stationarity is important because there are so many ways to be non-stationary that it would be hard to handle every one of them. If a problem can be approximated by a stationary series that's a big practical advantage. Here it's important to remember that the stationary series $X(t)$ that appears in the maths may not be your raw data. For example, traditional ARMA models are stationary, but you would typically want to remove season and trend relationships before fitting one. You might want to log-transform a series that has increasing mean and variance. And so on.
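As an editor-added, illustrative sketch of that last point: a random walk is not mean-stationary, but its first difference is, so the series you would actually feed to a stationary model is the transformed one, not the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=10_000))  # random walk: mean drifts over time

diff = np.diff(walk)  # first difference: back to i.i.d.-like noise
# The two halves of the differenced series have essentially the same mean,
# which is the mean-stationarity the modeling machinery expects.
gap = abs(diff[:5_000].mean() - diff[5_000:].mean())
print(gap)  # small, on the order of 1/sqrt(5000)
```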
|
11,973
|
Can someone explain the importance of mean stationarity in time series?
|
First, your mean estimates and your standard errors will be badly biased if you are using any of the inferential tools which assume i.i.d, meaning your results risk being spurious. This can even be true if your data are weakly stationary, but your study period is shorter than the time it takes your series to reach equilibrium after a disturbance.
Second, assuming that time series are stationary is more or less embodying the worldview that the past does not matter (e.g., the prevalence of COVID-19 today is completely independent of COVID-19 prevalence yesterday; the \$ per capita spent on addictive goods such as cigarettes this year is completely independent of the \$ per capita spent on them last year)… kinda unrealistic.
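A classic illustration of that spurious-results risk (my sketch, not part of the original answer): two independent random walks routinely show a large levels correlation, which vanishes once you work with the stationary differences.

```python
import numpy as np

rng = np.random.default_rng(11)
a = np.cumsum(rng.normal(size=5_000))  # two independent random walks:
b = np.cumsum(rng.normal(size=5_000))  # no real relationship between them

r_levels = np.corrcoef(a, b)[0, 1]                   # typically far from zero
r_diffs = np.corrcoef(np.diff(a), np.diff(b))[0, 1]  # close to zero
print(r_levels, r_diffs)
```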
|
11,974
|
Can someone explain the importance of mean stationarity in time series?
|
Stationary means that the statistics that describe the random process are constant. ‘A memoryless Markov process’ is another way to say stationary, as is saying that the probability generating function has no “feedback” terms - but if you recognized those words you might not be asking this question. FWIW “weakly stationary” isn't quite the same: a constant or knowable rate of change of the stats would be weakly stationary, as would something that averages out. It's a little more involved, so consider this fair warning that there's more to know in case that's part of the puzzle; describing everything that isn't stationary in detail would turn a simple answer into a complex one.
Why is stationarity important? The commonly used statistical formulae are crafted to use a data set to extract an imprecise description, with an estimable accuracy, of an otherwise unknown random process. The formulae assume that adding more samples increases the accuracy of the description by reducing the uncertainty. For that to hold, the sample mean has to converge to the true mean - i.e. the process has to be ergodic in the mean. If the random process itself is changing, e.g. the average value or the variance is changing, then an essential underlying assumption is invalid and you can't make a better estimate.
As a general “what happens”: if the mean is moving as a linear function of time, the computed mean will represent the mean at a weighted mean time, and the computed variance will be inflated. It is possible to compute an ‘optimal a posteriori’ (after the fact) estimate of a non-stationary process and then use that to extract meaningful stats, because the best estimate of the time function minimizes the variance. It's also easy to hypothesize some high-order time function and create a complex model that appears to be valid and predictive but in fact has no predictive power, because it modeled a snapshot of randomness, not an underlying time trend.
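A tiny numeric sketch of that inflation (editor-added): unit-variance noise around a linearly drifting mean reports a much larger standard deviation until the trend is estimated after the fact and removed.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(2_000)
x = 0.01 * t + rng.normal(size=2_000)  # drifting mean, true noise sd = 1

trend = np.polyval(np.polyfit(t, x, 1), t)  # a posteriori linear fit
resid = x - trend
print(x.std())      # inflated well above 1 by the moving mean
print(resid.std())  # close to the true noise sd of 1
```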
|
11,975
|
Can someone explain the importance of mean stationarity in time series?
|
Short and sweet:
The parameters need to be constant. If the series is not stationary, then the parameters that you estimate are going to be functions of time themselves. But the model assumes that they are constants, as such, you will estimate the average parameter value over the time-period. See Skander's answer for why, I won't dive into the math since he already did.
This presents at least 2 problems:
Your estimates for the true parameter value are likely wrong, because at any moment in time the parameter value is likely to be different from its average value. Therefore, any inference that you make from the data is likely wrong. This leads to spurious regressions/correlations.
You cannot use the model to predict the future. Since your parameter is now a function of time, and you don't know how it is evolving over time, any forecast that you make is complete (pardon my french) horseshit.
Getting to stationarity is actually pretty easy. We just need to difference until we have a stationary series. So just do that.
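For instance (a hedged Python sketch rather than a formal treatment), one difference turns a linear-trend series, whose mean is a function of time, into a series with a constant mean equal to the slope:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(1_000)
y = 0.5 * t + rng.normal(size=1_000)  # mean grows with time: not stationary

dy = np.diff(y)  # after one difference the mean is the constant slope, ~0.5
print(dy.mean())
```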
|
11,976
|
2 Sample Kolmogorov-Smirnov vs. Anderson-Darling vs Cramer-von-Mises
|
To cut a long story short: the Anderson-Darling test is generally assumed to be more powerful than the Kolmogorov-Smirnov test.
Have a glance at this article comparing various tests (of normality, but the results hold for comparing two distributions): Power Comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling Tests by Nornadiah Mohd Razali & Yap Bee Wah.
The Anderson-Darling test is much more sensitive to the tails of the distribution, whereas the Kolmogorov-Smirnov test is more aware of the center of the distribution.
To sum up, I would recommend you use the Anderson-Darling or possibly the Cramer-von Mises test, to get a much more powerful test.
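If you want to try all three on your own two samples, SciPy exposes each of them (a minimal sketch; `cramervonmises_2samp` assumes SciPy ≥ 1.7):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = rng.normal(size=200)  # drawn from the same distribution here

ks = stats.ks_2samp(x, y)               # Kolmogorov-Smirnov
ad = stats.anderson_ksamp([x, y])       # k-sample Anderson-Darling
cvm = stats.cramervonmises_2samp(x, y)  # Cramer-von Mises

# Each result carries a statistic and a (possibly capped) significance level.
print(ks.pvalue, ad.significance_level, cvm.pvalue)
```

Note that `anderson_ksamp` floors/caps its reported significance level (0.1% to 25%), so treat it as a rough guide rather than an exact p-value.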
|
11,977
|
2 Sample Kolmogorov-Smirnov vs. Anderson-Darling vs Cramer-von-Mises
|
Each of the three tests has better power against different alternatives; but on the other hand, all three exhibit varying degrees of test bias in some situations.
Broadly speaking, the Anderson-Darling test has better power against fatter tails than specified and the Kolmogorov-Smirnov has more power against deviations in the middle, with Cramer-von Mises in between the two but somewhat more akin to the Kolmogorov-Smirnov in that respect.
The kinds of alternatives many people find to be of interest tend to be picked up more often by the Anderson-Darling and the Cramer-von Mises test but your particular needs may be different.
The Anderson-Darling tends to suffer worse bias problems overall (for hypothesis tests, bias means that there are some alternatives you're even less likely to reject than the null -- which is not what you want from an omnibus goodness of fit test -- but it seems to be difficult to avoid in realistic situations).
A number of power studies have been done which include an array of goodness of fit tests; generally for the alternatives that they consider, the Anderson-Darling tends to come out best most often --- but if you're testing uniformity and trying to pick up say a beta(2,2) alternative, none of them do well, and the Anderson Darling is the worst.
|
11,978
|
Why does Central Limit Theorem break down in my simulation?
|
Let's recall, precisely, what the central limit theorem says.
If $X_1, X_2, \cdots, X_k$ are independent and identically distributed random variables with (shared) mean $\mu$ and standard deviation $\sigma$, then $\frac{X_1 + X_2 + \cdots + X_k - k\mu}{\sqrt{k}\,\sigma}$ converges in distribution to a standard normal distribution $N(0, 1)$ (*).
This is often used in the "informal" form:
If $X_1, X_2, \cdots, X_k$ are independent and identically distributed random variables with (shared) mean $\mu$ and standard deviation $\sigma$, then $X_1 + X_2 + \cdots + X_k$ converges "in distribution" to a normal distribution $N(k \mu, \sqrt{k} \sigma)$.
There's no good way to make that form of the CLT mathematically precise, since the "limit" distribution changes with $k$, but it's useful in practice.
When we have a static list of numbers like
4,3,5,6,5,3,10000000,2,5,4,3,6,5
and we are sampling by taking a number at random from this list, to apply the central limit theorem we need to be sure that our sampling scheme satisfies these two conditions of independence and identically distributed.
Identically distributed is no problem: each number in the list is equally likely to be chosen.
Independent is more subtle, and depends on our sampling scheme. If we are sampling without replacement, then we violate independence. It is only when we sample with replacement that the central limit theorem is applicable.
So, if we use with replacement sampling in your scheme, then we should be able to apply the central limit theorem. At the same time, you are right, if our sample is of size 5, then we are going to see very different behaviour depending on if the very large number is chosen, or not chosen in our sample.
So what's the rub? Well, the rate of convergence to a normal distribution is very dependent on the shape of the population we are sampling from, in particular, if our population is very skew, we expect it to take a long time to converge to the normal. This is the case in our example, so we should not expect that a sample of size 5 is sufficient to show the normal structure.
Above I repeated your experiment (with replacement sampling) for samples of size 5, 100, and 1000. You can see that the normal structure is emergent for very large samples.
(*) Note there are some technical conditions needed here, like finite mean and variance. They are easily verified to be true in our sampling from a list example.
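The experiment is easy to redo in Python (an editor sketch of the with-replacement scheme); the skewness of the sampling distribution shrinks toward zero as the sample size grows, which is the convergence described above:

```python
import numpy as np

pop = np.array([4, 3, 5, 6, 5, 3, 10_000_000, 2, 5, 4, 3, 6, 5])
rng = np.random.default_rng(0)

skews = []
for n in (5, 100, 1000):
    # 20,000 with-replacement samples of size n, reduced to their means
    means = rng.choice(pop, size=(20_000, n), replace=True).mean(axis=1)
    c = means - means.mean()
    skews.append((c ** 3).mean() / (c ** 2).mean() ** 1.5)

print(skews)  # decreasing toward 0 as n grows
```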
|
11,979
|
Why does Central Limit Theorem break down in my simulation?
|
In general, the size of each sample should be more than $5$ for the CLT approximation to be good. A rule of thumb is a sample of size $30$ or more. But, with the population of your first example, $5$ is OK.
pop <- c(4, 3, 5, 6, 5, 3, 4, 2, 5, 4, 3, 6, 5)
N <- 10^5
n <- 5
x <- matrix(sample(pop, size = N*n, replace = TRUE), nrow = N)
x_bar <- rowMeans(x)
hist(x_bar, freq = FALSE, col = "cyan")
f <- function(t) dnorm(t, mean = mean(pop), sd = sd(pop)/sqrt(n))
curve(f, add = TRUE, lwd = 2, col = "red")
In your second example, because of the shape of the population distribution (for one thing, it's far too skewed; read the comments by guy and Glen_b below), even samples of size $30$ won't give you a good approximation for the distribution of the sample mean using the CLT.
pop <- c(4, 3, 5, 6, 5, 3, 10000000, 2, 5, 4, 3, 6, 5)
N <- 10^5
n <- 30
x <- matrix(sample(pop, size = N*n, replace = TRUE), nrow = N)
x_bar <- rowMeans(x)
hist(x_bar, freq = FALSE, col = "cyan")
f <- function(t) dnorm(t, mean = mean(pop), sd = sd(pop)/sqrt(n))
curve(f, add = TRUE, lwd = 2, col = "red")
But, with this second population, samples of, say, size $100$ are fine.
pop <- c(4, 3, 5, 6, 5, 3, 10000000, 2, 5, 4, 3, 6, 5)
N <- 10^5
n <- 100
x <- matrix(sample(pop, size = N*n, replace = TRUE), nrow = N)
x_bar <- rowMeans(x)
hist(x_bar, freq = FALSE, col = "cyan")
f <- function(t) dnorm(t, mean = mean(pop), sd = sd(pop)/sqrt(n))
curve(f, add = TRUE, lwd = 2, col = "red")
|
11,980
|
Why does Central Limit Theorem break down in my simulation?
|
I'd just like to explain, using complex cumulant-generating functions, why everyone keeps blaming this on skew.
Let's write the random variable you're sampling as $\mu+\sigma Z$, where $\mu$ is the mean and $\sigma$ the standard deviation so $Z$ has mean $0$ and variance $1$. The cumulant-generating function of $Z$ is $-\frac{1}{2}t^2-\frac{i\gamma_1}{6}t^3+o(t^3)$. Here $\gamma_1$ denotes the skew of $Z$; we could write it in terms of the skew $\kappa_3$ of the original variable $\mu+\sigma Z$, viz. $\gamma_1=\sigma^{-3}\kappa_3$.
If we divide the sum of $n$ samples of $Z$'s distribution by $\sqrt{n}$, the result has cgf $$n\left(-\frac{1}{2}\left(\frac{t}{\sqrt{n}}\right)^2-\frac{i\gamma_1}{6}\left(\frac{t}{\sqrt{n}}\right)^3\right)+o(t^3)=-\frac{1}{2}t^2-\frac{i\gamma_1}{6\sqrt{n}}t^3+o(t^3).$$For a Normal approximation to be valid at large enough $t$ for the graph to look right, we need sufficiently large $n$. This calculation motivates $n\propto\gamma_1^2$. The two samples you considered have very different values of $\gamma_1$.
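An empirical check of that $\gamma_1/\sqrt{n}$ decay (my sketch, using a centered exponential with known skew $\gamma_1 = 2$ rather than the questioner's list):

```python
import numpy as np

def skewness(a):
    c = a - a.mean()
    return (c ** 3).mean() / (c ** 2).mean() ** 1.5

rng = np.random.default_rng(3)
z = rng.exponential(size=(50_000, 64)) - 1.0  # centered Exp(1): skew = 2

s1 = skewness(z[:, 0])               # roughly 2 for a single draw
s64 = skewness(z.sum(axis=1) / 8.0)  # sum of 64, scaled by sqrt(64)
print(s1, s64)  # s64 comes out near s1 / sqrt(64)
```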
|
11,981
|
Why does Central Limit Theorem break down in my simulation?
|
Short answer is, you don't have a big enough sample to make central limit theorem apply.
|
11,982
|
Are heat maps "one of the least effective types of data visualization"?
|
There is no such thing as a "best" plot for this or for that.
How you plot your data depends on the message you want to convey. Commonly used plots have the advantage that users are more likely to be able to read them. Nevertheless, that does not mean that they are necessarily the best choice.
Regarding heat maps, I've ordered my response by the supposed arguments against them.
1) If you don't trust color as an encoding channel, use brightness instead, with a scale encompassing dark gray to light gray "color" tones. Most often, you want to bin continuous variables (also see 5), so you can keep the number of colors low and make it easier to decode by users. This is not a must though. Take a look at this example, in which the continuous variable is not binned.
2) Certainly, they should not be used as an alternative to look up precise values. Heat maps should primarily be used to illustrate patterns, not to replace tables.
3,4) I don't see how this would be related to heat maps only.
5) Heat maps are ideally but not necessarily used with discrete variables. For continuous variables, heat maps can be used as a sort of two-dimensional histogram or bar chart, with proper binning, as well as brightness as an encoding channel.
|
11,983
|
Are heat maps "one of the least effective types of data visualization"?
|
One cannot say the heat map is the least effective type of visualization; I would rather say it depends on your requirements. In some cases heat maps are very useful. Let's say you have to make a report on crime in a country, state-wise (or city-wise). Here you will have a huge data set which can have time dependencies.
Similarly, let's say you have to prepare a report on electricity consumption for cities. In these cases you can easily visualize the data through a heat map. It will make more sense and be less cumbersome.
So, in a nutshell, if you have lots of continuous data and you want to make a report that can pinpoint the answers quickly, then a heat map is best.
|
11,984
|
Are heat maps "one of the least effective types of data visualization"?
|
Critique 1 in the original question covers the biggest drawback - that it is difficult for someone reading the heat map to decode the quantitative information that is conveyed. Consider an xy-scatter plot or dot plot, where the underlying quantity is directly related to the distance on the chart - very straightforward for interpretation.
In a heat map, on the other hand, the person reading the chart is at liberty to interpret 10% 'redder' or 'darker' to their own satisfaction. On top of that is the problem of differing abilities of people to discern colour and shade to begin with. These are genuine disadvantages, but they are not universally fatal.
The third critique, by contrast, seems to inadvertently identify an occasion when heat maps are especially useful - when the data is clustered on a 2D plane so that similar values in a third dimension show as patches of a particular shade or colour. So while heat maps are ineffective at some things, they are useful for others, and they should stay in your bag, in the same way that golfers often carry pitching wedges or similar despite their being useless for driving or putting, or carpenters don't disregard hammers because they are no good for cutting wood.
In general, visualising data should be seen as an iterative activity that will take some time, as you try a number of visualisations that bring out the important features of the data, including trying more than one kind of visualisation, and then experimenting to find the best settings within particular choices. Nor should it be assumed that the result will be a single visualisation; sometimes several visualisations of the data will be needed to highlight multiple important features. In this context, there will be times when, for particular features of particular data sets, the heat map is the most effective choice, and communicating clusters as described may be one of those times. Overall, there will be frequent occasions where a single visualisation cannot do everything, and more than one will be required.
|
11,985
|
Are heat maps "one of the least effective types of data visualization"?
|
As others have already said, it is improper to claim that heat maps are always ineffective. Actually, they are quite effective in many instances.
For example, if you want to visualize 4D data, it is simple enough to plot the first three dimensions in most plotting software. However, the whole concept of 4D is pretty difficult to conceptualize at all. What is the "4th" direction/dimension?
That's where a heat map may be effective, because it lets you plot the first three dimensions on coordinate axes, while the fourth can be visualized by stacking a heat map onto your plotted plane (or line, but that's less likely).
Bottom line is that you need context. What are you looking for in your visualization? Also, as a fellow self-teacher, I can tell you that these online courses tend to be very trivial and unhelpful. You are much better off only using them when you are looking for information/help on specific topics rather than looking to be taught about a whole subject.
Best of luck anyway though.
|
11,986
|
Are heat maps "one of the least effective types of data visualization"?
|
By nature, a heat map displays data with two continuous independent variables (or, not quite equivalently, one independent variable from a two-dimensional vector space), and one continuous dependent variable. For data of that type, a heat map is definitely one of the most effective types of data visualisation. Yes, it has its problems, but that's inevitable: you really have only two dimensions to work with and a three-dimensional space cannot be mapped to that in a structure-preserving way, therefore you need a hack like mapping one dimension to colour or drawing contour lines etc..
If the independent variables are categorical, the heat map immediately makes much less sense: there's generally no reason why a categorical variable would map onto a real axis. In fact a categorical variable, by definition, does not come with any pre-determined topology, or we might say, comes with the discrete topology. Now unlike $\mathbb{R}^2$, which is only homeomorphic to another two-dimensional space, the cartesian product $X\times Y$ of two discrete spaces is actually homeomorphic to any discrete space of cardinality $|X| \cdot |Y|$, which is finite for a categorical variable – in other words, the cartesian product of two categorical variables can be considered as a single categorical variable! And in that light, you can just as well use other plots, which don't have the problems of a heat map.
If you find yourself in a situation where a heat map over two categorical variables appears useful, it's an indication that these are probably not really categorical variables, but rather quantised continuous variables.
|
11,987
|
Are heat maps "one of the least effective types of data visualization"?
|
Heat maps are great at providing a simple view of multiple variables from a time-series perspective: the data can be absolute changes over time, standardized using z-scores or other means to examine variables with different measurement intervals, or relative changes of subgroups. They provide a visually striking view in which one can spot correlations (or inverses), and they replace a multitude of graphs. They can also be used in preprocessing to assess possible dimensionality reduction, i.e. factoring or PCA.
The bad: intervening variables and other factors can become hidden and passed over when using this approach to spot correlations. The same hidden aspects occur with line graphs; however, given the large number of variables, my experience is that heat maps bring so much information that a user does not consider the intervening aspects or other hidden factors.
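The standardization step mentioned above is the usual fix for putting variables with different units on one color scale; a brief sketch (Python with numpy assumed; the indicator names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
# 24 months of four hypothetical indicators on wildly different scales.
data = np.column_stack([
    rng.normal(2_000, 300, 24),   # sales volume
    rng.normal(3.5, 0.4, 24),     # unemployment rate, %
    rng.normal(80.0, 5.0, 24),    # confidence index
    rng.normal(0.02, 0.01, 24),   # monthly growth
])

# Column-wise z-scores: every variable now has mean 0 and std 1,
# so a single diverging colormap treats them comparably.
z = (data - data.mean(axis=0)) / data.std(axis=0)
```

Without this step the sales column would dominate the color scale and the growth column would look flat.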
This is from a data scientist with a progressive economist's perspective, with 20 years in the field producing data and tasked with educating the general public about such data.
|
11,988
|
Are heat maps "one of the least effective types of data visualization"?
|
Heatmaps are advantageous over scatterplots when there are too many data points to view on a scatterplot. This can be mitigated in a scatterplot using translucent data points but beyond a certain threshold it becomes better to summarize the data.
In this blog post a compelling example of scatterplots being hard to interpret is given.
A scatterplot can only visually represent density up to a certain threshold - the threshold of "points everywhere"...
Plot density, not points
The solution is to plot the binned point density rather than the points themselves. We already know this method in one dimension as the histogram.
In two dimensions, there are multiple ways of doing it. The bin shapes can be taken from any method of uniformly tiling the plane, such as squares or hexagons. For each tile, the number of data points inside the tile are counted. The tile is then assigned a color according to the number of points.
A similar statement from the ggplot2 docs on heatmap of 2d bin counts:
This is a useful alternative to geom_point() in the presence of overplotting.
In the docs of geom_point():
Overplotting
The biggest potential problem with a scatterplot is overplotting: whenever you have more than a few points, points may be plotted on top of one another. This can severely distort the visual appearance of the plot. There is no one solution to this problem, but there are some techniques that can help. You can add additional information with geom_smooth(), geom_quantile() or geom_density_2d(). If you have few unique x values, geom_boxplot() may also be useful.
Alternatively, you can summarise the number of points at each location and display that in some way, using geom_count(), geom_hex(), or geom_density2d().
Another technique is to make the points transparent (e.g. geom_point(alpha = 0.05)) or very small (e.g. geom_point(shape = ".")).
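The counting step behind `geom_bin2d()`/`geom_hex()` is independent of any plotting library. A minimal sketch of square binning (Python with numpy assumed; the actual drawing call is omitted):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                      # far too many points for a readable scatterplot
x = rng.normal(0.0, 1.0, n)
y = 0.5 * x + rng.normal(0.0, 1.0, n)

# Square binning: tile the plane, count the points in each tile.
counts, xedges, yedges = np.histogram2d(x, y, bins=25)

# Every point lands in exactly one tile, so nothing is lost to overplotting;
# the count matrix is what gets mapped to a color scale.
densest = np.unravel_index(counts.argmax(), counts.shape)
```

`counts` is exactly what `plt.imshow`, `plt.pcolormesh`, or ggplot2's fill aesthetic would color; hexagonal tiling (`geom_hex`, `plt.hexbin`) differs only in the tile shape.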
|
11,989
|
How do ensemble methods outperform all their constituents?
|
It's not guaranteed. As you say, the ensemble could be worse than the individual models. For example, taking the average of the true model and a bad model would give a fairly bad model.
The average of $k$ models is only going to be an improvement if the models are (somewhat) independent of one another. For example, in bagging, each model is built from a random subset of the data, so some independence is built in. Or models could be built using different combinations of features, and then combined by averaging.
Also, model averaging only works well when the individual models have high variance. That's why a random forest is built using very large trees. On the other hand, averaging a bunch of linear regression models still gives you a linear model, which isn't likely to be better than the models you started with (try it!)
Other ensemble methods, such as boosting and blending, work by taking the outputs from individual models, together with the training data, as inputs to a bigger model. In this case, it's not surprising that they often work better than the individual models, since they are in fact more complicated, and they still use the training data.
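The variance-reduction claim can be made quantitative. If the $k$ models' errors each have variance $\sigma^2$ and pairwise correlation $\rho$, the averaged model's error variance is $\rho\sigma^2 + (1-\rho)\sigma^2/k$: only the uncorrelated part shrinks, which is why (near-)independence matters. A hedged numerical check (Python with numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(7)
k, reps = 20, 100_000
rho, sigma = 0.3, 1.0

# Draw k equally correlated, zero-mean "model errors" per repetition.
cov = sigma**2 * ((1 - rho) * np.eye(k) + rho * np.ones((k, k)))
errors = rng.multivariate_normal(np.zeros(k), cov, size=reps)

single_var = errors[:, 0].var()             # one model: about sigma^2 = 1.0
ensemble_var = errors.mean(axis=1).var()    # average of k models
theory = rho * sigma**2 + (1 - rho) * sigma**2 / k   # 0.335 here
```

With $\rho=0$ the ensemble variance would be $\sigma^2/k = 0.05$; at $\rho=0.3$ it bottoms out near $0.335$ no matter how many models are added, which is exactly the point about independence above.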
|
11,990
|
How do ensemble methods outperform all their constituents?
|
In your example, your ensemble of two models could be worse than a single model itself. But your example is artificial; we generally build more than two models in an ensemble.
There is no absolute guarantee that an ensemble model performs better than an individual model, but if you build many of them and your individual classifiers are weak, your overall performance should be better than that of an individual model.
In machine learning, training multiple models generally outperforms training a single model. That's because you have more parameters to tune.
|
11,991
|
How do ensemble methods outperform all their constituents?
|
I just want to throw in something that is seldom discussed in this context; it should give you food for thought.
Ensemble also works with humans!
It has been observed that averaging human predictions gives better predictions than any individual prediction. This is known as the wisdom of the crowd.
Now, you could argue that it is because some people have different information, so you are effectively averaging information. But no, this is true even for tasks such as guessing the number of beans in a jar.
There are plenty of books and experiments written on this, and the phenomenon still puzzles researchers.
This being said, as @Flounderer pointed out, the real gains come from so-called unstable models such as decision trees, where each observation usually has an impact on the decision boundary. More stable ones like SVMs do not gain as much, because resampling usually does not affect the support vectors much.
|
11,992
|
How do ensemble methods outperform all their constituents?
|
It is actually quite possible for single models to be better than ensembles.
Even if there are no points in your data where some of your models are overestimating and some are underestimating (in which case you might hope that the average error would be negated), some of the most popular loss functions (like mean squared loss) penalize a single big deviation more than a number of moderate deviations. If the models you are averaging are somewhat different, you might hope that the variance becomes smaller, as averaging kills the outstanding deviations. That is probably the explanation.
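For convex losses this intuition is exactly Jensen's inequality: the loss of the averaged prediction is never larger than the average of the individual losses. A small check (Python with numpy assumed; the five "models" are just the target plus independent noise):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=1_000)                # targets
preds = y + rng.normal(size=(5, 1_000))   # 5 models with independent unit-variance errors

def mse(p):
    return ((p - y) ** 2).mean()

avg_of_mses = np.mean([mse(p) for p in preds])   # about 1.0
mse_of_avg = mse(preds.mean(axis=0))             # about 1/5, by variance reduction
```

Jensen guarantees `mse_of_avg <= avg_of_mses` for any set of predictions; the factor-of-five gap here comes from the errors being independent.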
|
11,993
|
How do ensemble methods outperform all their constituents?
|
Yes, it might be the case, but the idea of ensembling is to train simpler models to avoid overfitting while capturing different characteristics of the data in different ensemble members. Of course there is no guarantee that an ensemble model will outperform a single model trained on the same training data.
The outperformance can be gained by combining ensemble models and boosting (e.g. AdaBoost). In boosting you train each successive ensemble member by assigning weights to each data point and updating them according to the error. So think of it as a coordinate descent algorithm: it allows the training error to go down with each iteration while maintaining a constant average model complexity. Overall this improves performance. There are many
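The reweighting loop described above can be sketched with threshold stumps on a 1-D toy problem. This is an illustrative AdaBoost-style sketch, not a tuned implementation; the data and round count are made up.

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=3):
    """Toy AdaBoost on a 1-D feature with threshold stumps; y in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                  # uniform point weights to start
    stumps, alphas = [], []
    for _ in range(n_rounds):
        best = None
        for thr in np.unique(X):             # exhaustive stump search
            for sign in (1, -1):
                pred = np.where(X < thr, sign, -sign)
                err = w[pred != y].sum()     # weighted training error
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = np.where(X < thr, sign, -sign)
        w = w * np.exp(-alpha * y * pred)    # upweight the misclassified points
        w = w / w.sum()
        stumps.append((thr, sign))
        alphas.append(alpha)
    def predict(Xnew):
        s = sum(a * np.where(Xnew < t, sg, -sg)
                for a, (t, sg) in zip(alphas, stumps))
        return np.sign(s)
    return predict

X = np.arange(8.0)
y = np.array([1, 1, 1, -1, -1, -1, 1, 1])    # no single stump separates this
predict = adaboost_stumps(X, y)
train_acc = float((predict(X) == y).mean())
print("training accuracy:", train_acc)
```

No single stump can fit the +,+,+,-,-,-,+,+ pattern, but after a few reweighting rounds the weighted vote of stumps does, which is the "training error goes down with each iteration" behavior described above.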
|
11,994
|
How do I know my k-means clustering algorithm is suffering from the curse of dimensionality?
|
It helps to think about what The Curse of Dimensionality is. There are several very good threads on CV that are worth reading. Here is a place to start: Explain “Curse of dimensionality” to a child.
I note that you are interested in how this applies to $k$-means clustering. It is worth being aware that $k$-means is a search strategy to minimize (only) the squared Euclidean distance. In light of that, it's worth thinking about how Euclidean distance relates to the curse of dimensionality (see: Why is Euclidean distance not a good metric in high dimensions?).
The short answer from these threads is that the volume (size) of the space increases at an incredible rate relative to the number of dimensions. Even $10$ dimensions (which doesn't seem like it's very 'high-dimensional' to me) can bring on the curse. If your data were distributed uniformly throughout that space, all objects become approximately equidistant from each other. However, as @Anony-Mousse notes in his answer to that question, this phenomenon depends on how the data are arrayed within the space; if they are not uniform, you don't necessarily have this problem. This leads to the question of whether uniformly-distributed high-dimensional data are very common at all (see: Does “curse of dimensionality” really exist in real data?).
I would argue that what matters is not necessarily the number of variables (the literal dimensionality of your data), but the effective dimensionality of your data. Under the assumption that $10$ dimensions is 'too high' for $k$-means, the simplest strategy would be to count the number of features you have. But if you wanted to think in terms of the effective dimensionality, you could perform a principal component analysis (PCA) and look at how the eigenvalues drop off. It is quite common that most of the variation exists in a couple of dimensions (which typically cut across the original dimensions of your dataset). That would imply you are less likely to have a problem with $k$-means in the sense that your effective dimensionality is actually much smaller.
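Here is a small sketch of that eigenvalue check (synthetic data; the sizes and noise level are made up): data that nominally has $10$ features but really lives on a 2-D plane shows almost all of its variance in the first two principal components.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 points that really live on a 2-D plane embedded in 10 dimensions,
# plus a little isotropic noise.
latent = rng.normal(size=(1000, 2))
embed = rng.normal(size=(2, 10))
X = latent @ embed + 0.05 * rng.normal(size=(1000, 10))

# Eigenvalues of the covariance matrix = variance along each PC.
X = X - X.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(X.T))[::-1]   # sort descending
explained = eigvals / eigvals.sum()
print(np.round(explained, 3))   # variance mass concentrated in the first two PCs
```

A sharp drop-off like this suggests the effective dimensionality is small, even when the raw feature count is not.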
A more involved approach would be to examine the distribution of pairwise distances in your dataset along the lines @hxd1011 suggests in his answer. Looking at simple marginal distributions will give you some hint of the possible uniformity. If you normalize all the variables to lie within the interval $[0,\ 1]$, the pairwise distances must lie within the interval $[0,\ \sqrt{D}]$, where $D$ is the number of dimensions. Distances that are highly concentrated will cause problems; on the other hand, a multi-modal distribution may be hopeful (you can see an example in my answer here: How to use both binary and continuous variables together in clustering?).
However, whether $k$-means will 'work' is still a complicated question. Under the assumption that there are meaningful latent groupings in your data, they don't necessarily exist in all of your dimensions or in constructed dimensions that maximize variation (i.e., the principal components). The clusters could be in the lower-variation dimensions (see: Examples of PCA where PCs with low variance are "useful"). That is, you could have clusters with points that are close within and well-separated between on just a few of your dimensions or on lower-variation PCs, but aren't remotely similar on high-variation PCs, which would cause $k$-means to ignore the clusters you're after and pick out faux clusters instead (some examples can be seen here: How to understand the drawbacks of K-means).
|
11,995
|
How do I know my k-means clustering algorithm is suffering from the curse of dimensionality?
|
My answer is not limited to K-means; we can check for the curse of dimensionality in any distance-based method. K-means is based on a distance measure (for example, Euclidean distance).
Before running the algorithm, we can check the distribution of distances, i.e., the distances between all pairs of data points. If you have $N$ data points, there are $0.5\cdot N\cdot(N-1)$ pairwise distances. If the data set is too large, we can check a sample of them.
If we have the curse of dimensionality problem, what you will see is that these values are very close to each other. This seems very counter-intuitive, because it means everything is about equally close to (or far from) everything else, and the distance measure is basically useless.
Here is a simulation to show such counter-intuitive results. If all of the features are uniformly distributed and there are too many dimensions, every normalized squared distance should be close to $\frac 1 6$, which comes from $\int_{x_i=0}^1\int_{x_j=0}^1 (x_i-x_j)^2 \,dx_i\, dx_j = \frac 1 6$. Feel free to change the uniform distribution to other distributions. For example, if we change to a normal distribution (change runif to rnorm), it will converge to another number as the number of dimensions grows.
Here is the simulation for dimensions 1 to 500; the features are uniformly distributed on $[0,1]$.
# empty canvas; the dashed red line marks the theoretical limit 1/6
plot(0, type = "n", xlim = c(0, 0.5), ylim = c(0, 50),
     xlab = "squared distance / p", ylab = "density")
abline(v = 1/6, lty = 2, col = 2)
grid()
set.seed(1)
n_data <- 1e3
for (p in c(1:5, 10, 15, 20, 25, 50, 100, 250, 500)) {
  x <- matrix(runif(n_data * p), ncol = p)   # n_data points in p dimensions
  all_dist <- as.vector(dist(x))^2 / p       # normalized squared pairwise distances
  lines(density(all_dist))                   # density narrows around 1/6 as p grows
}
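The same concentration can be checked numerically without plotting; here is a Python sketch of the $p=500$ case (sizes are arbitrary), using the Gram-matrix identity for squared distances.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 500
x = rng.uniform(size=(n, p))

# Squared pairwise distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b,
# normalized by the number of dimensions p.
sq = (x ** 2).sum(axis=1)
d2 = (sq[:, None] + sq[None, :] - 2 * x @ x.T) / p
mask = ~np.eye(n, dtype=bool)          # drop the zero diagonal

print(d2[mask].mean())  # close to 1/6
print(d2[mask].std())   # small: all pairs are about equally far apart
```

The standard deviation shrinks like $1/\sqrt{p}$, which is exactly the "everything is equally far from everything" effect the densities above show.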
|
11,996
|
Sample size too large? [duplicate]
|
I always thought larger sample sizes were good.
Almost always, though there are situations where they don't help much. However, as sample sizes become quite large, the particular aspects of the problem that are of most concern change.
Then I read something somewhere about how when sample sizes are larger, it's easier to find significant p-values when they're not really there (i.e., false positives), because significance gets exaggerated.
As stated, this is untrue, though there are some things that may be of concern.
Let's start with the basic assertion: Large samples don't prevent hypothesis tests from working exactly as they are designed to. [If you're able to, ask the source of the statement for some kind of reason to accept this claim, such as evidence that it's true (whether by algebraic argument, simulation, logical reasoning or whatever - or even a reference). This will likely lead to a slight change in the statement of the claim.]
The problem isn't generally false positives, but true positives -- in situations where people don't want them.
People often make the mistaken assumption that statistical significance always implies something practically meaningful. In large samples, it may not.
As sample sizes get very large even very tiny differences from the situation specified in the null may become detectable. This is not a failure of the test, that's how it's supposed to work!
[It sometimes seems to me to border on the perverse that while almost everyone will insist on consistency for their tests, so many will complain that something is wrong with hypothesis testing when they actually get it.]
When this bothers people it's an indication that hypothesis testing (or at least the form of it they were using) didn't address the actual research question they had. In some situations this is addressed better by confidence intervals. In others, it's better addressed by calculation of effect sizes. In other situations equivalence tests might better address what they want. In other cases they might need other things.
A caveat: If some of the assumptions don't hold, you might in some situations get an increase in false positives as sample size increases, but that's a failure of the assumptions, rather than a problem with large-sample hypothesis testing itself.
In large samples, issues like sampling bias can completely dominate effects from sampling variability, to the extent that they're the only thing that you see. Greater effort is required to address issues like this, because small issues that produce effects that may be very small compared to sampling variation in small samples may dominate in large ones. Again, the impact of that kind of thing is not a problem with hypothesis testing itself, but in the way the sample was obtained, or in treating it as a random sample when it actually wasn't.
I'm currently working with a large sample size (around 5,000 cases) where I did a t-test and the p-value turned out to be less than 0.001. What test(s) can I use to determine whether this is a valid p-value or whether this happened because the sample size was large.
Some issues to consider:
Significance level: in very large samples, if you're using the same significance levels that you would in small samples, you're not balancing the costs of the two error types; you can reduce type I error substantially with little detriment to power at effect sizes you care about - it would be odd to tolerate relatively high type I error rates if there's little to gain. Hypothesis tests in large samples would sensibly be conducted at substantially smaller significance levels, while still retaining very good power (why would you have power of 99.99999% if you can get power of say 99.9% and drop your type I error rate by a factor of 10?).
Validity of p-value: You may like to address the robustness of your procedure to potential failure of assumptions; this is not addressed by hypothesis testing of assumptions on the data. You may also like to consider possible issues related to things like sampling biases (e.g. do you really have a random sample of the target population?)
Practical significance: compute CIs for actual differences from the situation under the null in the case of say a two-sample t-test, look at a CI for the difference in means* - it should exclude 0, but is it so small you don't care about it?
* (Or, if it's more relevant to your situation, perhaps a calculation of effect size.)
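A sketch of that CI check with a deliberately huge sample and a practically trivial true difference (all numbers invented; the normal approximation stands in for the t distribution at this sample size): the p-value is tiny, yet the confidence interval shows the effect is far too small to care about.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
a = rng.normal(0.00, 1, n)        # group means differ by a trivial 0.01 sd
b = rng.normal(0.01, 1, n)

diff = b.mean() - a.mean()
se = math.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
z = diff / se
p = math.erfc(abs(z) / math.sqrt(2))     # two-sided p, normal approximation
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"p = {p:.2e}")
print(f"95% CI for the difference: ({lo:.4f}, {hi:.4f})")  # excludes 0, but tiny
```

This is the large-sample situation described above: the test is working exactly as designed, and it is the interval, not the p-value, that tells you whether the difference matters.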
One way to reassure yourself about your own test would be to carry out (before the test, and indeed hopefully before you have data) a study of the power at some small-but-relevant-to-your-application effect size; if you have very good power then, and reasonably low type I error rate, then you would nearly always be making the right decision when the effect size is at least that large and nearly always be making the right decision when the effect size was 0. The only region in which you were not nearly always making the correct choice would be in the small window of effect sizes that were very small (ones you didn't have a strong interest in rejecting), where the power curve is increasing from $\alpha$ to whatever it was at your small-effect-size that you did your power calculation at.
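A sketch of that kind of power calculation for a two-sided two-sample z-test (normal approximation with unit variances; the effect size, sample sizes, and the small $\alpha$ are made-up illustrations):

```python
from statistics import NormalDist

nd = NormalDist()

def power_two_sample(delta, n, alpha=0.001):
    """Approximate power of a two-sided two-sample z-test with per-group
    size n, unit variances, and true standardized mean difference delta."""
    se = (2.0 / n) ** 0.5
    z_crit = nd.inv_cdf(1 - alpha / 2)       # two-sided rejection cutoff
    shift = delta / se                       # noncentrality of the test statistic
    return nd.cdf(-z_crit - shift) + (1 - nd.cdf(z_crit - shift))

for n in (100, 1_000, 100_000):
    print(n, round(power_two_sample(0.1, n), 4))
```

Even at a stringent $\alpha = 0.001$, power at a small effect size like $0.1$ becomes essentially $1$ once $n$ is very large, which is why trading type I error for a little power is so cheap in big samples.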
I'm not a statistics expert, so please pardon any "newb-ness" evident in my post.
The entire point of this site is to generate good questions and good answers, and the question is quite good. You shouldn't apologize for using the site for exactly what it's here for. [However, aspects of it are addressed in other questions and answers on the site. If you look down the 'Related' column at the right hand side of this page you'll see a list of links to somewhat similar questions (as judged by an automatic algorithm). At least a couple of the questions in that list are highly relevant, in a way that may have altered the form or emphasis in your question, but the basic question of the truth of the statement itself - relating to the possible occurrence of false positives - would presumably remain, so even if you had pursued those questions, you'd presumably still need to ask the main one]
e.g. see this question; it has $n$ of about a hundred thousand.
One of the data sets in one of the other questions in the sidebar has sample size in the trillions. That is a big sample. In that kind of situation sampling variation (and so hypothesis testing) generally becomes completely irrelevant.
|
11,997
|
What are the practical uses of Neural ODEs?
|
TL;DR: For time series and density modeling, neural ODEs offer some benefits that we don't know how to get otherwise. For plain supervised learning, there are potential computational benefits, but for practical purposes they probably aren't worth using yet in that setting.
To answer your first question:
Is there something NeuralODEs do that "conventional" Neural Networks
cannot?
Neural ODEs differ in two ways from standard nets:
They represent a different set of functions, which can be good or bad depending on what you're modeling.
We have to approximate their exact solution, which gives more freedom in how to compute the answer, but adds complexity.
I'd say the clearest setting where neural ODEs help is building continuous-time time series models, which can easily handle data coming at irregular intervals. However, ODEs can only model deterministic dynamics, so I'm more excited by generalization of these time-series models to stochastic differential equations.
If you're modeling data sampled at regular time intervals (like video or audio), I think there's not much advantage, and standard approaches will probably be simpler and faster.
Another setting where they have an advantage is in building normalizing flows for density modeling. The bottleneck in normalizing flows is keeping track of the change in density, which is slow (O(D^3)) for standard nets. That's why discrete-time normalizing flow models like Glow or Real-NVP have to restrict the architectures of their layers, for example only updating half the units depending on the other half. In continuous time, it's easier to track the change in density, even for unrestricted architectures. That's what the FFJORD paper is about. Since then, Residual Flows were developed, which are discrete time flows that can also handle unrestricted architectures, with some caveats.
For standard deep learning, there are two potential big advantages:
Constant memory cost at training time. Before neural ODEs there was already some work showing we can reduce the memory cost of computing reverse-mode gradients of neural networks if we could 'run them backwards' from the output, but this required restricting the architecture of the network. The nice thing about neural ODEs is that you can simply run their dynamics backwards to reconstruct the original trajectory. In both cases, compounding numerical error could be a problem in some cases, but we didn't find this to be a practical concern.
Adaptive time cost. The idea is that since we're only approximating an exact answer, sometimes we might only need a few iterations of our approximate solver to get an acceptably good answer, and so could save time.
Both of these potential advantages are shared by Deep Equilibrium Models, and they've already been scaled up to transformers. But in both cases, these models so far in practice have tended to be slower overall than standard nets, because we don't yet know how to regularize these models to be easy to approximate.
To answer your second question:
Is there something "conventional" Neural Networks do that NeuralODEs
cannot do?
Conventional nets can fit non-homeomorphic functions, for example functions whose output has a smaller dimension than their input, or that change the topology of the input space. There was a nice paper from Oxford pointing out these issues, and showing that you can also fix it by adding extra dimensions.
Of course, you could handle this by composing ODE nets with standard network layers.
Conventional nets can be evaluated exactly with a fixed amount of computation, and are typically faster to train. Plus, with standard nets you don't have to choose an error tolerance for a solver.
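The fixed-computation point can be made concrete with the well-known residual-net/Euler analogy: a residual block computes $x + f(x)$, while an explicit Euler step of $\dot{x} = f(x)$ computes $x + h\,f(x)$; a neural ODE lets a solver choose the steps and the error tolerance. Here is a toy sketch with made-up linear dynamics whose exact solution is known:

```python
import numpy as np

# Toy vector field: f(x) = Ax rotates the state.
A = np.array([[0.0, -1.0], [1.0, 0.0]])

def f(x):
    return A @ x

def euler(x, t1, n_steps):
    h = t1 / n_steps
    for _ in range(n_steps):
        x = x + h * f(x)          # one "residual block" per step
    return x

x0 = np.array([1.0, 0.0])
exact = np.array([np.cos(1.0), np.sin(1.0)])   # closed-form solution at t = 1
errors = [np.linalg.norm(euler(x0, 1.0, n) - exact) for n in (4, 16, 64, 256)]
for n, e in zip((4, 16, 64, 256), errors):
    print(f"{n:4d} steps -> error {e:.5f}")
```

More steps (deeper "networks") buy more accuracy; an adaptive solver makes that trade-off automatically, whereas a conventional net always spends a fixed amount of computation.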
|
11,998
|
Difference between regression analysis and curve fitting
|
I doubt that there is a clear and consistent distinction across statistically minded sciences and fields between regression and curve-fitting.
Regression without qualification implies linear regression and least-squares estimation. That doesn't rule out other or broader senses: indeed once you allow logit, Poisson, negative binomial regression, etc., etc. it gets harder to see what modelling is not regression in some sense.
Curve-fitting does literally suggest a curve that can be drawn on a plane or at least in a low-dimensional space. Regression is not so bounded and can predict surfaces in a several dimensional space.
Curve-fitting may or may not use linear regression and/or least squares. It might refer to fitting a polynomial (power series) or a set of sine and cosine terms, or in some other way actually qualify as linear regression in the key sense of fitting a functional form linear in the parameters. Indeed, curve-fitting via nonlinear regression is regression too.
The term curve-fitting could be used in a disparaging, derogatory, deprecatory or dismissive sense ("that's just curve fitting!") or (almost the complete opposite) it might refer to fitting a specific curve carefully chosen with specific physical (biological, economic, whatever) rationale or tailored to match particular kinds of initial or limiting behaviour (e.g. being always positive, bounded in one or both directions, monotone, with an inflexion, with a single turning point, oscillatory, etc.).
One of several fuzzy issues here is that the same functional form can be at best empirical in some circumstances and excellent theory in others. Newton taught that trajectories of projectiles can be parabolic, and so naturally fitted by quadratics, whereas a quadratic fitted to age dependency in the social sciences is often just a fudge that matches some curvature in the data. Exponential decay is a really good approximation for radioactive isotopes and a sometimes not too crazy guess for the way that land values decline with distance from a centre.
Your example gets no explicit guesses from me. Much of the point here is that with a very small set of data and precisely no information on what the variables are or how they are expected to behave it could be irresponsible or foolish to suggest a model form. Perhaps the data should rise sharply from (0, 0) and then approach (1, 1), or perhaps something else. You tell us!
Note. Neither regression nor curve-fitting is limited to single predictors or single parameters (coefficients).
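To illustrate the "linear in the parameters" point above: fitting a quadratic curve is ordinary least-squares linear regression once the design matrix is built. A small numpy sketch with made-up data (the true coefficients 2, −3, 4 and the noise level are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 - 3.0 * x + 4.0 * x**2 + rng.normal(scale=0.05, size=x.size)

# The curve y = b0 + b1*x + b2*x^2 is nonlinear in x but linear in the
# coefficients, so "curve-fitting" here is just least squares on the
# design matrix [1, x, x^2].
X = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # close to the true [2, -3, 4]
```

The same machinery fits sine/cosine bases or any other functional form that stays linear in its coefficients.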
|
11,999
|
Difference between regression analysis and curve fitting
|
In addition to @NickCox's excellent answer (+1), I wanted to share my subjective impression on this somewhat fuzzy terminology topic. I think that a rather subtle difference between the two terms lies in the following. On one hand, regression often, if not always, implies an analytical solution (reference to regressors implies determining their parameters, hence my argument about analytical solution). On the other hand, curve fitting does not necessarily imply producing an analytical solution and, IMHO, is often used as an exploratory approach.
|
12,000
|
Difference between regression analysis and curve fitting
|
As there already seems to be an adequate array of explanations of Regression Analysis vs Curve Fitting, I’ll leave that alone. However, there is an additional question buried in the OP’s original question. There’s very little ‘given data’, but he asked if someone could suggest a correlation formula, so I’ll add my 2 cents.
I don’t have any experience with ROC Curves etc..., however, plotting the data gives a strong indication that it’s a First-Order System $\frac{1}{\tau\centerdot s+1}$ (in Laplace terminology) responding to a Step-Input $\frac{1}{s}$ (in Laplace terminology). Obviously, the time constant is very small, yielding an extremely fast steady-state. I’ll assume that the dependent variable is ‘y’, and the independent variable is ‘t’.
A general equation for a 1st order process is $y=A[1 – B \centerdot e^ {-\frac {t}{\tau}}]$, where $\tau$ is the process time constant (where 1 $\centerdot\tau$ generates approx 63.2% of the response).
Using your data, $@t=\infty, y=1$, therefore, $A=1$. The general model now becomes $y=1 – B \centerdot e^ {-\frac {t}{\tau}}$.
In addition, $@t=0, y=0$, therefore $B=1$. The model is now
$y=1 – e^ {-\frac {t}{\tau}}$.
Regressing (or curve fitting) your data to this equation yields $y=1 – e^ {-\frac {t}{0.0023}}$. Explicitly, this says that in $\approx 0.0023\space sec$, 63.2% of the response has completed.
I have the benefit of not knowing your specific noise or data variance, so I took the liberty to curve fit a 2-parameter model to see if it tightens the error. That yielded the model $y=1 – 0.99972 \centerdot e^ {-\frac {t}{0.0023}}$.
I’d recommend taking more data on the early portion of the response, and ignore sampling after the response lines out, as there’s nothing interesting to model at steady-state. In addition, the magnitude of the $\exp()$ argument becomes extremely large the further you go out into steady-state.
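The fit described above can be reproduced in a few lines. Here is a sketch using SciPy's `curve_fit` on synthetic data; the time grid, noise level, and the true $\tau = 0.0023\,s$ are stand-ins, since the OP's raw measurements aren't given:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
tau_true = 0.0023                       # seconds; stand-in for the real process
t = np.linspace(0.0, 0.02, 200)
y = 1.0 - np.exp(-t / tau_true) + rng.normal(scale=0.01, size=t.size)

# One-parameter first-order step response y = 1 - exp(-t/tau),
# with A = B = 1 fixed from the boundary conditions derived above.
def step_response(t, tau):
    return 1.0 - np.exp(-t / tau)

(tau_hat,), _ = curve_fit(step_response, t, y, p0=[0.01],
                          bounds=(1e-6, 1.0))  # keep tau positive
print(tau_hat)   # close to 0.0023
```

Note how well the single-parameter model does once the early (transient) portion of the response is well sampled, which is the practical reason for the sampling advice above.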
|