Standard errors of hyperbolic distribution estimates using delta-method?
In the following solution, I assume hyperbPi to be $\pi$. Also, the variances used in the approximations below are simply the squared standard errors calculated by summary after hyperbFit, so $\mathrm{Var}(X)=\mathrm{SE}(X)^2$.

To calculate the approximation using the delta-method, we need the partial derivatives of the transformation functions $g_{\alpha}(\zeta, \pi, \delta)$ and $g_{\beta}(\zeta, \pi, \delta)$. The transformation functions for $\alpha$ and $\beta$ are given by:

$$ \begin{align} g_{\alpha}(\zeta, \pi, \delta) &=\frac{\zeta\sqrt{1 + \pi^{2}}}{\delta}\\ g_{\beta}(\zeta, \pi, \delta) &= \frac{\zeta\pi}{\delta} \end{align} $$

The partial derivatives of the transformation function for $\alpha$ are then:

$$ \begin{align} \frac{\partial}{\partial \zeta} g_{\alpha}(\zeta, \pi, \delta) &=\frac{\sqrt{1+\pi^{2}}}{\delta}\\ \frac{\partial}{\partial \pi} g_{\alpha}(\zeta, \pi, \delta) &= \frac{\pi\zeta}{\sqrt{1+\pi^{2}}\,\delta}\\ \frac{\partial}{\partial \delta} g_{\alpha}(\zeta, \pi, \delta) &= -\frac{\sqrt{1+\pi^{2}}\,\zeta}{\delta^{2}} \end{align} $$

The partial derivatives of the transformation function for $\beta$ are:

$$ \begin{align} \frac{\partial}{\partial \zeta} g_{\beta}(\zeta, \pi, \delta) &=\frac{\pi}{\delta}\\ \frac{\partial}{\partial \pi} g_{\beta}(\zeta, \pi, \delta) &= \frac{\zeta}{\delta}\\ \frac{\partial}{\partial \delta} g_{\beta}(\zeta, \pi, \delta) &= -\frac{\pi\zeta}{\delta^{2}} \end{align} $$

Applying the delta-method to the transformations, we get the following approximation for the variance of $\alpha$ (take square roots to get the standard errors):

$$ \mathrm{Var}(\alpha)\approx \frac{1+\pi^{2}}{\delta^{2}}\cdot \mathrm{Var}(\zeta)+\frac{\pi^{2}\zeta^{2}}{(1+\pi^{2})\delta^{2}}\cdot \mathrm{Var}(\pi) + \frac{(1+\pi^{2})\zeta^{2}}{\delta^{4}}\cdot \mathrm{Var}(\delta) + \\ 2\times \left[ \frac{\pi\zeta}{\delta^{2}}\cdot \mathrm{Cov}(\pi,\zeta) - \frac{(1+\pi^{2})\zeta}{\delta^{3}}\cdot \mathrm{Cov}(\delta,\zeta) - \frac{\pi\zeta^{2}}{\delta^{3}}\cdot \mathrm{Cov}(\delta,\pi)\right] $$

The approximated variance of $\beta$ is:

$$ \mathrm{Var}(\beta)\approx \frac{\pi^{2}}{\delta^{2}}\cdot \mathrm{Var}(\zeta) + \frac{\zeta^{2}}{\delta^{2}}\cdot \mathrm{Var}(\pi) + \frac{\pi^{2}\zeta^{2}}{\delta^{4}}\cdot \mathrm{Var}(\delta) + \\ 2\times \left[ \frac{\pi\zeta}{\delta^{2}}\cdot \mathrm{Cov}(\pi,\zeta) - \frac{\pi^{2}\zeta}{\delta^{3}}\cdot \mathrm{Cov}(\delta, \zeta) - \frac{\pi\zeta^{2}}{\delta^{3}}\cdot \mathrm{Cov}(\pi, \delta) \right] $$

Coding in R

The fastest way to calculate the above approximations is using matrices. Denote by $D$ the row vector containing the partial derivatives of the transformation function for $\alpha$ or $\beta$ with respect to $\zeta, \pi, \delta$. Further, denote by $\Sigma$ the $3\times 3$ variance-covariance matrix of $\zeta, \pi, \delta$. The covariance matrix can be retrieved by typing vcov(my.hyperbFit), where my.hyperbFit is the fitted object. The above approximation of the variance of $\alpha$ is then

$$ \mathrm{Var}(\alpha)\approx D_{\alpha}\Sigma D_{\alpha}^\top $$

The same is true for the approximation of the variance of $\beta$.
In R, this can be easily coded like this:

#-----------------------------------------------------------------------------
# The row vector D of the partial derivatives for alpha
# ("pi", "zeta" and "delta" hold the fitted parameter values;
#  note that this "pi" masks R's built-in constant)
#-----------------------------------------------------------------------------
D.alpha <- matrix(c(
  sqrt(1+pi^2)/delta,                  # differentiate wrt zeta
  (pi*zeta)/(sqrt(1+pi^2)*delta),      # differentiate wrt pi
  -(sqrt(1+pi^2)*zeta)/(delta^2)       # differentiate wrt delta
  ), ncol=3)

#-----------------------------------------------------------------------------
# The row vector D of the partial derivatives for beta
#-----------------------------------------------------------------------------
D.beta <- matrix(c(
  pi/delta,                            # differentiate wrt zeta
  zeta/delta,                          # differentiate wrt pi
  -(pi*zeta)/delta^2                   # differentiate wrt delta
  ), ncol=3)

#-----------------------------------------------------------------------------
# Calculate the approximations of the variances for alpha and beta
# "sigma" denotes the 3x3 covariance matrix
#-----------------------------------------------------------------------------
var.alpha <- D.alpha %*% sigma %*% t(D.alpha)
var.beta  <- D.beta  %*% sigma %*% t(D.beta)

#-----------------------------------------------------------------------------
# The standard errors are the square roots of the variances
#-----------------------------------------------------------------------------
se.alpha <- sqrt(var.alpha)
se.beta  <- sqrt(var.beta)

Using $\log(\zeta)$ and $\log(\delta)$

If the standard errors/variances are only available for $\zeta^{*}=\log(\zeta)$ and $\delta^{*}=\log(\delta)$ instead of $\zeta$ and $\delta$, the transformation functions change to:

$$ \begin{align} g_{\alpha}(\zeta^{*}, \pi, \delta^{*}) &=\frac{\exp(\zeta^{*})\sqrt{1 + \pi^{2}}}{\exp(\delta^{*})}\\ g_{\beta}(\zeta^{*}, \pi, \delta^{*}) &= \frac{\exp(\zeta^{*})\pi}{\exp(\delta^{*})} \end{align} $$

The partial derivatives of the transformation function for $\alpha$ are then:

$$ \begin{align} 
\frac{\partial}{\partial \zeta^{*}} g_{\alpha}(\zeta^{*}, \pi, \delta^{*}) &=\sqrt{1+\pi^{2}}\exp(-\delta^{*}+\zeta^{*})\\ \frac{\partial}{\partial \pi} g_{\alpha}(\zeta^{*}, \pi, \delta^{*}) &=\frac{\pi\exp(-\delta^{*}+\zeta^{*})}{\sqrt{1+\pi^{2}}} \\ \frac{\partial}{\partial \delta^{*}} g_{\alpha}(\zeta^{*}, \pi, \delta^{*}) &=-\sqrt{1+\pi^{2}}\exp(-\delta^{*}+\zeta^{*}) \end{align} $$

The partial derivatives of the transformation function for $\beta$ are:

$$ \begin{align} \frac{\partial}{\partial \zeta^{*}} g_{\beta}(\zeta^{*}, \pi, \delta^{*}) &=\pi\exp(-\delta^{*}+\zeta^{*})\\ \frac{\partial}{\partial \pi} g_{\beta}(\zeta^{*}, \pi, \delta^{*}) &=\exp(-\delta^{*}+\zeta^{*})\\ \frac{\partial}{\partial \delta^{*}} g_{\beta}(\zeta^{*}, \pi, \delta^{*}) &=-\pi\exp(-\delta^{*}+\zeta^{*}) \end{align} $$

Applying the delta-method to the transformations, we get the following approximation for the variance of $\alpha$:

$$ \mathrm{Var}(\alpha)\approx (1+\pi^{2})\exp(-2\delta^{*}+2\zeta^{*})\cdot \mathrm{Var}(\zeta^{*})+\frac{\pi^{2}\exp(-2\delta^{*}+2\zeta^{*})}{1+\pi^{2}}\cdot \mathrm{Var}(\pi) + (1+\pi^{2})\exp(-2\delta^{*}+2\zeta^{*})\cdot \mathrm{Var}(\delta^{*}) + \\ 2\times \left[ \pi\exp(-2\delta^{*}+2\zeta^{*})\cdot \mathrm{Cov}(\pi,\zeta^{*}) - (1+\pi^{2})\exp(-2\delta^{*}+2\zeta^{*})\cdot \mathrm{Cov}(\delta^{*},\zeta^{*}) - \pi\exp(-2\delta^{*}+2\zeta^{*})\cdot \mathrm{Cov}(\delta^{*},\pi)\right] $$

The approximated variance of $\beta$ is:

$$ \mathrm{Var}(\beta)\approx \pi^{2}\exp(-2\delta^{*}+2\zeta^{*})\cdot \mathrm{Var}(\zeta^{*})+\exp(-2\delta^{*}+2\zeta^{*})\cdot \mathrm{Var}(\pi) + \pi^{2}\exp(-2\delta^{*}+2\zeta^{*})\cdot \mathrm{Var}(\delta^{*}) + \\ 2\times \left[\pi\exp(-2\delta^{*}+2\zeta^{*}) \cdot \mathrm{Cov}(\pi,\zeta^{*}) -\pi^{2}\exp(-2\delta^{*}+2\zeta^{*})\cdot \mathrm{Cov}(\delta^{*},\zeta^{*}) -\pi\exp(-2\delta^{*}+2\zeta^{*}) \cdot \mathrm{Cov}(\delta^{*},\pi)\right] $$

Coding in R, using the log-parameters

This time, sigma denotes the covariance matrix, now
including the variances and covariances for $\zeta^{*}=\log(\zeta)$ and $\delta^{*}=\log(\delta)$ instead of $\zeta$ and $\delta$.

#-----------------------------------------------------------------------------
# The row vector D of the partial derivatives for alpha
#-----------------------------------------------------------------------------
D.alpha <- matrix(c(
  sqrt(1+pi^2)*exp(-ldelta + lzeta),           # differentiate wrt lzeta
  (pi*exp(-ldelta + lzeta))/sqrt(1+pi^2),      # differentiate wrt pi
  -sqrt(1+pi^2)*exp(-ldelta + lzeta)           # differentiate wrt ldelta
  ), ncol=3)

#-----------------------------------------------------------------------------
# The row vector D of the partial derivatives for beta
#-----------------------------------------------------------------------------
D.beta <- matrix(c(
  pi*exp(-ldelta + lzeta),                     # differentiate wrt lzeta
  exp(-ldelta + lzeta),                        # differentiate wrt pi
  -pi*exp(-ldelta + lzeta)                     # differentiate wrt ldelta
  ), ncol=3)

#-----------------------------------------------------------------------------
# Calculate the approximations of the variances for alpha and beta
# "sigma" denotes the 3x3 covariance matrix with log(delta) and log(zeta)
#-----------------------------------------------------------------------------
var.alpha <- D.alpha %*% sigma %*% t(D.alpha)
var.beta  <- D.beta  %*% sigma %*% t(D.beta)

#-----------------------------------------------------------------------------
# The standard errors are the square roots of the variances
#-----------------------------------------------------------------------------
se.alpha <- sqrt(var.alpha)
se.beta  <- sqrt(var.beta)
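As a sanity check on the derivations above, the analytic partial derivatives can be compared against central finite differences. The parameter values below are made up purely for illustration, and pi again stands for the fitted hyperbPi, not R's constant:

```r
# Transformation function for alpha, as defined above
g.alpha <- function(zeta, pi, delta) zeta * sqrt(1 + pi^2) / delta

# Made-up parameter values for illustration only
zeta <- 2; pi <- 0.5; delta <- 1.5
h <- 1e-5   # step size for central finite differences

# Analytic gradient (the three partial derivatives from the text)
D.analytic <- c(sqrt(1 + pi^2) / delta,
                (pi * zeta) / (sqrt(1 + pi^2) * delta),
                -(sqrt(1 + pi^2) * zeta) / delta^2)

# Numerical gradient via central differences
D.numeric <- c(
  (g.alpha(zeta + h, pi, delta) - g.alpha(zeta - h, pi, delta)) / (2 * h),
  (g.alpha(zeta, pi + h, delta) - g.alpha(zeta, pi - h, delta)) / (2 * h),
  (g.alpha(zeta, pi, delta + h) - g.alpha(zeta, pi, delta - h)) / (2 * h))

max(abs(D.analytic - D.numeric))   # should be tiny
```

The same check works verbatim for $g_{\beta}$ and for the log-parameterized versions.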
Standard errors of hyperbolic distribution estimates using delta-method?
Possible duplicate: Standard errors of hyperbFit? I could bet some accounts belong to the same person ...
Multivariate response regressions vs many linear models
I think my comments have grown long enough for an answer...

One reason why you might want to look at the multivariate case rather than the univariate cases is when there is a lot of dependence between the variables. It is quite possible for each univariate response to show "no effect" while the multivariate analysis shows a strong one. Consider a plot of a difference between two groups on just two dimensions, where $y$ and $x$ are both DVs and the grouping variable (a red/black indicator) is the lone IV in the 'regression'. The issue is that the quantity whose mean really differs between the two groups is neither the variable $X$ nor the variable $Y$ (that is, $\mu_{X2}-\mu_{X1}$ is almost zero, and the same holds for $Y$), but a particular linear combination - in the example, $Y-X$ - on which the means of the two groups differ strongly. In that case univariate $t$-tests find nothing, but a multivariate test sees it easily (with a single IV, the group indicator, this can be done by regression and multivariate regression). The same issue applies to other, less simple regressions.
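This can be illustrated with a small simulation (all numbers made up): two highly correlated DVs whose group means differ only along the $Y-X$ direction. A sketch, assuming the MASS package is available for mvrnorm:

```r
library(MASS)   # for mvrnorm
set.seed(1)

n     <- 50
Sigma <- matrix(c(1, 0.99, 0.99, 1), 2)        # strong dependence between DVs
g1 <- mvrnorm(n, mu = c(0,    0),   Sigma = Sigma)
g2 <- mvrnorm(n, mu = c(0.2, -0.2), Sigma = Sigma)  # shift along y - x only

Y     <- rbind(g1, g2)
group <- factor(rep(1:2, each = n))

# Univariate tests on each DV separately: the shift is small relative to
# each marginal SD, so these typically find nothing
t.test(Y[, 1] ~ group)$p.value
t.test(Y[, 2] ~ group)$p.value

# Multivariate test on (x, y) jointly: the y - x difference is huge
# relative to the (tiny) variance in that direction
summary(manova(Y ~ group))
```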
Data processing before applying SVM
It's advisable to scale all inputs to a common interval ($[-1,1]$ or $[0,1]$ are popular choices). That way you won't get any bias towards specific inputs that happen to have large values. Scaling can have a large effect on accuracy. Make sure to use the same scaling factors on both the training and the testing data. For more information, you can have a look at a practical guide to SVM classification.
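A minimal sketch of what "same scaling factors" means, using made-up data: scale() stores the training centers and scales as attributes, which can then be reused for the test set.

```r
set.seed(42)
train <- matrix(rnorm(20 * 3, mean = 5, sd = 2), ncol = 3)  # toy training data
test  <- matrix(rnorm(10 * 3, mean = 5, sd = 2), ncol = 3)  # toy test data

# Center and scale using the TRAINING data only
train.scaled <- scale(train)
ctr <- attr(train.scaled, "scaled:center")
scl <- attr(train.scaled, "scaled:scale")

# Apply the *same* factors to the test data -- do not rescale it independently
test.scaled <- scale(test, center = ctr, scale = scl)
```

Rescaling the test set with its own means and SDs would put the two sets on subtly different scales and distort predictions.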
Applying Bayes: Estimating a bimodal distribution
EM algorithm

In response to @Daniel Johnson, I want to quickly show you how you can fit the EM algorithm in R. Use the package mixtools (click for a link). Then, you can use the normalmixEM function with the option k=2 to estimate the parameters of a two-component Gaussian mixture distribution. Here is an example:

# Let's look at the Old Faithful data
library(mixtools)
data(faithful)
plot(density(faithful$waiting), las=1, col="steelblue", lwd=2, main="")

The distribution is clearly bimodal, so we fit a mixture model with 2 components (k=2):

out <- normalmixEM(faithful$waiting, k=2, epsilon=1e-03, fast=TRUE)
summary(out)

summary of normalmixEM object:
          comp 1    comp 2
lambda  0.361283  0.638717
mu     54.628096 80.099412
sigma   5.882584  5.859425
loglik at estimate:  -1034.002

The first normal distribution has an estimated mean of $54.6$ with a standard deviation of $5.88$, the second a mean of $80.1$ with a standard deviation of $5.86$. This corresponds nicely to the peaks in the graph above.

Bayesian methods

To estimate the mixture model with Bayesian methodology, use the bayesmix package for R (click here for link). You'll need to install JAGS on your computer first, though (just download the executable file and install it). To illustrate its use, I re-run the above example with the Old Faithful data. I choose independent (option independence) and uninformative priors (option priorsUncertain).
Further, we run 10000 MCMC samples (option n.iter=10000) and discard the first 1000 as burn-in samples (option burn.in=1000):

library(bayesmix)
data(faithful)
bayesmod <- BMMmodel(faithful$waiting, k=2,
                     priors=list(kind="independence",
                                 parameter="priorsUncertain",
                                 hierarchical=NULL))  # k=2 for two components
jcontrol <- JAGScontrol(variables=c("mu", "tau", "eta", "S"),
                        burn.in=1000, n.iter=10000, seed=10)
z <- JAGSrun(faithful$waiting, model=bayesmod, control=jcontrol,
             tmp=FALSE, cleanup=TRUE)
zSort <- Sort(z, by="mu")
zSort

The model output is ("mu" denotes the estimates of the means and "sigma2" the estimates of the variances):

Markov Chain Monte Carlo (MCMC) output:
Start = 1001
End = 11000
Thinning interval = 1

Empirical mean, standard deviation and 95% CI for eta
         Mean     SD   2.5%  97.5%
eta[1] 0.3622 0.0318 0.3013 0.4251
eta[2] 0.6378 0.0318 0.5749 0.6987

Empirical mean, standard deviation and 95% CI for mu
       Mean     SD  2.5% 97.5%
mu[1] 54.63 0.7365 53.25 56.12
mu[2] 80.08 0.5128 79.02 81.05

Empirical mean, standard deviation and 95% CI for sigma2
           Mean    SD  2.5% 97.5%
sigma2[1] 36.11 7.209 24.61 52.76
sigma2[2] 35.38 5.066 26.90 46.34

The estimated means are $54.63$ and $80.08$ with standard deviations (square roots of the variances "sigma2") of $\sqrt{36.11}\approx 6.01$ and $\sqrt{35.38}\approx 5.95$. This is very close to the estimates we calculated with normalmixEM.
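As a quick plausibility check (a sketch using the estimates reported above), you can plug the EM estimates back into the two-component normal mixture density; it should integrate to one and reproduce the two peaks:

```r
# Estimates reported by normalmixEM above
lambda <- c(0.361283, 0.638717)
mu     <- c(54.628096, 80.099412)
sigma  <- c(5.882584, 5.859425)

# Fitted two-component normal mixture density
mix.density <- function(x)
  lambda[1] * dnorm(x, mu[1], sigma[1]) +
  lambda[2] * dnorm(x, mu[2], sigma[2])

# Overlay it on the range of the waiting times
curve(mix.density, from = 40, to = 100, las = 1, lwd = 2, col = "steelblue")
```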
Applying Bayes: Estimating a bimodal distribution
I think "A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models" is exactly what you're looking for. Particularly the third section: Finding Maximum Likelihood Mixture Densities Parameters via EM.
Hamming distance for strings with different length
Hamming distance fundamentally assumes that the input strings are the same length. You can generalize Hamming distance a bit to allow for insertions and deletions, arriving at the Levenshtein distance. How long are your strings? The edit distance is much harder to compute for long strings.
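For what it's worth, base R already ships a Levenshtein implementation (adist), which handles strings of different length, while Hamming distance for equal-length strings is a one-liner; the hamming helper below is just an illustrative name:

```r
# Levenshtein (edit) distance: "kitten" -> "sitting" takes 3 edits
adist("kitten", "sitting")      # 3

# Hamming distance: only defined for equal-length strings
hamming <- function(a, b) {
  stopifnot(nchar(a) == nchar(b))
  sum(strsplit(a, "")[[1]] != strsplit(b, "")[[1]])
}
hamming("karolin", "kathrin")   # 3
```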
Using varimax-rotated PCA components as predictors in linear regression
Standardized (to unit variance) principal components after an orthogonal rotation, such as varimax, are simply rotated standardized principal components (by "principal component" I mean PC scores). In linear regression, scaling of individual predictors has no effect and replacing predictors by their linear combinations (e.g. via a rotation) has no effect either. This means that using any of the following in a regression: "raw" principal components (projections on the cov. matrix eigenvectors), standardized principal components, rotated [standardized] principal components, arbitrarily scaled rotated [standardized] principal components, would lead to exactly the same regression model with identical $R^2$, predictive power, etc. (Individual regression coefficients will of course depend on the normalization and rotation choice.) The total variance captured by the raw and by the rotated PCs is the same. This answers your main question. However, you should be careful with your workflows, as it is very easy to get confused and mess up the calculations. The simplest way to obtain standardized rotated PC scores is to use psych::principal function: psych::principal(data, rotate="varimax", nfactors=k, scores=TRUE) Your workflow #2 can be more tricky than you think, because loadings after varimax rotation are not orthogonal, so to obtain the scores you cannot simply project the data onto the rotated loadings. See my answer here for details: How to compute varimax-rotated principal components in R? Your workflow #3 is probably also wrong, at least if you refer to the psych::fa function. It does not do PCA; the fm="pa" extraction method refers to "principal factor" method which is based on PCA, but is not identical to PCA (it is an iterative method). As I wrote above, you need psych::principal to perform PCA. See my answer in the following thread for a detailed account on PCA and varimax: Is PCA followed by a rotation (such as varimax) still PCA?
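The invariance claim above is easy to check numerically. The following sketch (my own illustration, using random data as a stand-in for PC scores rather than an actual PCA/varimax run) shows that rotating the predictors by any orthogonal matrix leaves the least-squares fit unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))                  # stand-in for PC scores
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(100)

def fit(X, y):
    """Least-squares fitted values."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

# a random orthogonal matrix plays the role of the varimax rotation
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
print(np.allclose(fit(X, y), fit(X @ Q, y)))       # identical predictions
```

Because X and XQ span the same column space, the projection of y (and hence $R^2$) is identical; only the individual coefficients change.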
32,109
How to calculate the standard error of the marginal effects in interactions (robust regression)?
If you treat $Z$ as a non-random variable, then the marginal effect is $b_1 + b_3 \cdot Z$, a function of $Z$. The variance of this weighted sum is $Var(b_1) + Var(b_3) \cdot Z^2 + 2 \cdot Z \cdot Cov(b_1,b_3),$ and the standard error is just the square root of that. You can get the covariance from the relevant off-diagonal element of the variance-covariance matrix of the coefficients. The variances will be found on the diagonal. If your software does not return it, you can estimate the variance-covariance matrix as \begin{equation}V=\frac{\hat e'\hat e}{n-k}(D'D)^{-1},\end{equation} where $\hat e$ is the vector of residuals $\hat e=y-D\hat \beta$, and the $D$ matrix contains a column of ones, $X$, $Z$ and their interaction, $n$ is the number of observations and $k$ is the number of variables.
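A minimal sketch of the delta computation (the variance and covariance values below are hypothetical, standing in for entries read off your software's variance-covariance matrix):

```python
import math

def marginal_effect_se(var_b1, var_b3, cov_b1_b3, z):
    """SE of the marginal effect b1 + b3*z, using the
    variance formula for a weighted sum of coefficients."""
    return math.sqrt(var_b1 + var_b3 * z**2 + 2 * z * cov_b1_b3)

# hypothetical entries of the coefficient variance-covariance matrix:
print(round(marginal_effect_se(0.04, 0.01, -0.005, 2.0), 4))
```

Evaluating the SE at several values of $Z$ traces out the familiar confidence band around the marginal-effect line.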
32,110
Analytical approximation of probability of one beta-distributed var being greater than another?
Use this closed-form approximation. The author claims it is 2 orders of magnitude faster to evaluate. To abide by CV rules I will cite the key information from the intro of this very clear paper. The key idea in that paper is that $$P(X_B>Y_B)\approx P(X_N>Y_N)$$ where $X_B$ and $Y_B$ are independent beta random variables and $X_N$ and $Y_N$ are their normal approximations formed by moment matching. The author shows that these approximations are rather accurate, even for small values of $(a_1,b_1)$ and $(a_2,b_2)$. For example: when these parameters take integer values between 1 and 10 inclusive, the average absolute error is 0.006676. Edit: updated link.
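The moment-matching idea can be sketched directly (my own illustration of the approximation, not code from the paper): match each beta's mean and variance to a normal, then $P(X_N>Y_N)$ is a single standard-normal CDF evaluation.

```python
import math

def beta_moments(a, b):
    """Mean and variance of a Beta(a, b) random variable."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

def prob_x_greater_y(a1, b1, a2, b2):
    """Normal approximation to P(X > Y) for independent
    X ~ Beta(a1, b1) and Y ~ Beta(a2, b2), by moment matching."""
    m1, v1 = beta_moments(a1, b1)
    m2, v2 = beta_moments(a2, b2)
    z = (m1 - m2) / math.sqrt(v1 + v2)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

print(prob_x_greater_y(3, 3, 3, 3))  # symmetric case: exactly 0.5
```

By symmetry the approximation is exact when the two betas share the same parameters, and it should be close (per the paper, average absolute error ~0.007 for small integer parameters) elsewhere.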
32,111
Analytical approximation of probability of one beta-distributed var being greater than another?
I'm uncertain if this is answering the question you are asking. But you may want to check out Asymptotics of Evan Miller's Bayesian A/B formula. What they effectively are solving for is P(Conversion Rate of A > Conversion Rate of B) given two Beta distributions.
32,112
Is the square root of a positive semi-definite matrix a unique result?
Let a matrix $\mathbb{V}$ have "square roots" $\mathbb{A}$ and $\mathbb{B}$; that is, $$\mathbb{V = AA^\intercal = BB^\intercal}.$$ For simplicity, suppose the original matrix $\mathbb{V}$ is invertible (which is equivalent to being positive definite under the assumptions). Then $\mathbb{A}$, $\mathbb{B}$, and their transposes must also be invertible because $$ \mathbb{I} = \mathbb{V^{-1}V} = \mathbb{V^{-1}AA^\intercal} = \mathbb{(V^{-1}A)A^\intercal}$$ exhibits a left inverse for $\mathbb{A}^\intercal$, implying $\mathbb{A}$ is invertible too; the same argument applies to $\mathbb{B}$, of course. We exploit these inverses to write $$\mathbb{(B^{-1}A)(B^{-1}A)^\intercal = B^{-1} (A A^\intercal) B^{-1}{^\intercal} = B^{-1} (V) B^{-1}{^\intercal}=B^{-1}(BB^\intercal)B^{-1}{^\intercal} = I\ I = I},$$ showing that $\mathbb{O=B^{-1}A}$ is an orthogonal matrix: that is, $\mathbb{OO^\intercal=I}$. The set of such matrices forms two smooth real manifolds of dimension $n(n-1)/2$ when $\mathbb{V}$ is $n$ by $n$. Geometrically, orthogonal matrices correspond to rotations or to a reflection followed by rotations, depending upon the sign of their determinant. Conversely, when $\mathbb{A}$ is a square root of $\mathbb{V}$, similar (but easier) calculations show that $\mathbb{AO}$ is also a square root for any orthogonal matrix $\mathbb{O}$--and it does not matter here whether $\mathbb{A}$ is invertible or not. It is also easy to see that multiplication by an orthogonal matrix (not equal to $\mathbb{I}$) really does alter the square root of an invertible matrix. After all, $\mathbb{AO = A}$ immediately implies $\mathbb{O = A^{-1}A = I}$. This shows that the square roots of positive-definite matrices can be put into a one-to-one correspondence with the orthogonal matrices. This demonstrates that square roots of positive-definite matrices are determined only up to multiplication by orthogonal matrices. 
For the semi-definite case, the situation is more complicated, but at a minimum, multiplication by an orthogonal matrix preserves the property of being a square root. If you wish to apply additional criteria to your square root you might be able to identify a unique one or at least narrow down the ambiguity: that will depend on your particular preferences.
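A small numeric illustration of the argument above (my own sketch, using numpy): take two different square roots of the same positive-definite matrix, the Cholesky factor and the symmetric square root, and verify that $\mathbb{O}=\mathbb{B}^{-1}\mathbb{A}$ is orthogonal.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
V = M @ M.T + 4 * np.eye(4)         # positive definite by construction

A = np.linalg.cholesky(V)           # one square root: V = A A^T
w, U = np.linalg.eigh(V)
B = U @ np.diag(np.sqrt(w)) @ U.T   # another: the symmetric square root

O = np.linalg.inv(B) @ A            # should be orthogonal
print(np.allclose(O @ O.T, np.eye(4)))  # True
print(np.allclose(B @ B.T, V))          # both factors reproduce V
```

Conversely, `A @ O2` for any orthogonal `O2` is again a square root of `V`, matching the one-to-one correspondence described in the answer.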
32,113
How to generate a Bernoulli variate with bias $a/\mathbb{E}[X]$ given a sampler of $X$ and uniform variates?
This is similar to a "Bernoulli factory" problem. This paper by Nacu and Peres shows that given a way to simulate from a $Bernoulli(p)$, it is possible to simulate from a $Bernoulli(f(p))$ iff $\exists n, \forall p,\ \min(f(p),1-f(p)) \ge \min(p,1-p)^n$. With your notations, depending on the values of $a$ and $b$, you may or may not be able to get this inequality. This paper by Łatuszyński et al. might also be useful for the implementation.
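To give a feel for what a Bernoulli factory is (a deliberately simple example of my own, for $f(p)=p^2$ rather than the OP's target function): given only a $Bernoulli(p)$ coin, a $Bernoulli(p^2)$ coin is obtained by requiring two independent successes.

```python
import random

def bernoulli(p, rng):
    """One flip of a Bernoulli(p) coin."""
    return rng.random() < p

def factory_p_squared(p, rng):
    """Simplest Bernoulli factory, f(p) = p^2:
    two independent Bernoulli(p) flips must both succeed."""
    return bernoulli(p, rng) and bernoulli(p, rng)

rng = random.Random(0)
n = 100_000
freq = sum(factory_p_squared(0.5, rng) for _ in range(n)) / n
print(freq)  # should be close to 0.5**2 = 0.25
```

For more general $f$, the constructions in the cited papers are needed, and the Nacu-Peres inequality above decides whether a factory exists at all.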
32,114
Exponential Distribution - Rate - Bayesian Prior?
Edit: This answer seems to entail some confusion about different ways to parameterize the Gamma distribution. It's probably best to ignore it. I think I know what's going on. It has to do with a decision about what you want your prior to be uninformative about: the rate parameter or the distribution of survival times. Gamma(0.001, 0.001) has a lot of very small values (close to 0). When the Exponential distribution's rate parameter is close to zero ($\epsilon$), then it has a very high expected value ($1/\epsilon$) and is very flat over a wide range of values. In R, you can see this by plotting an exponential distribution with rate .0001 from 0 to 100: curve(dexp(x, .0001), ylim = c(0, 1E-4), to = 100) It's essentially uniform (i.e. uninformative) over this range. It's much less flat if you look all the way out to 10000, though, which is why you might prefer an even smaller rate parameter. Hope this makes sense.
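The flatness claim can be checked without plotting (a quick check of my own, mirroring the R `dexp` call in Python): with rate $10^{-4}$ the density barely changes over $[0, 100]$ but drops noticeably by $10{,}000$.

```python
import math

def dexp(x, rate):
    """Exponential density, rate parameterization (as in R's dexp)."""
    return rate * math.exp(-rate * x)

rate = 1e-4
ratio_short = dexp(100, rate) / dexp(0, rate)     # exp(-0.01), nearly 1
ratio_long = dexp(10_000, rate) / dexp(0, rate)   # exp(-1), far from 1
print(round(ratio_short, 3), round(ratio_long, 3))
```

So the prior is "essentially uniform" only relative to the range of survival times you care about, which is the point of the answer.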
32,115
Exponential Distribution - Rate - Bayesian Prior?
What does uninformative prior mean to you? If you mean the Jeffreys prior, then it is $\beta \sim \textrm{Gamma}(0,0)$ as @Daniel points out. If you mean a flat prior (which isn't uninformative, although it gives the illusion of being uninformative), then it is simply $\beta \sim \textrm{Gamma}(1,0)$, which you can verify by looking at the pdf to be an improper flat prior. However, if there is a big difference between the priors, then you probably don't have enough data.
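The last point can be made concrete via conjugacy (my own sketch, shape-rate parameterization): a $\textrm{Gamma}(a,b)$ prior on an exponential rate yields a $\textrm{Gamma}(a+n,\, b+\sum x_i)$ posterior, so with enough data the Jeffreys and flat priors give nearly identical answers.

```python
def posterior(a, b, data):
    """Conjugate update: Gamma(a, b) prior on an exponential rate,
    posterior is Gamma(a + n, b + sum(data))."""
    return a + len(data), b + sum(data)

data = [0.5] * 200                   # plenty of data, sample mean 0.5
jeffreys = posterior(0, 0, data)     # Gamma(0, 0) prior
flat = posterior(1, 0, data)         # Gamma(1, 0) improper flat prior
# posterior means (shape/rate): 2.0 vs 2.01
print(jeffreys[0] / jeffreys[1], flat[0] / flat[1])
```

With only a handful of observations the two posterior means would differ appreciably, which is the "you probably don't have enough data" warning in practice.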
32,116
Exponential Distribution - Rate - Bayesian Prior?
Using $\text{Gamma}(a,b), \; a\approx b \approx 0 $ is uniform on the logarithmic scale. This gives a non-informative prior, in terms of what your model wants to do with the input data. As an example of this, you can see this: http://jmlr.csail.mit.edu/papers/volume1/tipping01a/tipping01a.pdf (search for "gamma"). Also check this out: http://www.stats.org.uk/priors/noninformative/YangBerger1998.pdf
32,117
Finding the distribution of a statistic
It's a trick. Conditionally on $X_{3,i} = x$ we have that $W_i$ equals $$\frac{X_{1,i} + X_{2,i}x}{\sqrt{1 + x^2}} \sim \mathcal{N}(0, 1).$$ This follows from the fact that for fixed $x$ this is a simple linear transformation of the two independent $\mathcal{N}(0,1)$-distributed variables $X_{1,i}$ and $X_{2,i}$. Whence, $W_i \mid X_{3,i} = x$ has a normal distribution. The conditional mean is seen to be 0 and the conditional variance is (by the independence assumptions) $$V(W_i \mid X_{3,i} = x) = \frac{V(X_{1,i}) + V(X_{2,i})x^2}{1 + x^2} = \frac{1 + x^2}{1 + x^2} = 1.$$ Since the conditional distribution of $W_i \mid X_{3,i} = x$ doesn't depend upon $x$ we conclude that it is its marginal distribution as well, that is, $W_i \sim \mathcal{N}(0,1).$ The rest follows from standard results on averages and residuals for independent normal random variables. Basu's theorem is not needed for anything.
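A quick Monte Carlo sanity check of the conditional argument (my own illustration): simulating $W = (X_1 + X_2 X_3)/\sqrt{1+X_3^2}$ with three iid standard normals should produce a sample with mean near 0 and variance near 1.

```python
import math
import random
import statistics

rng = random.Random(42)

def w(rng):
    """One draw of W = (X1 + X2*X3) / sqrt(1 + X3^2), X1, X2, X3 iid N(0, 1)."""
    x1, x2, x3 = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
    return (x1 + x2 * x3) / math.sqrt(1 + x3 ** 2)

sample = [w(rng) for _ in range(50_000)]
# marginally W should be standard normal
print(round(statistics.mean(sample), 3), round(statistics.variance(sample), 3))
```

This matches the derivation: the conditional distribution given $X_3 = x$ is $\mathcal{N}(0,1)$ for every $x$, hence so is the marginal.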
32,118
Interpretation of the regression coefficient of a proportion type independent variable
The interpretation for the regression coefficient is always for a 1 unit change regardless of what a "unit" is. In your case, if the IV is a proportion falling between 0 and 1, a one unit change is the same as 100%. If instead you want to look at the "effect" of a 1% change, simply multiply your IV by 100 before using it in the regression.
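A small simulation of my own makes the rescaling concrete: fitting the same simple regression with the proportion on a 0-1 scale and on a 0-100 scale shrinks the slope by exactly a factor of 100.

```python
import random

rng = random.Random(7)
prop = [rng.random() for _ in range(500)]               # proportion in [0, 1]
y = [2.0 - 1.3 * p + rng.gauss(0, 0.1) for p in prop]   # hypothetical outcome

def slope(x, y):
    """OLS slope for simple linear regression."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

b_prop = slope(prop, y)                      # per 1-unit (i.e. 100%) change
b_pct = slope([100 * p for p in prop], y)    # per 1-percentage-point change
print(round(b_prop / b_pct))                 # exactly 100
```

Both fits carry identical information; only the "unit" attached to the coefficient changes.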
32,119
Interpretation of the regression coefficient of a proportion type independent variable
They are the same, aren't they? Let's take the first model, coefficient is -1.3 and variable is in original scale (0-1). So if the variable increases by 0.01, say from 0.08 to 0.09, then the log odds are down by 1.3*0.01 = 0.013. Now the second model, coefficient is -0.013 and variable is multiplied by 100 (0 - 100). So if the variable increases by 1 from 8 to 9 (which is actually from 0.08 to 0.09), then the log odds are down by 0.013*1=0.013.
32,120
Central Moments of Symmetric Distributions
This answer aims to make a demonstration that is as elementary as possible, because such things frequently get to the essential idea. The only facts needed (beyond the simplest kind of algebraic manipulations) are linearity of integration (or, equivalently, of expectation), the change of variables formula for integrals, and the axiomatic result that a PDF integrates to unity. Motivating this demonstration is the intuition that when $f_X$ is symmetric about $a$, then the contribution of any quantity $G(x)$ to the expectation $\mathbb{E}_X(G(X))$ will have the same weight as the quantity $G(2a-x)$, because $x$ and $2a-x$ are on opposite sides of $a$ and equally far from it. Provided, then, that $G(x) = -G(2a-x)$ for all $x$, everything cancels and the expectation must be zero. The relationship between $x$ and $2a-x$, then, is our point of departure. Notice, by writing $y = x + a$, that the symmetry can just as well be expressed by the relationship $$f_X(y) = f_X(2a-y)$$ for all $y$. For any measurable function $G$, the one-to-one change of variable from $x$ to $2a-x$ changes $dx$ to $-dx$, while reversing the direction of integration, implying $$\mathbb{E}_X(G(X)) = \int G(x) f_X(x)dx = \int G(x) f_X(2a - x)dx = \int G(2a-x)f_X(x)dx.$$ Assuming this expectation exists (that is, the integral converges), the linearity of the integral implies $$\int \left(G(x) - G(2a - x)\right)f_X(x)dx = 0.$$ Consider the odd moments about $a$, which are defined as the expectations of $G_{k,a}(X) = (X-a)^k$, $k = 1, 3, 5, \ldots$. In these cases $$\eqalign{ G_{k,a}(x) - G_{k,a}(2a-x) &= (x-a)^k - (2a-x-a)^k \\&= (x-a)^k - (a-x)^k \\ &= (1^k - (-1)^k)(x-a)^k \\&= 2(x-a)^k,}$$ precisely because $k$ is odd. Applying the preceding result gives $$0 = \int \left(G_{k,a}(x) - G_{k,a}(2a - x)\right)f_X(x)dx = 2\int (x-a)^k f_X(x)dx.$$ Because the right hand side is twice the $k$th moment about $a$, dividing by $2$ shows that this moment is zero whenever it exists. 
Finally, the mean (assuming it exists) is $$\mu_X = \mathbb{E}_X(X) = \int x f_X(x)dx = \int (2a-x)f_X(x)dx.$$ Once again exploiting linearity, and recalling that $\int f_X(x)dx=1$ because $f_X$ is a probability distribution, we can rearrange the last equality to read $$2\mu_X = 2\int x f_X(x)dx = 2a\int f_X(x)dx = 2a\times 1 = 2a$$ with the unique solution $\mu_X = a$. Therefore all our previous calculations of moments about $a$ are really the central moments, QED. Postword The need to divide by $2$ in several places is related to the fact that there is a group of order $2$ acting on the measurable functions (namely, the group generated by the reflection in the line around $a$). More generally, the idea of a symmetry can be generalized to the action of any group. The theory of group representations implies that when the character of that action on a function is not trivial, it is orthogonal to the trivial character, and that means the expectation of the function must be zero. The orthogonality relations involve adding (or integrating) over the group, whence the size of the group constantly appears in denominators: its cardinality when it is finite or its volume when it is compact. The beauty of this generalization becomes apparent in applications with manifest symmetry, such as in mechanical (or quantum mechanical) equations of motion of symmetrical systems exemplified by a benzene molecule (which has a 12 element symmetry group). (The QM application is most relevant here because it explicitly calculates expectations.) Values of physical interest--which typically involve multidimensional integrals of tensors--can be computed with no more work than was involved here, simply by knowing the characters associated with the integrands. For instance, the "colors" of various symmetric molecules--their spectra at various wavelengths--can be determined ab initio with this approach.
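The cancellation argument can be verified numerically (my own sketch, using the $\mathcal{N}(a,1)$ density as the symmetric $f_X$ and a midpoint-rule integral): the odd moments about $a$ vanish while the even ones do not.

```python
import math

a = 2.0  # center of symmetry

def f(x):
    """A density symmetric about a (here the N(a, 1) density)."""
    return math.exp(-(x - a) ** 2 / 2) / math.sqrt(2 * math.pi)

def moment_about_a(k, lo=-8.0, hi=12.0, n=100_000):
    """Midpoint-rule approximation of the k-th moment about a,
    integrating (x - a)^k f(x) over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += (x - a) ** k * f(x)
    return total * h

print(moment_about_a(1), moment_about_a(3), moment_about_a(2))
```

The first and third moments about $a$ come out numerically zero, while the second is the variance (1 here), exactly as the pairing of $x$ with $2a-x$ predicts.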
Central Moments of Symmetric Distributions
32,121
What are the red lines in quantile regression plot (quantreg package)?
Here is a reproduction of the Engel regression from the quantreg vignette. To make the plot more readable, I focus on the second (slope) parameter:

library(quantreg)
data(engel)
xx <- engel$income - mean(engel$income)
fit1 <- summary(rq(engel$foodexp ~ xx, tau = 1:9/10))
plot(fit1, parm = 2)

I can now overplot, in green, the usual OLS estimate and its (5-95)% confidence interval:

fit2 <- lm(foodexp ~ xx, data = engel)
abline(h = summary(fit2)$coef[2], col = "green", lwd = 3, lty = 2)
abline(h = summary(fit2)$coef[2] + qt(0.95, fit2$df) * summary(fit2)$coef[4], col = "green", lwd = 3, lty = 2)
abline(h = summary(fit2)$coef[2] + qt(0.05, fit2$df) * summary(fit2)$coef[4], col = "green", lwd = 3, lty = 2)

and they match the dotted red lines on the original plot.
32,122
Where does the Gaussian function come from?
The original derivation came from de Moivre, who used it as an approximation to the binomial distribution. It was later derived independently several times in other contexts. http://en.wikipedia.org/wiki/Abraham_de_Moivre#Probability http://en.wikipedia.org/wiki/De_Moivre%E2%80%93Laplace_theorem
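As a quick numerical illustration of the de Moivre-Laplace approximation (a Python sketch with arbitrarily chosen $n$ and $p$, not part of the original derivation), the Binomial$(n, p)$ pmf is already very close to the $N(np,\, np(1-p))$ density at $n = 100$:

```python
import math

def binom_pmf(n, k, p):
    """Exact Binomial(n, p) probability of k successes."""
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def normal_pdf(x, mu, sd):
    """Normal density, the approximating curve de Moivre derived."""
    return math.exp(-(x - mu) ** 2 / (2.0 * sd ** 2)) / (sd * math.sqrt(2.0 * math.pi))

n, p = 100, 0.5
mu, sd = n * p, math.sqrt(n * p * (1.0 - p))
worst = max(abs(binom_pmf(n, k, p) - normal_pdf(k, mu, sd)) for k in range(n + 1))
print(worst)  # the worst pointwise gap is already well under 1e-3
```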
32,123
Where does the Gaussian function come from?
The normal distribution is the distribution that is expected when measurements are made up from a large number of 'noise' components that are all distributed in the same way as each other.

The principle is sometimes illustrated with an example using dice. Throw one die a large number of times and plot the distribution of values. Assuming the die is fair, you will end up with a (discrete) uniform distribution from 1 to 6. Now do that again but use two dice. You get a stepwise triangular distribution from 2 to 12. Add a third die and the distribution is a little bit bell-shaped and the steps are small because there are now 16 different possible values. With four dice the distribution looks very much like a normal distribution, and with an infinite number of dice it is a normal distribution. Somewhere between four and an infinite number of dice (I often say 12) are needed for a distribution that is, for practical purposes, indistinguishable from the normal distribution given by the normal formula.

Many biological and physical measurements have lots of sources of inaccuracy and noise, and so the distributions of those measurements will be approximately normal, as long as the distributions of those components are similar. If one noise component is much larger than the others, then the normal distribution will not result. Imagine if one die out of a set of a dozen had faces marked from 100 to 600 rather than 1 to 6. That die would dominate the other eleven, and so the distribution of the sum of their top faces would be an obvious mixture of a (discrete) uniform 100 to 600 and a nearly continuous, nearly normal 11 to 66. The distributions of the component variations have to be similar, even if they don't need to be normal (they don't even have to be nearly normal if there are a lot of them).

(It is worth noting that many sources of variability have a logarithmic distribution, and so many measurements in biology and physics are more nearly log-normal than normal.)
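The dice example can be verified exactly by convolving the distributions rather than simulating throws. A short Python sketch (not in the original answer):

```python
from collections import Counter

def dice_sum_pmf(n_dice, faces=6):
    """Exact pmf of the sum of n fair dice, built by repeated convolution."""
    pmf = {0: 1.0}
    for _ in range(n_dice):
        nxt = Counter()
        for total, prob in pmf.items():
            for face in range(1, faces + 1):
                nxt[total + face] += prob / faces
        pmf = dict(nxt)
    return pmf

print(len(dice_sum_pmf(2)))  # 11 possible sums (2..12), the triangular case
print(len(dice_sum_pmf(3)))  # 16 possible sums (3..18)
```

Plotting dice_sum_pmf(4) or dice_sum_pmf(12) against a normal density with matching mean and variance shows how quickly the bell shape appears.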
32,124
Distributions on the simplex with correlated components
One way to have a random $\theta=(\theta_1,\dots,\theta_k)$ living on the simplex, without the limitations imposed by the negative covariances of the Dirichlet distribution, is to define $\phi_i=\sum_{j=1}^k c_{ij} \log \theta_j$, for $i=1,\dots,k-1$, where the $(k-1)\times k$ matrix $C=(c_{ij})$ has rank $k-1$. Adding the constraint $\sum_{i=1}^k\theta_i=1$, any $k-1$ dimensional normal distribution may be assigned to $\phi=(\phi_1,\dots,\phi_{k-1})$. Bayesian inference is tractable within this rich class of distributions, introduced and studied by Aitchison in a series of papers (Journal of the Royal Statistical Society $\textbf{B}$, $\textbf{44}$, 139-177, 1982; Journal of the Royal Statistical Society $\textbf{B}$, $\textbf{47}$, 136-146, 1985) and in his book $\textit{The Statistical Analysis of Compositional Data}$, Chapman & Hall: London (1986).
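For concreteness, here is a small Python sketch of sampling from such a logistic-normal distribution, using the additive log-ratio special case $\phi_i = \log(\theta_i/\theta_k)$. The mean and the Cholesky factor of the covariance are made-up values, chosen to induce a strong positive correlation between the first two log-ratios (something a Dirichlet cannot produce on the simplex):

```python
import math, random

random.seed(0)

def rlogisticnormal(mu, chol, n):
    """Draw phi ~ N(mu, Sigma) in R^(k-1), with Sigma = chol @ chol^T,
    then map to the simplex via the inverse additive log-ratio transform."""
    k1 = len(mu)
    samples = []
    for _ in range(n):
        z = [random.gauss(0.0, 1.0) for _ in range(k1)]
        phi = [mu[i] + sum(chol[i][j] * z[j] for j in range(i + 1))
               for i in range(k1)]
        w = [math.exp(v) for v in phi] + [1.0]  # last component is the reference
        s = sum(w)
        samples.append([wi / s for wi in w])
    return samples

# 3-part compositions with corr(phi_1, phi_2) of roughly 0.9:
theta = rlogisticnormal(mu=[0.0, 0.0], chol=[[1.0, 0.0], [0.9, 0.4]], n=2000)
```

With these values the sample covariance between the first two components comes out positive, which no Dirichlet distribution allows.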
32,125
detecting plagiarism on multiple-choice test
Here's a surprisingly vast array of answer-copying indexes, though with little discussion of their merits: http://www.bjournal.co.uk/paper/BJASS_01_01_06.pdf.

There's a field of (educational) psychology called item response theory (IRT) that provides the statistical background for questions like these. If you are American and took the SAT, ACT or GRE, you dealt with a test developed with IRT in mind. The basic postulate of IRT is that each student $i$ is characterized by their ability $a_i$; each question is characterized by its difficulty $b_j$; and the probability of answering a question correctly is $$ \pi(a_i,b_j;c) = {\rm Prob}[\mbox{student $i$ answers question $j$ correctly}] = \Phi( c(a_i-b_j) ) $$ where $\Phi(z)$ is the cdf of the standard normal, and $c$ is an additional sensitivity/discrimination parameter (sometimes it is made question-specific, $c_j$, if there's enough information, i.e., enough test takers, to identify the differences). A hidden assumption here is that, given student $i$'s ability, answers to different questions are independent. This assumption is violated if you have a battery of questions about, say, the same paragraph of text, but let's abstract from that for a minute.

For "Yes/No" questions, this may be the end of the story. For questions with more than two choices, we can make the additional assumption that all wrong choices are equally likely; for a question $j$ with $k_j$ choices, the probability of each wrong choice is $\pi'(a_i,b_j;c) = [1-\pi(a_i,b_j;c)]/(k_j-1)$.

For students of abilities $a_i$ and $a_k$, the probability that they match on their answers to a question with difficulty $b_j$ is $$ \psi(a_i,a_k;b_j,c) = \pi(a_i,b_j;c)\pi(a_k,b_j;c) + (k_j-1)\pi'(a_i,b_j;c)\pi'(a_k,b_j;c) $$ If you like, you can break this into the probability of matching on the correct answer, $\psi_c(a_i,a_k;b_j,c) = \pi(a_i,b_j;c)\pi(a_k,b_j;c)$, and the probability of matching on an incorrect answer, $\psi_i(a_i,a_k;b_j,c) = (k_j-1)\pi'(a_i,b_j;c)\pi'(a_k,b_j;c)$, although from the conceptual framework of IRT this distinction is hardly material.

Now, you could compute the probability of the full observed pattern of matches, but it will probably be combinatorially minuscule. A better measure may be the information in the pairwise pattern of responses, $$ I(i,k) = \sum_j 1\{ \mbox{match}_j \} \ln \psi(a_i,a_k;b_j,c) + 1\{ \mbox{non-match}_j \} \ln [1- \psi(a_i,a_k;b_j,c) ] $$ which you can relate to its expectation, the (negative) entropy $$ E(i,k) = {\rm E}[ I(i,k) ] = \sum_j \psi(a_i,a_k;b_j,c) \ln \psi(a_i,a_k;b_j,c) + (1- \psi(a_i,a_k;b_j,c) ) \ln [1- \psi(a_i,a_k;b_j,c) ] $$ You can do this for all pairs of students, plot them or rank them, and investigate the pairs with the greatest ratios of information to entropy.

The parameters of the test $\{c,b_j, j=1, 2, \ldots\}$ and the student abilities $\{a_i\}$ won't fall out of the blue sky, but they are easily estimated in modern software such as R with lme4 or similar packages:

irt <- glmer(answer ~ 1 + (1|student) + (1|question), family = binomial)

or something very close to this.
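As a numerical sketch of these formulas (Python rather than R; the ability and difficulty values below are made up), $\Phi$ can be evaluated with math.erf:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_correct(a, b, c=1.0):
    """IRT probability that a student of ability a answers a question
    of difficulty b correctly."""
    return Phi(c * (a - b))

def p_match(a_i, a_k, b_j, k_j, c=1.0):
    """Probability that two students give the same answer to a k_j-choice
    question, assuming all wrong choices are equally likely."""
    pi_i, pi_k = p_correct(a_i, b_j, c), p_correct(a_k, b_j, c)
    w_i = (1.0 - pi_i) / (k_j - 1)
    w_k = (1.0 - pi_k) / (k_j - 1)
    return pi_i * pi_k + (k_j - 1) * w_i * w_k

# Two average students on a four-choice question of average difficulty:
print(p_match(0.0, 0.0, 0.0, 4))  # 1/3: 1/4 from the correct answer, 1/12 from wrong ones
```

Summing $\ln \psi$ or $\ln(1-\psi)$ over the questions, according to whether each was a match, then gives $I(i,k)$ for a pair of students.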
32,126
What do Lift and Gain Charts state in the context of an employee turnover model
Sometimes it helps to picture the goal of such an analysis and what a company can do without one. Suppose the company the turnover data belongs to wants to do something against a (possibly) high turnover rate. I can imagine two possible actions:

1. Find out what is driving people to leave in general and fix it (not enough healthcare? No team spirit?).
2. Find the employees who are considering leaving and talk to them, finding out what drives them, to fix the issues specifically for them.

So why does this matter? Lift charts are primarily important for the second use case. Imagine what a company can do when it has decided to invest money in talking to employees one-on-one but does not have a model. The only options are to talk to everyone or to a random sample of a fixed size. Talking to everyone, despite the gain of identifying all potential departers, is way too expensive. But when only a random sample is selected, only a fraction of all potential departers is identified while still spending a lot of money. In both cases the cost per prevented departure is quite high. But when a good model exists, the company can decide to talk only to those with the highest probability of leaving (those with the top scores according to the model), so that more of the potential departers are identified, optimizing the cost per prevented departure.

Take a look again at the first two tables here: http://www2.cs.uregina.ca/~dbd/cs831/notes/lift_chart/lift_chart.html. Let's say that "customers" = "employees" and "positive respondents" = "potential departers" (see data below).

If the company decides it can only spend enough money to talk to 10000 employees, then:

- Without a model, $\frac{20000}{100000} \times 10000 = 2000$ departers are identified.
- With the model (selecting only the top 10000 according to the model score), $\frac{6000}{10000} \times 10000 = 6000$ departers are identified.

This means an improvement by a factor of $\frac{6000}{2000} = 3$, pictured as the point (10%, 3) in the lift chart. It also means that 6000 of the 20000 total departers have been identified, i.e. 30%, pictured as (10%, 30%) in the gain chart. The baseline there is only 10%, because a random sample of 10000 employees identifies only $\frac{10000 \times (20000/100000)}{20000}=\frac{10000}{100000}=0.1$ of all potential departers. The x-axis in both charts shows the percentage of employees contacted, in this specific example 10%.

Appendix

Data used to make this question independent of link rot.

Overall rate:

Total Employees    Total Departers
100000             20000

Effectiveness of the model when employees are contacted in chunks of 10000:

Employees Contacted    Identified Departers
10000                  6000
20000                  10000
30000                  13000
40000                  15800
50000                  17000
60000                  18000
70000                  18800
80000                  19400
90000                  19800
100000                 20000
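These lift and gain numbers can be reproduced directly from the appendix table. A small Python sketch (the variable names are mine):

```python
# Table data: cumulative departers identified per 10000 employees contacted.
total_employees, total_departers = 100_000, 20_000
contacted  = [10_000, 20_000, 30_000, 40_000, 50_000,
              60_000, 70_000, 80_000, 90_000, 100_000]
identified = [6_000, 10_000, 13_000, 15_800, 17_000,
              18_000, 18_800, 19_400, 19_800, 20_000]

base_rate = total_departers / total_employees          # 0.2: hit rate of random contact
gains = [h / total_departers for h in identified]      # fraction of all departers found
lifts = [(h / n) / base_rate for n, h in zip(contacted, identified)]

for n, g, l in zip(contacted, gains, lifts):
    print(f"{n:>7} contacted: gain={g:.0%}, lift={l:.1f}")
```

The first row reproduces the point (10%, 3) on the lift chart and (10%, 30%) on the gain chart; the last row shows that lift necessarily decays to 1 when everyone is contacted.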
32,127
What is the difference between "priors" and "likelihood"?
The likelihood relates your data to a set of parameters. It is typically written as $P(D | \theta)$ (or $\mathcal{L}(\theta | D)$, because the likelihood can be viewed as a function of the parameters, holding the data constant), where $\theta$ contains all of the parameters necessary for the model. For example, suppose we have a bunch of iid data $X = \{x_1, ..., x_n\}$ and we want to see how well it fits a Normal distribution. Then $\theta = \{\mu, \sigma\}$, and $P(D | \theta) = \prod_i \mathcal{N}(x_i; \mu, \sigma)$.

One approach to fitting this model is to choose the parameter values that maximize the likelihood. This is exactly what it sounds like: we take the likelihood function and attempt to maximize it by changing the parameter settings (keeping the observed data constant). This is usually done by computing the derivative of the likelihood w.r.t. each parameter, setting it to 0 and solving (side note: it is common to first take the logarithm of the likelihood function to make the derivatives easier to solve).

Alternatively, we could take a Bayesian approach: assign a prior probability distribution over the parameters and compute the posterior distribution of the parameters, $P(\theta | D) \propto P(D | \theta) P(\theta)$. In this case we treat the parameters as random variables and thus must define a distribution over their possible values. The prior distribution can encode any prior knowledge we may have about the variables. For instance, we may have a good idea of the plausible range for $\mu$ and could thus assign a prior distribution that pushes $\mu$ slightly toward these values.

To recap:

Likelihood: $P(D | \theta)$ links data to parameters.
Prior: $P(\theta)$ distribution over possible parameter values (used in Bayesian analysis).
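To make the likelihood concrete, here is a minimal Python sketch (with a made-up toy sample) that evaluates the Normal log-likelihood and checks that the closed-form maximum-likelihood estimates really do maximize it:

```python
import math

data = [2.1, 1.9, 2.5, 2.3, 1.8, 2.4]  # toy iid sample

def log_likelihood(mu, sigma, xs):
    """log P(D | theta) for iid Normal(mu, sigma) data."""
    return sum(-0.5 * math.log(2.0 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2.0 * sigma ** 2) for x in xs)

# Setting the derivatives of the log-likelihood to zero gives the familiar MLEs:
mu_hat = sum(data) / len(data)
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in data) / len(data))

# Perturbing either parameter can only lower the log-likelihood:
best = log_likelihood(mu_hat, sigma_hat, data)
print(best >= log_likelihood(mu_hat + 0.1, sigma_hat, data))  # True
print(best >= log_likelihood(mu_hat, sigma_hat + 0.1, data))  # True
```

A Bayesian analysis would instead multiply this likelihood by a prior $P(\mu, \sigma)$ and normalize, rather than maximize it.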
32,128
How does the mice imputation function work?
By default, mice will use all the other variables in your dataset to predict each incomplete one. As for averaging, you need to do it after calculating your statistics, not before. For instance, if you want to do a linear regression, you'd do something like this:

library(mice)
mi <- mice(dataset)
mi.reg <- with(data = mi, exp = glm(y ~ x + z))
mi.reg.pool <- pool(mi.reg)
summary(mi.reg.pool)

The summary function will show you the pooled (averaged) coefficients.
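Under the hood, pool() combines the per-imputation results with Rubin's rules. For a single coefficient the arithmetic is simple; here is a stand-alone Python sketch with made-up estimates and variances:

```python
import math

def pool_rubin(estimates, variances):
    """Rubin's rules for one coefficient across m imputed datasets:
    pooled point estimate, and a standard error from a total variance
    combining within- and between-imputation components."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled estimate
    ubar = sum(variances) / m                              # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    t = ubar + (1.0 + 1.0 / m) * b                         # total variance
    return qbar, math.sqrt(t)

# Coefficient estimates and squared SEs from three imputed datasets (made up):
est, se = pool_rubin([1.02, 0.97, 1.05], [0.040, 0.050, 0.045])
print(est, se)
```

Note that the pooled standard error is larger than any single within-imputation one: the between-imputation spread reflects the extra uncertainty due to the missing data.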
How does the mice imputation function work?
By default, mice will use all the variables in your dataset to predict any other one. As for averaging, you need to do this after calculating your stats, not before. For instance, if you want to do a
How does the mice imputation function work? By default, mice will use all the variables in your dataset to predict any other one. As for averaging, you need to do this after calculating your stats, not before. For instance, if you want to do a linear regression, you'd do something like this: library(mice) mi <- mice(dataset) mi.reg <- with(data=mi,exp=glm(y~x+z)) mi.reg.pool <- pool(mi.reg) summary(mi.reg.pool) The summary function will show you the averaged coefficients.
How does the mice imputation function work? By default, mice will use all the variables in your dataset to predict any other one. As for averaging, you need to do this after calculating your stats, not before. For instance, if you want to do a
32,129
Use of propensity score in a case-control study
A propensity score isn't just a way of matching groups. There are other ways to use propensity scores: at its heart, it's a way to characterize the probability of being exposed given covariates. When this is adjusted for in any one of a number of ways (including matching), you theoretically break one of the conditions necessary for confounding. The problem with a case-control study is that it's very hard to calculate a true probability of exposure, for the same reason it's hard to calculate a true probability of disease: you don't have a whole cohort to work from, just an unbalanced sample. That being said, there are some articles discussing the use of propensity-score methods in case-control studies. This one might be a good place to start. The main thrust is that they're much less straightforward to use, so unless you have a credible reason to adjust using propensity scores instead of outcome-oriented approaches like including covariates in a model, it might not be worthwhile.

04/03 edit for your comment: It's not a matter of matching to the exposure or the outcome. In all matching, you're matching on covariates. The propensity score is just a way to roll all your covariates into one composite covariate, the propensity score itself. What you're doing by matching is trying to find cases and controls who had ~equal probability of being exposed for all covariates save your exposure of interest. Observe that in the SUGI paper you linked, the actual code to generate the propensity score used in the matching is the following:

PROC LOGISTIC DATA= study.contra descend;
MODEL revasc = ptage sex white mlrphecg rwmisxhr mhsmoke ... / SELECTION = STEPWISE...;
OUTPUT OUT = study.ALLPropen prob=prob;
RUN;

That code is modeling your predicted probability of having the exposure (revasc). See page 2 of that paper.
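The SAS code above fits a logistic model of exposure on covariates and outputs the fitted probabilities. The same idea can be sketched language-neutrally; this is my own illustrative code with hypothetical data, using plain gradient ascent rather than PROC LOGISTIC's stepwise selection:

```python
import math

def propensity_scores(X, exposed, lr=0.5, steps=3000):
    """Fit P(exposed | covariates) by logistic regression via plain
    gradient ascent on the log-likelihood, then return the fitted
    probabilities: one composite score per subject."""
    n, p = len(X), len(X[0])
    w = [0.0] * (p + 1)                    # intercept + one weight per covariate
    for _ in range(steps):
        grad = [0.0] * (p + 1)
        for xi, ti in zip(X, exposed):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = ti - 1.0 / (1.0 + math.exp(-z))   # observed minus predicted
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    def score(x):
        z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))
        return 1.0 / (1.0 + math.exp(-z))
    return [score(x) for x in X]

# Hypothetical data: one covariate that strongly predicts exposure.
X = [[0.0], [0.1], [0.2], [0.8], [0.9], [1.0]]
exposed = [0, 0, 0, 1, 1, 1]
ps = propensity_scores(X, exposed)
```

The returned scores are the single composite covariate one would then match or adjust on.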
Use of propensity score in a case-control study
You can implement a PS in a case-cohort design, but not in a traditional case-control design, due in part to the reverse logic (relative to the cohort design) of the case-control design.
Understanding and applying sentiment analysis
Try SentiStrength which performs well compared to similar algorithms, and the associated research papers. Discussion of other tools and methods can be found here and here.
Understanding and applying sentiment analysis
I have the impression that much of what is being done here is extremely heuristic. In fact, most people seem to apply this to the <120 chars of Twitter statements. Probably the results (while not being computed this way) aren't much better than counting "positive" and "negative" words with a little position information ("A better than B" = positive for A, negative for B). When you then see companies buying a full Twitter feed (that's how many mbit per second?) and claiming to do sentiment analysis on it, this seriously makes me wonder if there is any statistical validity here. No wonder, e.g., Yahoo failed badly at predicting the pre-elections for South Carolina: http://www.technologyreview.com/web/39487/ People are way too proud of, and keen on, just being able to process that amount of data at all; they completely seem to neglect properly validating their performance. Sorry to be this pessimistic about the state of the art.
Does it make sense to measure recall in recommender systems?
In the case of a "top-N" recommender system, it is helpful to construct an "unbiased" test data set (e.g., by adding a thousand random unwatched/unrated movies to the list of watched movies from the holdout data set for a given user), and then scoring the resulting test data set using a model. Once this is done for a bunch of users, one can then calculate the "precision vs recall" curve and the "recall-at-N vs N" curve (as well as sensitivity/specificity and lift curves), which can be used to judge the quality of a given model. The paper Performance of Recommender Algorithms on Top-N Recommendation Tasks by Cremonesi et al. has more details. If a given model includes time dynamics, then the split between training and test should be done along the time dimension (not entirely randomly).
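The protocol described above can be sketched as follows. This is my own illustrative Python code (data layout and function names are hypothetical, not from any recommender library): each user's held-out item is ranked against randomly sampled unwatched items, and recall-at-N is the fraction of held-out items landing in the top N.

```python
import random

def recall_at_n(score, users, all_items, n_rec, n_random=1000, seed=0):
    """Cremonesi-style recall@N: rank each user's held-out item against
    n_random randomly chosen unwatched items; count a hit if it lands
    in the top n_rec of that candidate list."""
    rng = random.Random(seed)
    hits = 0
    for user, watched, held_out in users:
        unwatched = [i for i in all_items if i not in watched and i != held_out]
        candidates = rng.sample(unwatched, min(n_random, len(unwatched)))
        candidates.append(held_out)
        ranked = sorted(candidates, key=lambda i: score(user, i), reverse=True)
        if held_out in ranked[:n_rec]:
            hits += 1
    return hits / len(users)

# Toy check with a scorer that always prefers the held-out item.
users = [("u1", {1, 2, 3}, 4)]
oracle = lambda user, item: 1.0 if item == 4 else 0.0
r = recall_at_n(oracle, users, list(range(50)), n_rec=10)
```

Sweeping `n_rec` over a range of values yields the "recall-at-N vs N" curve mentioned above.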
Does it make sense to measure recall in recommender systems?
Most of the time recall does not produce a result that can be evaluated in absolute terms. You should use the recall value to compare one algorithm with another. If algorithm A has a recall value of 0.2 (as in your example), it is difficult to interpret what this value means on its own. However, if another algorithm B has a recall value of 0.15 (given the same experimental setup), then you can conclude that algorithm A performs better than algorithm B with respect to recall. Mean Absolute Error (MAE) is not like this; it can be interpreted by itself.
Does it make sense to measure recall in recommender systems?
Another way to deal with this situation is to not use all the ground-truth data. To evaluate the performance of your model, you could also compute the probability of the most likely movies, say arbitrarily 6. Then if a user has watched 50 movies, for example, you would pick the top 6, which will serve as your ground truth. Remember, what you are looking at is the interpretability of the results, and setting thresholds based on probability could be one way to obtain more meaningful recall and precision values.
What method is used to calculate confidence intervals in R's MASS package function confint.glm?
To summarise the above comments for posterity: the confidence interval is known as the "profile likelihood confidence interval". An explanation of the method is given by Stryhn and Christensen, and in Venables and Ripley's MASS book, §8.4, pp. 220-221. It has weaker assumptions than the better-known Wald method, but requires more computation.
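confint.glm's profiling is internal to MASS, but the underlying likelihood-ratio inversion can be illustrated on a one-parameter model, where no nuisance parameters need re-maximizing. A hedged sketch (my own Python, grid search on a binomial proportion):

```python
import math

def loglik(p, k, n):
    """Binomial log-likelihood for k successes out of n (constant term dropped)."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def profile_ci(k, n, chi2_crit=3.841, grid=10_000):
    """Invert the likelihood-ratio test: keep every p with
    2 * (logL(p_hat) - logL(p)) <= chi2_crit (95% critical value, 1 df)."""
    p_hat = k / n
    cutoff = loglik(p_hat, k, n) - chi2_crit / 2.0
    inside = [i / grid for i in range(1, grid) if loglik(i / grid, k, n) >= cutoff]
    return min(inside), max(inside)

lo, hi = profile_ci(k=8, n=20)
```

For a multi-parameter GLM, the profile step additionally re-maximizes the likelihood over the other coefficients at each fixed value of the coefficient of interest, which is where the extra computation comes from.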
What method is used to calculate confidence intervals in R's MASS package function confint.glm?
See this document for a pretty good explanation of how profile confidence intervals are determined: http://www.math.umt.edu/patterson/ProfileLikelihoodCI.pdf
What is the difference between the MCD and the MVE estimators?
First off, it is easier to start by answering your question in the univariate case, because then both estimators have an explicit solution as a function of the series of sorted observations $x_{(1)}\leq x_{(2)}\leq\ldots\leq x_{(n-1)}\leq x_{(n)}$. The univariate version of the MVE is also known as the shorth estimator and is the solution to $\underset{1\leq i\leq(n-h+1)}{\arg\min}\;\;\;x_{(i+h-1)}-x_{(i)}\;\;\;[1]$ The univariate version of the MCD is also known as the truncated likelihood estimator and is the solution to: $\underset{1\leq i\leq(n-h+1)}{\arg\min}\;\;\;\displaystyle\frac{1}{h}\displaystyle\sum_{j=i}^{i+h-1}x_{(j)}^2-\left(\frac{1}{h}\displaystyle\sum_{j=i}^{i+h-1}x_{(j)}\right)^2\;\;\;[2]$ Now, for symmetrical distributions, the functional corresponding to $[2]$ is proportional to that corresponding to $[1]$ (the likelihood of an elliptical distribution computed over the $h$ most central observations). You will find more details on this in this very clear paper: Maxbias Curves of Robust Location Estimators based on Subranges, Croux and Haesbroeck, Metrika, 2001. So these estimators are solutions to different problems; with the definitions above it is easy to show that the solution to equation [1] does in general not equal the solution to equation [2]. Turning to the multivariate setting now (when $p$, the number of variables, is greater than 1): the MVE looks for the ellipsoid through $p+1$ data points that contains $h$ data points and has the smallest volume (this is just the multivariate generalization, to elliptical distributions, of the range for uni-modal distributions on the line). The MCD looks for the ellipsoid that contains $h$ data points and has the smallest volume. By analogy with the univariate setting, in the case of the MCD the ellipsoid does not in general pass through $p+1$ data points (just as the mean does not in general correspond to any observed value in the sample). Note that these remarks concern the estimators (to which the O.P. specifically refers), not the differences between the respective algorithms (FastMVE and FastMCD).
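The univariate criteria [1] and [2] are easy to brute-force, and a small example shows the two solutions need not coincide. This is my own illustrative Python sketch (function names hypothetical), constructed so that the shortest window (MVE) and the smallest-variance window (MCD) disagree:

```python
def shorth_interval(xs, h):
    """Univariate MVE, eq. [1]: the shortest window covering h sorted points."""
    xs = sorted(xs)
    best = min(range(len(xs) - h + 1), key=lambda i: xs[i + h - 1] - xs[i])
    return xs[best:best + h]

def mcd_window(xs, h):
    """Univariate MCD, eq. [2]: the h-point window with smallest variance."""
    xs = sorted(xs)
    def var(w):
        m = sum(w) / h
        return sum(x * x for x in w) / h - m * m
    best = min(range(len(xs) - h + 1), key=lambda i: var(xs[i:i + h]))
    return xs[best:best + h]

# An asymmetric configuration where the two criteria disagree:
# the window [0, 0, 0.9] has the shortest range (0.9), but the window
# [2, 2.5, 3] has the smallest variance despite its larger range (1.0).
data = [0.0, 0.0, 0.9, 2.0, 2.5, 3.0]
mve_win = shorth_interval(data, h=3)
mcd_win = mcd_window(data, h=3)
```

The point of the construction: the range ignores how the points are spread inside the window, while the variance rewards evenly spread points, so a slightly wider but more regular window can win under [2] while losing under [1].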
What is the difference between the MCD and the MVE estimators?
Setting: We are given points $\{x_i\}_{i = 1}^n$, each lying in $\mathbb{R}^p$. We set $h$ to be a number between $n/2$ and $n$, exclusive. MVE: The MVE seeks an ellipsoid of minimal volume with two constraints: (1) the ellipsoid must contain $h$ points in its interior union its boundary, and (2) the ellipsoid must contain at least $p+1$ points on its boundary. MCD: The MCD is mathematically equivalent to finding an ellipsoid of minimal volume with only one constraint, namely that the ellipsoid must contain $h$ points in its interior union its boundary. These are indeed different problems, that is, not mathematically equivalent, as shown by the example in @user603's answer. To see the equivalence of the MCD with finding the ellipsoid of minimal volume that covers at least $h$ of the given $n$ points, first note that we can write an ellipsoid as $E = \{x: (x-c)^t Q (x-c) \leq 1\}$, where $c$ is the center point and $Q$ a positive definite matrix. Next, vol$(E) = a \det(Q)^{-.5} = a \det(Q^{-1})^{.5}$ for some constant $a$ (e.g., see this paper). So minimizing the volume of $E$ corresponds to minimizing the determinant of $\Sigma := Q^{-1}$, which is the covariance matrix of the Gaussian described by that ellipsoid. For details of this calculation: write $Q = UDU^*$ for unitary $U$ and diagonal $D$ with positive entries. We can see that $UD^{.5}U^*$ indeed equals $Q^{.5}$. Further, since $Q$ is positive definite, $Q^{.5}$ is invertible. It follows that $\det(Q)^{-.5} = \det(Q^{-1})^{.5}$. Setting $\Sigma = Q^{-1}$ and $\mu = c$ (the center of the ellipsoid), we can see that the ellipsoid $E$ is the one characterized by the Gaussian with mean $\mu$ and covariance $\Sigma$. Lastly, vol$(E) = a\det(\Sigma)^{.5}$.
How do I sample without replacement using a sampling-with-replacement function?
You describe a sequence of rejection samples. By definition, after selecting $k$ objects in the process of obtaining a weighted sample of $m$ objects without replacement from a list of $n$ objects, you draw one of the remaining $n-k$ objects according to their relative weights. In your description of the alternative sampling scheme, at the same stage you will be drawing from all $n$ objects according to their relative weights but you will throw back any object equal to one of the first $k$ and try again: that's the rejection. So all you have to convince yourself of is that the chance of drawing one of the remaining $n-k$ objects is the same in either case. If this equivalence isn't perfectly clear, there's a straightforward mathematical demonstration. Let the weights of the first $k$ objects be $w_1, \ldots, w_k$ and let the weights of the remaining $n-k$ objects be $w_{k+1}, \ldots, w_n$. Write $w = w_1 + w_2 + \cdots + w_n$ and let $w_{(k)} = w_1 + w_2 + \cdots + w_k$. The chance of drawing object $i$ ($k \lt i \le n$) without replacement from the $n-k$ remaining objects is of course $$\frac{w_i}{w_{k+1} + w_{k+2} + \cdots + w_n} =\frac{w_i}{w-w_{(k)}}.$$ In the alternative (rejection) scheme, the chance of drawing object $i$ equals $$\sum_{a=0}^\infty \left[\left(\frac{w_{(k)}}{w}\right)^a \frac{w_i}{w}\right] =\frac{1}{1 - w_{(k)}/w}\frac{w_i}{w} = \frac{w_i}{w-w_{(k)}}$$ exactly as before. The sum arises by partitioning the event by the number of rejections ($a$) that precede drawing element $i$; its terms give the chance of a sequence of $a$ draws from the first $k$ elements followed by drawing the $i^\text{th}$ element: because these draws are independent, their chances multiply. It forms a geometric series which is elementary to put into closed form (the first equality). The second equality is a trivial algebraic reduction.
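The rejection scheme is straightforward to implement given only a with-replacement weighted draw. A minimal illustrative sketch in Python (my own code; `weighted_draw` stands in for whatever with-replacement primitive is available):

```python
import random

def weighted_draw(items, weights, rng):
    """The only primitive assumed: one weighted draw WITH replacement."""
    r = rng.random() * sum(weights)
    acc = 0.0
    for item, w in zip(items, weights):
        acc += w
        if r < acc:
            return item
    return items[-1]

def sample_without_replacement(items, weights, m, seed=0):
    """Weighted sampling without replacement by rejection: keep drawing
    with replacement and throw back anything already selected.
    Assumes m <= number of distinct items."""
    rng = random.Random(seed)
    chosen, seen = [], set()
    while len(chosen) < m:
        x = weighted_draw(items, weights, rng)
        if x not in seen:        # otherwise: reject and redraw
            seen.add(x)
            chosen.append(x)
    return chosen

pick = sample_without_replacement(["a", "b", "c", "d"], [1, 2, 3, 4], m=3)
```

Per the geometric-series argument above, each accepted draw has exactly the without-replacement probabilities, no matter how many rejections precede it.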
How do I sample without replacement using a sampling-with-replacement function?
For this to be worth doing instead of using the sample function in R, the vector you're sampling from needs to be about 1e7 or greater in size and the sample has to be relatively small, say 1e2. If the sample you want is much bigger, or the one you're sampling from is smaller, sample will be faster. But once that tipping point is reached, a method like Juan describes will be much faster.
Generating data to follow given variogram
You can use sequential simulation to generate realizations of a random field that has the covariance structure given in the variogram model. In R this can be done using gstat; see demo(ugsim) and demo(uisim) among the R code examples that ship with gstat.
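For intuition about what such a simulation produces, here is a small self-contained sketch in Python rather than gstat. Note it uses the simpler Cholesky (LU-decomposition) approach rather than sequential simulation, and an illustrative exponential variogram model, $\gamma(h) = \text{sill}\,(1 - e^{-h/\text{range}})$, whose implied covariance is $C(h) = \text{sill}\, e^{-h/\text{range}}$; the grid and parameters are made up.

```python
import math
import random

def exp_covariance(h, sill=1.0, rng_par=10.0):
    # covariance implied by an exponential variogram: C(h) = sill * exp(-h/range)
    return sill * math.exp(-h / rng_par)

def cholesky(A):
    """Plain Cholesky factorization A = L L^T for a small SPD matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

# 1-D grid of locations; build the covariance matrix from the variogram model
xs = [float(i) for i in range(15)]
C = [[exp_covariance(abs(a - b)) for b in xs] for a in xs]
L = cholesky(C)

rng = random.Random(1)
z = [rng.gauss(0.0, 1.0) for _ in xs]
# correlated field: one realization with the requested covariance structure
field = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(xs))]
print([round(v, 3) for v in field[:5]])
```

Averaging the empirical variogram over many such realizations recovers the model you started from.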
A good book with equal stress on theory and math
You will find everything non-Bayesian that you asked about in Frank Harrell's Regression Modeling Strategies. I would leave Bayesian recommendations to more knowledgeable folks (although I do have Gelman, Carlin, Stern and Rubin, as well as Gilks, Richardson and Spiegelhalter, on my bookshelf). There should be a few Bayesian biostat books on the market. Update: McCullagh and Nelder (1989) is a classic book on GLMs, of course. It was groundbreaking for its time, but I find it rather boring, frankly. Besides, it does not cover later additions like residual diagnostics, zero-inflated models, or multilevel/hierarchical extensions. Hardin and Hilbe (2007) cover some of this newer material in good detail, with practical examples in Stata (where GLMs and extensions are very well implemented; Hardin used to work at Stata Corp. writing many of these commands, as well as contributing to the sandwich estimator).
A good book with equal stress on theory and math
I would recommend the following two books: Statistical Methods for Bioinformatics and The Elements of Statistical Learning.
Derive P(C | A+B) from Cox's two rules
I am not sure what Jaynes considers to be analogous to $P(A\cup B \mid C) = P(A \mid C) + P(B \mid C) - P(AB \mid C)$ but students have cheerfully used one or more of the following on homework and exams: $$ \begin{align*} P(A\mid B \cup C) &= P(A \mid B) + P(C)\\ P(A\mid B \cup C) &= P(A \mid B) + P(C) - P(AC)\\ P(A\mid B \cup C) &= P(A \mid B) + P(A \mid C),\\ P(A\mid B \cup C) &= P(A \mid B) + P(A \mid C) - P(A \mid BC),\\ P(A\mid B \cup C) &= P(AB \mid B\cup C) + P(AC \mid B \cup C) - P(ABC \mid B\cup C). \end{align*} $$ Do you think any of these are correct? Note: Changing my (now-deleted) comment into an addendum to my answer, the rules permit the following manipulations: $P(AB \mid C) = P(A\mid C)P(B \mid AC); P(A \mid C) = 1 - P(A^c \mid C).$ The first introduces conditioning on a subset of $C$ but does not eliminate conditioning on $C$. The second also does not eliminate conditioning on $C$. So any manipulations of $P(A\mid B \cup C)$ will always include terms of the form $P(X\mid B \cup C)$, and $P(A\mid B \cup C)$ cannot be expressed in terms of $P(A \mid B)$, $P(A \mid C)$, $P(A \mid BC)$, etc. without including probabilities conditioned on $B \cup C$ also.
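None of the first four proposals survives a concrete check, while the fifth is valid: it is just inclusion-exclusion carried out entirely inside the conditioning event $B \cup C$. Here is a quick illustration in Python on a toy uniform space (the events are my own arbitrary choices):

```python
from fractions import Fraction

# toy uniform probability space on {0,...,7}; events chosen only for illustration
omega = set(range(8))
A, B, C = {0, 1, 3}, {1, 2, 3}, {3, 4, 5}

def P(E):               # unconditional probability
    return Fraction(len(E), len(omega))

def Pc(E, F):           # conditional probability P(E | F)
    return Fraction(len(E & F), len(F))

lhs = Pc(A, B | C)                                  # P(A | B u C) = 2/5 here
rhs1 = Pc(A, B) + P(C)                              # first proposal: exceeds 1!
rhs4 = Pc(A, B) + Pc(A, C) - Pc(A, B & C)           # fourth proposal: gives 0 here
rhs5 = Pc(A & B, B | C) + Pc(A & C, B | C) - Pc(A & B & C, B | C)
print(lhs, rhs1, rhs4, rhs5)
```

Only rhs5 matches lhs; the first proposal is not even a probability.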
Derive P(C | A+B) from Cox's two rules
For problems like this one, it is sometimes helpful to think less about the formulas and instead draw a picture (in this case, a Venn diagram). Now stare at the picture and try to visualize what $P(C | A \cup B)$ represents. If you can pick it out of the picture, then you will see that there are several valid ways to write it (two ways jump to my mind off the bat). If you're still stuck, try going back to the usual proof of the ordinary general addition rule for hints. Remember: a conditional probability concentrates all of its probability mass on the conditioning event (in this case, $A \cup B$). The idea is to focus on the locations where $C$ intersects that event. By the way, the R code for the figure is
library(venneuler)
vd <- venneuler(c(A=0.2, B=0.2, C=0.2, "A&B"=0.04, "A&C"=0.04, "B&C"=0.04, "A&B&C"=0.008))
plot(vd)
Derive P(C | A+B) from Cox's two rules
You can't get rid of the tautology. I think you are supposed to just add the tautology and apply the product rule and then the sum rule, and you get: $$p(C|(A+B)W) = \frac{p(CA|W)+p(CB|W)-p(CAB|W)}{p(A|W)+p(B|W)-p(AB|W)}$$ where all the probabilities are expressed as posteriors to the tautology. I think this is the closest equivalent to the sum rule that you can get for this problem, so that would be the solution. Note that if you add the condition $p(AB|W)=0$ (i.e. $A$ and $B$ are mutually exclusive) you get the same expression that you have to prove in problem 2.2, which would indicate this solution is most probably correct (by Bayesian induction ;).
Derive P(C | A+B) from Cox's two rules
Bayes Theorem gives $$ p(C\mid A+B) = \frac{p(A+B\mid C)\;p(C)}{p(A+B)} \, . $$ Now, using the conditional and unconditional sum rules, we have $$ p(C\mid A+B) = \frac{p(A\mid C)+p(B\mid C)-p(AB\mid C)}{p(A)+p(B)-p(AB)}\;p(C) \, . $$ Of course, the question is whether or not this formula would be "analogous enough" for Jaynes.
Derive P(C | A+B) from Cox's two rules
Following only Cox's rules, taking $W=X$ as in Jaynes's book, we have the solution from MastermindX: $$ p(C|(A+B)X) = \dfrac{p(C(A+B)|X)}{p((A+B)|X)} \qquad \text{(product rule)}$$ $$ = \dfrac{p((CA+CB)|X)}{p((A+B)|X)} \qquad \text{(distributive property of the conjunction)}$$ $$ = \dfrac{p(CA|X)+p(CB|X)-p(CAB|X)}{p((A+B)|X)} \qquad \text{(sum rule on numerator)}$$ $$ = \dfrac{p(CA|X)+p(CB|X)-p(CAB|X)}{p(A|X)+p(B|X)-p(AB|X)} \qquad \text{(sum rule on denominator)}$$ $$ = \dfrac{p(A|X)p(C|AX)+p(B|X)p(C|BX)-p(AB|X)p(C|ABX)}{p(A|X)+p(B|X)-p(AB|X)} \qquad \text{(product rule on numerator)}$$ The solution for Ex. 2.1 follows the intent of Chapter 2's treatment of the product rule, that "we first seek a consistent rule relating the plausibility of the logical product $AB$ to the plausibility of $A$ and $B$ separately" (page 24). Moreover, for mutually exclusive propositions $A$ and $B$, this is equal to Eq. (2.67) in Ex. 2.2 if we take $\{A_{1} = A$, $A_{2} = B\}$, as also indicated by MastermindX. Notice that Jaynes himself does not get rid of the additional information $X$ in Eq. (2.67), so I believe this is the expected solution for both exercises.
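The closed form is easy to sanity-check numerically. A small Python sketch, using a finite uniform space to play the role of the background information $X$ (the events are arbitrary illustrations):

```python
from fractions import Fraction

# finite uniform space standing in for the background information X
X = set(range(6))
A, B, C = {0, 1}, {1, 2}, {2, 3}

def p(E):
    return Fraction(len(E), len(X))

lhs = Fraction(len(C & (A | B)), len(A | B))     # p(C | (A+B)X)
num = p(C & A) + p(C & B) - p(C & A & B)
den = p(A) + p(B) - p(A & B)
print(lhs, num / den)
```

Both sides come out equal, as the derivation above requires.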
Are t tests of coefficients in multiple regression post hoc tests?
General discussion of post hoc analyses
Post hoc analyses are typically contrasted with a priori analyses: post hoc analyses are, in some sense, performed after seeing the data, while a priori analyses are set out before seeing the data. In this sense the terms map closely onto the concepts of exploratory versus confirmatory data analysis. Arguably, the prototypical post hoc analysis involves all pairwise comparisons of means in the context of an ANOVA where one of the independent variables has three or more levels. SPSS, for example, has a post hoc button specifically designed for running this form of analysis. However, while pairwise comparison of group means is the prototypical example, any analysis that involves running statistical tests after seeing the data could be described as post hoc. When thinking about the general label "post hoc", it is helpful to think about its purpose. It is used as a caution about applying standard inferential procedures to the problem at hand, either because many significance tests are being run, or because, even if only a few significance tests are performed, there are many that could have been performed had the data been different. Thus, post hoc statistical tests typically adjust how inferences are made in order to control Type I error rates over both the analyses that are performed and those that could have been performed had the data been different.
The term "post hoc" applied to regression coefficients
Thus, the examination of individual significance tests for regression coefficients could be labelled post hoc if the examination is done after seeing the data. Also, if you wanted to control your family-wise error rate and you saw the set of significance tests associated with the regression coefficients as a family, then you could apply something like a Bonferroni adjustment to the individual significance tests.
That said, many researchers might interpret an overall model more holistically, and it is ultimately up to the reader how they choose to interpret the significance tests associated with individual coefficients. I would not describe the examination of a set of significance tests for regression coefficients as "multiple comparisons", because I think the term "comparisons" pertains more to comparing group means. I'd prefer a term like "multiple significance tests".
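As a concrete illustration of the adjustment mentioned above, here is a minimal Python sketch of a Bonferroni correction applied to a family of per-coefficient p-values (the p-values themselves are hypothetical):

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni: scale each p-value by the size of the family of tests,
    then compare the adjusted values to the nominal alpha."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    reject = [p_adj < alpha for p_adj in adjusted]
    return adjusted, reject

# hypothetical per-coefficient p-values from a fitted multiple regression
p_vals = [0.004, 0.030, 0.250, 0.012]
adj, reject = bonferroni(p_vals)
print(adj)
print(reject)   # -> [True, False, False, True]
```

Note how the middle two coefficients, "significant" at 0.05 individually or nearly so, no longer are once the family of four tests is accounted for.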
Are t tests of coefficients in multiple regression post hoc tests?
It depends on your state of knowledge before the study. If you went into the study knowing that there were variables that were highly likely to be "significant" predictors of the outcome, and you were mostly interested in the influence of some new measure, say "M1", then the F-test is basically uninteresting and you are primarily interested in the relationship of M1 to the outcome. Then the relationship's features and statistical measures of credibility are not "post hoc", ... they are the primary study question.
Shannon entropy for non-stationary and non-linear signal
Shannon entropy is a concept related to the distribution of a random variable, not to any particular realization of the r.v. The OP talks about a "non-stationary" signal. This implies that the OP has available a sequence of signals, which can be viewed as a realized sequence of a stochastic process, that is, a sequence of random variables. If the process were (strictly) stationary, then each r.v. would have the same distribution, hence the same entropy, and the specific realization of the process (the data) could be used to form some estimate of this common entropy. If the stochastic process is not strictly stationary, then each element-random variable of the process may have a different entropy. In that case the theoretical validity of the entropy concept remains, but if non-stationarity is left totally unrestricted, then we do not have a sufficient amount of data to estimate these different entropies. This is a general issue with non-stationary stochastic processes; it affects estimation attempts of all measures, characteristics, moments, statistics, etc. related to such a process. If we do not somehow restrict the memory and the time-heterogeneity of the process, we won't have enough data to say anything about it. So any question about Shannon entropy and non-stationary data should include the assumed restrictions on non-stationarity (assumed based on theory and/or on data assessment) in order to be actually answerable.
Shannon entropy for non-stationary and non-linear signal
I would examine the windowed entropy and empirical PDF/CDF to see how rapidly the signal is changing, and whether it is an issue.
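A sketch of that idea in Python (window length, bin count, and the toy signal are all illustrative): estimate the entropy of each non-overlapping window from a histogram and watch how it drifts across windows.

```python
import math
import random
from collections import Counter

def window_entropy(signal, window, n_bins=8):
    """Empirical Shannon entropy (in bits) of each non-overlapping window,
    estimated from a fixed histogram over the signal's overall range."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins or 1.0          # guard against a constant signal
    entropies = []
    for start in range(0, len(signal) - window + 1, window):
        chunk = signal[start:start + window]
        bins = Counter(min(int((x - lo) / width), n_bins - 1) for x in chunk)
        n = len(chunk)
        entropies.append(-sum(c / n * math.log2(c / n) for c in bins.values()))
    return entropies

# toy non-stationary signal: Gaussian noise whose scale grows over time
rng = random.Random(2)
sig = [rng.gauss(0.0, 1.0 + t / 200.0) for t in range(600)]
ents = window_entropy(sig, 200)
print([round(h, 2) for h in ents])
```

If the windowed estimates move systematically rather than fluctuating around a common value, a single whole-signal entropy estimate is suspect.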
Propagation of error using 2nd-order Taylor series
Assuming $Y=g(X)$, we can derive the approximate variance of $Y$ using the second-order Taylor expansion of $g(X)$ about $\mu_X=\mathbf{E}[X]$ as follows: $$\begin{eqnarray*} \mathbf{Var}[Y] &=& \mathbf{Var}[g(X)]\\ &\approx& \mathbf{Var}[g(\mu_X)+g'(\mu_X)(X-\mu_X)+\frac{1}{2}g''(\mu_X)(X-\mu_X)^2]\\ &=& (g'(\mu_X))^2\sigma_{X}^{2}+\frac{1}{4}(g''(\mu_X))^2\mathbf{Var}[(X-\mu_X)^2]\\ & & +g'(\mu_X)g''(\mu_X)\mathbf{Cov}[X-\mu_X,(X-\mu_X)^2]\\ &=& (g'(\mu_X))^2\sigma_{X}^{2}+\frac{1}{4}(g''(\mu_X))^2\mathbf{E}[(X-\mu_X)^4-\sigma_{X}^{4}]\\ & & +g'(\mu_X)g''(\mu_X)\left(\mathbf{E}(X^3)-3\mu_X(\sigma_{X}^{2}+\mu_{X}^{2})+2\mu_{X}^{3}\right)\\ &=& (g'(\mu_X))^2\sigma_{X}^{2}\\ & & +\frac{1}{4}(g''(\mu_X))^2\left(\mathbf{E}[X^4]-4\mu_X\mathbf{E}[X^3]+6\mu_{X}^{2}(\sigma_{X}^{2}+\mu_{X}^{2})-3\mu_{X}^{4}-\sigma_{X}^{4}\right)\\ & & +g'(\mu_X)g''(\mu_X)\left(\mathbf{E}(X^3)-3\mu_X(\sigma_{X}^{2}+\mu_{X}^{2})+2\mu_{X}^{3}\right)\\ \end{eqnarray*}$$ As @whuber pointed out in the comments, this can be cleaned up a bit by using the third and fourth central moments of $X$. A central moment is defined as $\mu_k=\mathbf{E}[(X-\mu_X)^k]$. Notice that $\sigma_{X}^{2}=\mu_2$. Using this new notation, we have that $$\mathbf{Var}[Y]\approx(g'(\mu_X))^2\sigma_{X}^{2}+g'(\mu_X)g''(\mu_X)\mu_3+\frac{1}{4}(g''(\mu_X))^2(\mu_4-\sigma_{X}^{4})$$
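The final formula is easy to sanity-check when the moments are known in closed form. For $Y=e^X$ with $X\sim N(\mu,\sigma^2)$ we have $\mu_3=0$ and $\mu_4=3\sigma^4$, and the exact (lognormal) variance is available, so both orders of approximation can be compared. A Python sketch (the helper name is mine):

```python
import math

def delta2_var(sigma2, g1, g2, mu3, mu4):
    """Second-order delta-method variance:
    (g')^2 s2 + g' g'' mu3 + (1/4) (g'')^2 (mu4 - s2^2)."""
    return g1 ** 2 * sigma2 + g1 * g2 * mu3 + 0.25 * g2 ** 2 * (mu4 - sigma2 ** 2)

# check on Y = exp(X) with X ~ N(mu, sigma^2): the exact lognormal variance
# is known, and a normal X has central moments mu3 = 0 and mu4 = 3 sigma^4
mu, s2 = 0.0, 0.1 ** 2
g1 = g2 = math.exp(mu)                    # both derivatives of exp, at mu
approx1 = g1 ** 2 * s2                    # usual first-order delta method
approx2 = delta2_var(s2, g1, g2, mu3=0.0, mu4=3.0 * s2 ** 2)
exact = math.exp(2.0 * mu + s2) * (math.exp(s2) - 1.0)
print(approx1, approx2, exact)
```

The second-order term moves the approximation closer to the exact value, as expected.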
Plotting a discriminant as line on scatterplot
OK, since nobody answered, I think that, after some experimentation, I can do it myself. Following discriminant analysis guidelines, let T be the whole cloud's (data X, of 2 variables) sscp matrix (of deviations from the cloud's centre), and let W be the pooled within-cluster sscp matrix (of deviations from a cluster centre). B=T-W is the between-cluster sscp matrix. Singular value decomposition of inv(W)B yields U (left eigenvectors), S (diagonal matrix of eigenvalues), and V (right eigenvectors). In my example of 2 clusters, only the 1st eigenvalue is nonzero (which means that there is only one discriminant), and so we use only the 1st eigenvector (column) of U: U(1). Now, XU(1) are the sought-for raw discriminant scores. To show the discriminant as a line tiled with those scores, multiply the scores by the cosine between the axis and the discriminant (which is the corresponding element of the eigenvector U(1)), just as was done with the principal component above. The resulting plot is below.
Comparing backtesting returns with real trading returns
Letting $\psi_1, \psi_2$ be the sample Sharpe ratios of the two periods, the difference $\Delta \psi = \psi_1 - \psi_2$ is asymptotically normal. Under the null hypothesis that the population Sharpe ratios in the two periods are equal, the difference is asymptotically mean zero. The standard deviation is approximately $\sqrt{\frac{1}{120} + \frac{1}{n}}$, when your Sharpe ratios are 'annualized' to monthly terms. So the simplest test would be to reject the null if $|\Delta\psi|> 1.96 \sqrt{\frac{1}{120} + \frac{1}{n}}$. My answer here is just a realization of @drnexus' answer to this question.
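A hedged sketch of this rule in Python (the Sharpe values and $n$ below are invented; the 1/120 term is taken from the formula as stated above):

```python
import math

def sharpe_diff_test(psi1, psi2, n, z=1.96):
    """Reject equality of the two (monthly) Sharpe ratios when the
    difference exceeds z standard errors, with SE ~ sqrt(1/120 + 1/n)."""
    se = math.sqrt(1.0 / 120 + 1.0 / n)
    return abs(psi1 - psi2) > z * se

# Hypothetical numbers: backtest Sharpe 1.1, live Sharpe 0.4, n = 36 months.
reject = sharpe_diff_test(1.1, 0.4, 36)
```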
32,157
Who first developed the idea of "sampling distributions"?
If you refer to the term "sampling distribution," information is hard to find. But the concept is crucial to Jakob Bernoulli's (Switzerland) recognition that there is a distinction between that distribution and the population distribution itself, leading to his formulation (and proof) of a Law of Large Numbers. (The Wikipedia article attributes the first statement of such a law to Cardano (Italy), who--among many other things--was an avid gambler and mathematician in the first half of the 16th century.) Bernoulli's seminal work was published posthumously in 1713 as his ars conjectandi but likely was developed in the late 1600's in response to pioneering work by Pascal and Fermat (France) as recounted by Christian Huyghens (Netherlands) in an influential (and amazingly brief) 1657 textbook, de ratiociniis in ludo aleae. For more background you can read a brief accounting of some of this history I wrote in the form of a review of a Keith Devlin book, The Unfinished Game.
32,158
Treating 'Don't know/Refused' levels of categorical variables
I was just wondering about exactly the same question when analyzing the latest National Hospital Discharge Survey data. Several variables have substantial missing values, such as marital status and type of procedure. This issue came to my attention because these categories showed up with strong (and significant) effects in most logistic regression analyses I was running. One is inclined to wonder why a missing code is given. In the case of marital status, for instance, it is plausible that failure to provide this information could be linked to important factors such as socioeconomic status or type of disease. In your case of high blood pressure, we should ask why would the value not be known or refused? This could be related to practices at the institution (perhaps reflecting lax procedures) or even to the individuals (such as religious beliefs). Those characteristics in turn could be associated with diabetes. Therefore, it seems prudent to continue as you have, rather than to code these values as missing (thereby excluding them from analysis altogether) or attempting to impute the values (which effectively masks the information they provide and could bias the results). It's really not any more difficult to do: you merely have to make sure this variable is treated as categorical and you'll get one more coefficient in the regression output. Furthermore, I suspect the BRFSS datasets are large enough that you don't have to worry about power.
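The "one more coefficient" point is just dummy coding with an extra level. A hypothetical sketch in plain Python (the level names and responses are invented):

```python
# Keeping "Don't know/Refused" as its own level means the categorical
# predictor is coded with one extra indicator column -- hence one more
# regression coefficient -- instead of those rows being dropped.
levels = ["no", "yes", "dk/refused"]          # "no" is the reference level
responses = ["yes", "no", "dk/refused", "yes", "no"]

def dummy_code(values, levels):
    ref, *others = levels                     # reference level gets no column
    return [[1.0 if v == lvl else 0.0 for lvl in others] for v in values]

X = dummy_code(responses, levels)             # columns: is_yes, is_dk_refused
```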
32,159
Treating 'Don't know/Refused' levels of categorical variables
First you have to think over whether the missing data are missing completely at random (MCAR), missing at random (MAR) or missing not at random (MNAR), as deletion (in other words, complete-case analysis) may lead to biased results. Alternatives are inverse probability weighting, multiple imputation, the full-likelihood method and doubly-robust methods. Multiple imputation with chained equations (MICE) is often the easiest way to go.
32,160
Treating 'Don't know/Refused' levels of categorical variables
Do you have any reason to think that study subjects with diabetes were more likely or less likely to end up with the DK/R response? If not (and I'd be pretty surprised to find out you did), including this predictor in the model w/o excluding these cases will result in noise. That is, you'll end up with less precision in your assessment of how "yes" vs. "no" influences the estimated probability of diabetes (because you'll be trying to model the influence of either "yes" or "no" vs. random DK/R responses as opposed to just "yes" vs. "no"). The most straightforward option is to exclude the cases with DK/R responses. Assuming that their "yes/no" responses were indeed missing at random, excluding them will not bias your estimate of the influence of "yes" vs. "no." That approach, however, will reduce your sample size and thus reduce statistical power with regard to the remaining predictors. If you have a lot of DK/R on this variable, you might want to impute "yes"/"no" responses by multiple imputation (arguably the most, maybe only, defensible missing-value imputation strategy).
32,161
Is there a difference between seasonality / cyclicality / periodicity
Perhaps. Though my take could easily be construed as a bit too anal retentive: I tend to use the term seasonality as a metaphor for the 'seasons' of the year: i.e. Spring, Summer, Fall, Winter (or 'Almost Winter', Winter, 'Still Winter', and 'Construction' if you live in Pennsylvania...). In other words, I would expect a seasonal trend to have a periodicity of roughly 365 days. I tend to use the term 'cyclicality' to refer to a response, which when decomposed in frequency space has a single dominant peak. Or, a bit more generally, much as one could stare at an engine, 'cyclicality' implies a dominant cycle -- the piston moves up, and then it moves down, and then it moves up again. Numerically, I would expect low, high, low, high, low, high, etc. So two things: (1) magnitude &/or sign switches from a low to high and (2) these switches occur with a predictable frequency. This rigor naturally evaporates when talking about business cycles -- however, I often find that a dominant frequency remains, e.g. every business quarter, or every year, things are slow for the first few weeks and high pressure the last few weeks... So there is a dominant period, but it could be very different from 'seasonality' which to me implies a year. Lastly, I tend to use 'periodicity' when referring to the frequency of collecting measurements. Differing from cyclicality, the term 'periodicity' for me implies no expectation for the magnitude or sign of the data collected. But this is just my $0.02. And I'm just a stat student -- take from this what you will.
32,162
Is there a difference between seasonality / cyclicality / periodicity
Yes, there is a difference. A classic time series decomposition model is $$ Y = T + S + C + I, $$ where \begin{align} Y & = \text{data,} \\ T & = \text{trend,} \\ S & = \text{seasonal,} \\ C & = \text{cyclical,} \\ I & = \text{irregular (i.e. error left over).} \end{align} 'Seasonal' refers to REGULAR patterns that occur with time, e.g. oatmeal sales being higher in winter, or Starbucks coffee sales being highest at 7 a.m. These are usually very predictable. 'Cyclical' refers to longer-term patterns like business cycles. These aren't as regular as seasonality, and may involve some subjectivity in estimation. 'Periodicity' refers to the period of the seasonal component: it could be monthly, biweekly, hourly, etc. The equation above has $+$ signs, indicating an additive model. Multiplicative models are also commonly used if the seasonality is multiplicative. I took out the '*' signs in deference to comments below ;)
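A toy sketch of the additive model in Python (the series, the period, and the classical detrend-then-average-by-period estimate of $S$ are all illustrative choices of mine):

```python
import numpy as np

period = 12
t = np.arange(120)                       # 10 years of monthly data
T = 0.5 * t                              # trend
S = np.tile(3 * np.sin(2 * np.pi * np.arange(period) / period), 10)  # seasonal
y = T + S                                # Y = T + S (+ C + I, omitted here)

# Classical estimate of the seasonal component: remove a fitted linear
# trend, then average the detrended values within each calendar month.
detrended = y - np.poly1d(np.polyfit(t, y, 1))(t)
s_hat = detrended.reshape(-1, period).mean(axis=0)
s_hat -= s_hat.mean()                    # seasonal indices sum to ~0
```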
32,163
Can a fair coin test be applied to a coin that often lands on its edge?
I'm pretty sure the answer is yes, the standard binomial 'fair coin' test is still valid: if you wish to test whether two of the three probabilities of a multinomial distribution are the same but you're not interested in any hypotheses about the third probability, you can analyse the numbers of the corresponding two outcomes as if they were drawn from a binomial distribution. In fact this seems to make quite a nice exercise about sufficient statistics and conditional likelihood: You can think of this as a multinomial distribution with three possible outcomes and hence two estimable parameters (as the three probabilities must sum to 1). But you're not interested in the probability of the 'middle' outcome, so you can take this to be the nuisance parameter, and the difference between the numbers of 'top' and 'bottom' outcomes to be the parameter of interest. It's straightforward to show (using the Fisher–Neyman factorization theorem) that the numbers of 'top' and 'bottom' outcomes together form a (two-dimensional) sufficient statistic for the parameter of interest, i.e. the number of 'middle' outcomes doesn't provide any additional information about the value of the parameter of interest. The number of 'middle' outcomes is clearly a sufficient statistic for the nuisance parameter. If we condition on the latter, I think (I haven't checked properly) that the resulting conditional likelihood will end up the same as the likelihood for the binomial distribution, i.e. the coin-tossing problem.
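This can be checked concretely: discard the edge counts and run an exact two-sided binomial test on the remaining tosses. A Python sketch (the counts are invented, and binom_pvalue is hand-rolled here, not a library function):

```python
from math import comb

def binom_pvalue(k, n, p=0.5):
    """Exact two-sided binomial p-value: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in probs if q <= probs[k] * (1 + 1e-12))

# Made-up tosses: 40 'top', 30 'bottom', 30 'edge'.  Per the argument
# above, the edge count is ignored and we condition on the 70
# non-edge tosses.
top, bottom, edge = 40, 30, 30
pval = binom_pvalue(top, top + bottom)
```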
32,164
Can a fair coin test be applied to a coin that often lands on its edge?
If you frame this as a binomial problem (p, 1-p), not a multinomial problem, you'll only be able to describe the past. You won't be able to say anything about the future. Why? Your removal of the middle "edge flips" is implied in your regrouping of the data. In other words, your "data described" probability "p" of a positive result and probability "1-p" of a negative result will not apply on the next "binomial flip of the coin", because in the future you really have probabilities "x", "y", and "(1-x-y)". Edit (03/27/2011): I added the following diagram to help explain my comments below.
32,165
How can I test $H_0:\sigma^2_1=\sigma^2_2$?
For comparing variances, Wilcox suggests a percentile bootstrap method. See chapter 5.5.1 of 'Introduction to Robust Estimation and Hypothesis Testing'. This is available as comvar2 from the WRS package in R. edit: to find the amount of bootstrap differences to trim from each side for different values of $\alpha$, one would perform a Monte Carlo study, as suggested by Wilcox. I have a quick and dirty one here in Matlab (duck from thrown shoes):

randn('state',0); %to make the results replicable
alphas = [0.001,0.005,0.01,0.025,0.05,0.10,0.15,0.20,0.25,0.333];
nreps  = 4096;
nsizes = round(2.^(4:0.5:9));
nboots = 599;
cutls  = nan(numel(nsizes),numel(alphas));
for ii=1:numel(nsizes)
    n = nsizes(ii);
    imbalance = nan(nreps,1);
    for jj=1:nreps
        x1 = randn(n,1); x2 = randn(n,1);
        %make bootstrap samples
        x1b = x1(ceil(n * rand(n,nboots)));
        x2b = x2(ceil(n * rand(n,nboots)));
        %compute stdevs
        sig1 = std(x1b,1); sig2 = std(x2b,1);
        %compute difference in variances
        Dvar = (sig1.^2 - sig2.^2);
        %take the minimum of {the # < 0} and {the # > 0};
        %in (1-alpha) of the cases you want this minimum to match
        %your l number; then let u = 599 - l + 1
        imbalance(jj,1) = min(sum(Dvar < 0),sum(Dvar > 0));
    end
    imbalance = sort(imbalance);
    cutls(ii,:) = interp1(linspace(0,1,numel(imbalance)),imbalance(:)',alphas,'nearest');
end
%plot them
lh = loglog(nsizes(:),cutls + 1);
legend(lh,arrayfun(@(x)(sprintf('alpha = %g',x)),alphas,'UniformOutput',false))
ylabel('l + 1');
xlabel('sample size, n_m');

I get the rather unhelpful plot. A little bit of hackery indicates that a model of the form $l + 0.5 = \exp(5.18)\,\alpha^{0.94} n^{0.067}$ fits my Monte Carlo simulations fairly well, but they do not give the same results that Wilcox quotes in his book. You might be better served running these experiments yourself at your preferred $\alpha$.

edit: I ran this experiment again, using many more replicates ($2^{18}$) per sample size. Here's a table of the empirical values of $l$: the first column is the sample size $n$, and the remaining columns give $l$ for each value of $\alpha$ (the type I rate). (I would expect that as $n \to \infty$ we would have $l \to 599\alpha/2$.)

  n   | 0.001 0.005 0.01 0.025 0.05 0.10 0.15 0.20 0.25 0.333
 -----+------------------------------------------------------
  16  |   0     0     1    4     9    22   35   49   64   88
  23  |   0     0     1    4    10    23   37   51   66   91
  32  |   0     0     1    4    10    24   38   52   67   92
  45  |   0     0     1    5    11    25   39   54   69   94
  64  |   0     0     2    5    12    26   41   55   70   95
  91  |   0     1     2    6    13    27   42   56   71   96
 128  |   0     1     2    6    13    28   42   58   72   97
 181  |   0     1     2    6    13    28   43   58   73   98
 256  |   0     1     2    6    14    28   43   58   73   98
 362  |   0     1     2    7    14    29   44   59   74   99
 512  |   0     1     2    7    14    29   44   59   74   99
32,166
Computer game datasets
Starcraft I
- Starcraft Data Mining Project, providing some game data.
- Starcraft AI Competition: does not directly provide data, but allows you to connect a program written by you with the game. Although I did not try it, I expect that data collection is possible this way ;).
If you are generally interested in data mining + gaming, you may find the project Robocode aka Java Robot Wars interesting, where you can program a bot for a simpler environment (simpler than Starcraft) and let it battle against other bots.
32,167
Computer game datasets
- John Myles White has a dataset and analysis of Canabalt scores as posted on Twitter.
- Stats at Berkeley has a dataset for a Video Games Survey.
32,168
Simple post-stratification weights in R
Looking at the example for postStratify in the manual, you are correct: you seem to be required to give a svydesign object (though you can if needed use svrepdesign to specify it instead). The svydesign object must have ids; all the others are optional, though you will almost certainly want data to have something to work with, and you will probably want some of the others. At this stage I would suggest you ignore all those appearing after data. postStratify also needs strata, the variable to post-stratify on: the example uses apiclus1$stype which simply specifies the school type (E, M or H). It also needs population which you can either specify yourself or take from some other source: the example gives data.frame(stype=c("E","H","M"), Freq=c(4421,755,1018)) though, as you say, table or xtabs can be used instead. Again, you can then ignore all the other options unless you know you need them, so you can end up with something as simple as the example's dclus1p<-postStratify(dclus1, ~stype, pop.types).
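For intuition, the adjustment postStratify performs amounts to rescaling the design weights within each stratum so that the weighted stratum totals match the known population totals. A hand-rolled sketch (in Python, purely illustrative; the real function also updates the design's variance estimation):

```python
def poststratify(strata, weights, pop_totals):
    """Rescale weights so each stratum's weighted total equals pop_totals[stratum]."""
    wsum = {}
    for s, w in zip(strata, weights):
        wsum[s] = wsum.get(s, 0.0) + w
    # multiply each unit's weight by (population total / weighted sample total)
    return [w * pop_totals[s] / wsum[s] for s, w in zip(strata, weights)]
```

After the adjustment, summing the new weights within each school type reproduces the population frequencies given in the example's data.frame.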
32,169
Simple post-stratification weights in R
If the weights already exist it's really simple. You'd have something like: ANESData <- read.spss("C:/Data/ANESSurvey.spss ... bla bla bla) # Fix id and weights for your data. ANESDesign <- svydesign(id = ~SAMPID, data = ANESData , weights = ~expwgt) I'm assuming you have the foreign and survey packages loaded....
32,170
Simple post-stratification weights in R
I'd recommend taking a look at this Github repo: On analyzing ANES in R You'll want to do something like this: anes.design <- svydesign( ~psu_full , strata = ~strata_full , data = y , weights = ~weight_full , nest = TRUE ) Where y is your loaded dta file.
32,171
The code variable in the nlm() function
These situations are understood more clearly by keeping in mind what minimisation or maximisation really is and how optimisation works. Suppose we have a function $f$ which has a local minimum at $x_0$. Optimisation methods try to construct a sequence $x_i$ which converges to $x_0$. It is always shown that, in theory, the sequence constructed converges to the point of local minimum for some class of functions $f$. Obtaining the next candidate in iteration $i$ can be a lengthy process, so it is usual that all algorithms limit the number of iterations. This corresponds to situation 4. Since $x_0$ is a local minimum, for each $x$ close to $x_0$ we have $f(x)>f(x_0)$. So if $f(x_i)>f(x_{i-1})$, this is an indication that we have reached the minimum. This corresponds to situation 3. Now if function $f$ has a derivative at $x_0$, then necessarily $\nabla f(x_0)=0$. The Newton-Raphson method calculates the gradient at each step, so if $\nabla f(x_i)\approx 0$, $x_i$ is probably a solution, which corresponds to situation 1. Every convergent sequence of real vectors is a Cauchy sequence and vice versa, roughly meaning that if $x_i$ is close to $x_0$, then $x_i$ is close to $x_{i+1}$ and vice versa, where $i$ is the iteration number. So if $|x_i-x_{i-1}|<\varepsilon$, and we know that in theory $x_i$ converges to $x_0$, then we should be close to the minimum point. This corresponds to situation 2. Converging sequences have the property that they contract, i.e. if we are close to convergence, all the remaining elements of the sequence are contained in a small area. So if a sequence which in theory should converge starts to take large steps, this is an indication that there is probably no convergence. This corresponds to situation 5. Note: strict mathematical definitions were left out intentionally.
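The interplay of these stopping rules can be made concrete with a toy 1-D Newton minimiser that returns nlm-style termination codes. This is a sketch: nlm itself uses a more elaborate quasi-Newton scheme, and only codes 1, 2 and 4 are illustrated here:

```python
def newton_min(df, ddf, x0, gradtol=1e-8, steptol=1e-8, maxit=100):
    """1-D Newton iteration with nlm-like termination codes:
    1 = gradient approximately zero        (situation 1)
    2 = successive iterates within steptol (situation 2)
    4 = iteration limit reached            (situation 4)."""
    x = x0
    for _ in range(maxit):
        g = df(x)
        if abs(g) < gradtol:
            return x, 1
        x_new = x - g / ddf(x)       # Newton step: x - f'(x)/f''(x)
        if abs(x_new - x) < steptol:
            return x_new, 2
        x = x_new
    return x, 4
```

For $f(x)=(x-3)^2$, Newton jumps to the minimum in one step and the next gradient check fires code 1; setting the iteration limit too low produces code 4 instead.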
32,172
Partialling or regressing out a categorical variable?
I don't think (1) makes any difference. The idea is to partial out from the response and the other predictors the effects of Sex. It doesn't matter if you code 0, 1 (treatment contrasts) or 1, -1 (sum-to-zero contrasts), as the models represent the same "amount" of information, which is then removed. Here is an example in R:

set.seed(1)
dat <- data.frame(Size = c(rnorm(20, 180, sd = 5), rnorm(20, 170, sd = 5)),
                  Sex = gl(2, 20, labels = c("Male", "Female")))
options(contrasts = c("contr.treatment", "contr.poly"))
r1 <- resid(m1 <- lm(Size ~ Sex, data = dat))
options(contrasts = c("contr.sum", "contr.poly"))
r2 <- resid(m2 <- lm(Size ~ Sex, data = dat))
options(contrasts = c("contr.treatment", "contr.poly"))

From these two models, the residuals are the same, and it is this information one would then take into the subsequent model (plus the same thing removing the Sex effect from the other covariates): > all.equal(r1, r2) [1] TRUE I happen to agree with (2), but on (3), if Sex is of no interest to the researchers, they might still want to control for Sex effects, so my null model would be one that includes Sex, and I test alternatives with additional covariates plus Sex. Your point about interactions and testing for effects of the non-interesting variables is an important and valid observation.
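The coding-invariance of the residuals is easy to verify outside R as well, since any affine recoding of a two-level factor spans the same column space. A minimal sketch (Python, simple regression fit by hand, with illustrative numbers):

```python
def ols_residuals(y, x):
    """Residuals from y ~ 1 + x, fit by ordinary least squares."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]
```

Treatment coding (0/1) and sum-to-zero coding (1/-1) of the same factor give identical residuals, element by element.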
32,173
Partialling or regressing out a categorical variable?
It's true that the choice of coding method influences how you interpret the model coefficients. In my experience though (and I realise this can depend on your field), dummy coding is so prevalent that people don't have a huge problem dealing with it. In this example, if male = 0 and female = 1, then the intercept is basically the mean response for males, and the Sex coefficient is the impact on the response due to being female (the "female effect"). Things get more complicated once you are dealing with categorical variables with more than two levels, but the interpretation scheme extends in a natural way. What this ultimately means is that you should be careful that any substantive conclusions you draw from the analysis don't depend on the coding method used.
32,174
Partialling or regressing out a categorical variable?
Remember though that error will be reduced by adding any additional factors. Even if gender is insignificant in your model, it may still be useful in the study. Significance can be found in any factor if the sample size is large enough. Conversely, if the sample size is not large enough, a significant effect may not be detectable. Hence good model building and power analysis.
32,175
Partialling or regressing out a categorical variable?
It looks like I can't add a long comment directly to Dr. Simpson's answer. Sorry I have to put my response here. I really appreciate your response, Dr. Simpson! I should clarify my arguments a little bit. What I'm having trouble with in the partialling business is not a theoretical but a practical issue. Suppose a linear regression model is of the following form

y = a + b * Sex + other fixed effects + residuals

I totally agree that, from the theoretical perspective, regardless of how we quantify the Sex variable, we would have the same residuals. Even if I code the subjects with some crazy numbers such as male = 10.7 and female = 53.65, I would still get the same residuals as r1 and r2 in your example. However, what matters in those papers is not the residuals. Instead, the focus is on the interpretation of the intercept a and the other fixed effects in the model above, and this may invite problems when partialling. With such a focus in mind, how Sex is coded does seem to have a big consequence for the interpretation of all other effects in the above model. With dummy coding (options(contrasts = c("contr.treatment", "contr.poly")) in R), all other effects except b should be interpreted as being associated with the sex group with code "0" (males). With effect coding (options(contrasts = c("contr.sum", "contr.poly")) in R), all other effects except b are the average effects for the whole population regardless of sex. Using your example, the model simplifies to y = a + b * Sex + residuals. The problem can be clearly seen in the following estimates of the intercept a:

> summary(m1)
Call: lm(formula = Size ~ Sex, data = dat)
...
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 180.9526     0.9979 181.332  < 2e-16 ***

> summary(m2)
Call: lm(formula = Size ~ Sex, data = dat)
...
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 175.4601     0.7056 248.659  < 2e-16 ***

Finally, it looks like I have to agree that my original argument (3) might not be valid. Continuing your example,

> options(contrasts = c("contr.sum", "contr.poly"))
> m0 <- lm(Size ~ 1, data = dat)
> summary(m0)
Call: lm(formula = Size ~ 1, data = dat)
...
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  175.460      1.122   156.4   <2e-16 ***

It seems that including Sex in the model does not change the effect estimate, but it does increase the statistical power, since more variability in the data is accounted for through the Sex effect. My previous illusion in argument (3) may have come from a dataset with a huge sample size in which adding Sex in the model didn't really change much for the significance of other effects. However, in the conventional balanced ANOVA-type analysis, a between-subjects factor such as Sex does not have consequences for those effects unrelated to the factor because of the orthogonal partitioning of the variances?
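The intercept shift described above (reference-group mean under dummy coding versus grand mean under effect coding) can be reproduced with a hand-rolled fit. A Python sketch with illustrative numbers, balanced groups of three:

```python
def ols_fit(y, x):
    """Intercept and slope for y ~ 1 + x by least squares."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b
```

With male mean 180 and female mean 170, treatment coding puts the intercept at the reference group's mean (180), while sum-to-zero coding puts it at the grand mean (175), exactly the pattern seen in summary(m1) vs summary(m2).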
32,176
How to calculate sample size for simulation in order to assert some level of goodness in my results?
I think the answer to your question is a couple of other questions: how rare does a given test outcome need to be before you don't care about it? How certain do you want to be that you'll actually find at least one test that comes out that way if it occurs right at the threshold where you've stopped caring about it? Given those values you can do a power analysis. I'm not 100% confident whether you need to do a multinomial (involving more than one outcome) power analysis or not; I'm guessing a binomial one (either the rare test or not) will work just fine, e.g. http://statpages.org/proppowr.html. Alpha = .05, Power = 80%, Group 0 proportion 0, Group 1 proportion .0015. Relative sample size, 1; total - just south of 13,000 tests. At which the expected number of test 4s is ~20. That will help you find the number of tests you need to have to detect one of those rarely occurring results. However, if what you really care about is relative frequency, the problem is harder. I'd conjecture that if you simply multiplied the resulting N from the power analysis by 20 or 30 you'd find a reasonable guess. In practice, if you don't really need to decide the number of tests ahead of time, you might consider running tests until you get 20 or 30 result 4s. By the time you've gotten that many 4s you should start to have a reasonable, though not absolute, estimate of their relative frequency IMO. Ultimately there are trade-offs between number of tests run and accuracy. You need to know how precise you want your estimates to be before you can really determine how many is "enough".
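The linked calculator's "just south of 13,000" can be approximated with the standard two-proportion normal-approximation formula plus Fleiss's continuity correction. A sketch (Python; I am assuming the site uses something close to this formula, and the normal quantiles for two-sided alpha = .05 and 80% power are hard-coded):

```python
import math

def n_two_props(p1, p2, z_alpha=1.959964, z_beta=0.8416212):
    """Per-group n for detecting p1 vs p2 (normal approximation
    with Fleiss continuity correction)."""
    pbar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    n = num / (p1 - p2) ** 2
    # Fleiss continuity correction
    n_c = n / 4 * (1 + math.sqrt(1 + 4 / (n * abs(p1 - p2)))) ** 2
    return math.ceil(n_c)
```

With p1 = 0 and p2 = .0015 this gives roughly 6,500 per group, i.e. close to 13,000 tests in total.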
32,177
How to calculate sample size for simulation in order to assert some level of goodness in my results?
I think that power analysis is too elaborate for what you're trying to do, and might let you down. With a sample size north of 9 million, I think your estimate for p = Pr(X > 3) = 0.000015 is pretty accurate. So you can use that in a simple binomial(n, p) model to estimate a sample size. Let's say your goal is to observe at least one "Large" event with a probability of 99.9%. Then Pr(L > 0) = 1 - Pr(L = 0) = 1 - 0.999985^n = 0.999 and your desired sample size is n = ln(0.001)/ln(0.999985) = 460514. Of course, if you're feeling lucky and are willing to take a 10% chance of missing a Large event, you only need a sample size of n = 153505. Tripling the sample size cuts your chance of missing the Large event by a factor of 100, so I'd go for the 460,000. BUT... if you're looking for FIVEs, their probability is just south of 1/9180902, and to observe at least one of THOSE with 99.9% probability, you'd need a sample size of about 63.4 million! Do heed DrKNexus' advice about updating your estimate of the probabilities for the Large events, since it might not be constant across all your datasets.
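The arithmetic above generalises to a one-liner; a sketch (Python):

```python
import math

def n_to_see_one(p_event, p_detect):
    """Smallest n such that P(at least one event in n independent trials)
    is at least p_detect: solve 1 - (1 - p)^n >= p_detect for n."""
    return math.ceil(math.log(1 - p_detect) / math.log(1 - p_event))
```

Plugging in p = 0.000015 reproduces the figures quoted: 460,514 trials for 99.9% detection and 153,505 for 90%.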
32,178
How to make forecasts for a time series?
Probably the simplest approach is, as Andy W suggested, to use a seasonal univariate time series model. If you use R, try either auto.arima() or ets() from the forecast package. Either should work ok, but a general time series method does not use all the information provided. In particular, it seems that you know the shape of the curve in each year, so it might be better to use that information by modelling each year's data accordingly. What follows is a suggestion that tries to incorporate this information. It sounds like some kind of sigmoidal curve will do the trick. e.g., a shifted logistic: \begin{equation} f_{t,j} = \frac{r_te^{a_t(j-b_t)}}{1+e^{a_t(j-b_t)}} \end{equation} for year $t$ and week $j$ where $a_t$, $b_t$ and $r_t$ are parameters to be estimated. $r_t$ is the asymptotic maximum, $a_t$ controls the rate of increase and $b_t$ is the mid-point when $f_{t,j}=r_t/2$. (Another parameter will be needed to allow the asymmetry you describe whereby the rate of increase up to time $b_t$ is faster than that after $b_t$. The simplest way to do this is to allow $a_t$ to take different values before and after time $b_t$.) The parameters can be estimated using least squares for each year. The parameters each form time series: ${a_1,\dots,a_n}$, ${b_1,\dots,b_n}$ and ${r_1,\dots,r_n}$. These can be forecast using standard time series methods, although with $n=5$ you probably can't do much apart from using the mean of each series for producing forecasts. Then, for year 6, an estimate of the value at week $j$ is simply $\hat{f}(6,j)$ where the forecasts of $a_6$, $b_6$ and $r_6$ are used. Once data start to be observed for year 6 you will want to update this estimate. As each new observation is obtained, estimate the sigmoidal curve to the data from year 6 (you will need at least three observations to start with as there are three parameters). 
Then take a weighted average of the forecasts obtained using the data up to year 5 and the forecast obtained using only the data from year 6, where the weights are equal to $(40-t)/36$ and $(t-4)/36$ respectively. That is very ad hoc, and I'm sure it can be made more objective by placing it in the context of a larger stochastic model. Nevertheless, it will probably work ok for your purposes.
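The curve and the blending rule are simple to write down; a sketch (Python, with parameter names mirroring the text):

```python
import math

def shifted_logistic(j, r, a, b):
    """f(t,j) = r * exp(a*(j - b)) / (1 + exp(a*(j - b)))."""
    z = math.exp(a * (j - b))
    return r * z / (1 + z)

def blended_forecast(f_hist, f_current, t):
    """Ad hoc weighted average of the years-1..5 forecast and the
    year-6-only forecast, with weights (40 - t)/36 and (t - 4)/36."""
    return ((40 - t) * f_hist + (t - 4) * f_current) / 36
```

At week j = b the curve sits at its mid-point r/2, and the blend moves linearly from the purely historical forecast at t = 4 to the purely current-year fit at t = 40.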
How to make forecasts for a time series?
What you're asking for is essentially what Box-Jenkins ARIMA modeling does (your yearly cycles would be referred to as seasonal components). Besides looking up materials on your own, I would suggest Applied Time Series Analysis for the Social Sciences (1980) by R. McCleary, R. A. Hay, E. E. Meidinger, and D. McDowall. Although I can think of reasonable reasons for wanting to forecast further into the future (and hence assess the error when doing so), it is often very difficult in practice. If you have very strong seasonal components it will be more feasible; otherwise your estimates will likely reach an equilibrium in relatively few future time periods. If you plan on using R to fit your models you should probably check out Rob Hyndman's website (hopefully he will give you better advice than me!)
How to make forecasts for a time series?
You have 5 years of data and 40 observations per year. Why don't you post them on the web and allow us to actually answer this at ground zero rather than philosophizing at 500 miles high? I look forward to the numbers. We have seen data like this before, for example the number of customers who trade in their time-sharing week, on a weekly basis. The series each year starts at zero and accumulates to a limiting value.
What's the accuracy of data obtained through a random sample?
You can think of this as a binomial trial -- your trials are sampling "redhead" or "not redhead" -- in which case you can build a confidence interval for your sample proportion ($j/n$) as documented on Wikipedia: Binomial proportion confidence interval. A 95% confidence interval basically says that, using the same sampling algorithm, if you repeated this 100 times, the true proportion would lie in the stated interval 95 times. Update By the way, I think the term you're looking for might be standard error, which is the standard deviation of the sampled proportions. In this case, it's $\sqrt{{p (1-p)} \over {n}}$ where $p$ is your estimated proportion. Note that as $n$ increases, the standard error decreases.
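As a quick numerical sketch of the two formulas (Python, with made-up numbers: $j$ redheads in a sample of $n$):

```python
import math

def proportion_se(p, n):
    """Standard error of a sample proportion: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)

def wald_ci_95(j, n):
    """Normal-approximation (Wald) 95% interval: p_hat +/- 1.96 * SE."""
    p_hat = j / n
    se = proportion_se(p_hat, n)
    return p_hat - 1.96 * se, p_hat + 1.96 * se

lo, hi = wald_ci_95(13, 100)  # e.g. 13 redheads out of 100 sampled
```

Note the $1/\sqrt{n}$ behaviour: quadrupling $n$ halves the standard error.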
What's the accuracy of data obtained through a random sample?
If your sample size $n$ is not such a tiny fraction of the population size $N$ as in your example, and if you sample without replacement [Sw/oR], a better expression for the [estimated] SE is $$\hat{SE} = \sqrt{\frac{N - n}{N}\frac{\hat p \hat q}{n}},$$ where $\hat p$ is the estimated proportion $j/n$ and $\hat q = 1- \hat p$. [The term $\frac{N-n}{N}$ is called the FPC, the finite population correction.] Although whuber's remark is technically correct, it seems to suggest that nothing can be done to get, say, a confidence interval for the true proportion $p$. If $n$ is large enough to make a normal approximation reasonable [$np > 10$, say], it is unlikely one would get $j=0$. Also, if the sample size is large enough for a normal approximation using the true $SE$ to be reasonable, using $\hat{SE}$ instead also gives a reasonable approximation. [If your $n$ is really small and you use Sw/oR, you may have to use the exact hypergeometric distribution for $j$ instead of a normal approximation. If you do SwR, the size of $N$ is irrelevant and you can use exact binomial methods to get a CI for $p$.] In any case, since $p(1-p) \le 1/4$, one could always be conservative and use $\frac{1}{2\sqrt{n}}$ in place of $\sqrt{\frac{\hat p \hat q}{n}}$ in the above. If you do that, it takes a sample of $n = 1{,}111$ to get an estimated ME [margin of error $= 2\hat{SE}$] of $\pm 0.03$ [regardless of how big $N$ is!].
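A short Python sketch of the FPC-corrected standard error and the conservative margin of error (the numbers are only illustrative):

```python
import math

def se_fpc(p_hat, n, N):
    """Estimated SE under sampling without replacement, with the
    finite population correction (N - n) / N."""
    return math.sqrt((N - n) / N * p_hat * (1 - p_hat) / n)

def conservative_me(n):
    """Conservative margin of error 2 * (1 / (2 * sqrt(n))) = 1 / sqrt(n),
    using p * (1 - p) <= 1/4."""
    return 1 / math.sqrt(n)

me_1111 = conservative_me(1111)  # about 0.03, regardless of N
```

When $n = N$ (a census) the FPC drives the standard error to zero, and for $n \ll N$ it is close to the usual $\sqrt{\hat p \hat q / n}$.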
Error in estimating the size of a set?
You are estimating proportions. For concreteness, imagine that A is the population of voters and B is the set of voters who vote for a particular candidate. Thus, p would be the percentage of voters who would vote for that candidate. Let $\pi$ be the true percentage of people who would vote for the candidate; in other words: $\pi = \frac{|B|}{|A|}$. Then each one of your samples is a Bernoulli trial with probability $\pi$, or equivalently you can imagine that each one of your samples is a poll of potential voters asking them whether they would vote for the candidate. Thus, the MLE of $\pi$ is given by: $p = \frac{n_B}{n}$ where $n_B$ is the number of people who said they would vote for the candidate, i.e., the number of elements which belong to the set B in your sample of size $n$. The standard error for your estimate is: $\sqrt{\frac{\pi (1-\pi)}{n}}$. The above can be approximated by using the MLE for $\pi$, i.e., by: $\sqrt{\frac{p (1-p)}{n}}$.
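If the goal is the size of $B$ rather than the proportion, the estimate simply scales by $|A|$; here is a small Python sketch with invented numbers:

```python
import math

def estimate_set_size(n_B, n, size_A):
    """MLE of |B| and its standard error: p_hat * |A| and SE(p_hat) * |A|."""
    p_hat = n_B / n
    se_p = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat * size_A, se_p * size_A

# e.g. 120 of 1000 sampled elements fall in B, with |A| = 50000
size_est, size_se = estimate_set_size(120, 1000, 50000)
```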
Desirable and undesirable properties of Latin squares in experiments?
Imagine you were interested in the effect of word type (nouns, adjectives, adverbs, and verbs) on recall, and you wanted to include word type as a within-subjects factor (i.e., all participants were exposed to all conditions). Such a design would raise the issue of carry-over effects, i.e., the order of the conditions may affect the dependent variable, recall. For example, participants might get better at recalling words with practice. Thus, if the conditions were always presented in the same order, then the effect of order would be confounded with the effect of condition (i.e., word type). Latin Squares is one of several strategies for dealing with order effects. A Latin Squares design could involve assigning participants to one of four separate orderings (i.e., a between-subjects condition called order):

nouns adjectives adverbs verbs
adjectives adverbs verbs nouns
adverbs verbs nouns adjectives
verbs nouns adjectives adverbs

Thus, the Latin Squares design only entails a subset of possible orderings, and to some extent the effect of order can be estimated. In a blog post I suggest the following simple rules of thumb: "If order is the focus of the analysis (e.g., skill acquisition looking at effects of practice), then don't worry about order effects. If order effects are very strong, it may be better to stick to between-subjects designs. If order effects are small or moderate or unknown, typical design strategy depends on the number of levels of the within-subjects factor of interest. If there are few levels (e.g., 2, 3, 4 perhaps), present all orders (counterbalance). If there are more levels (e.g., 4+ perhaps), adopt a Latin Squares approach or randomise ordering." To specifically answer your question, Latin Squares designs allow you to get the statistical power benefits of a within-subjects design while, potentially at least, minimising the main problem of within-subjects designs: i.e., order effects.
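The four orderings listed above form a cyclic Latin square; a small Python sketch that generates one and checks the defining property:

```python
def cyclic_latin_square(conditions):
    """Rotate the condition list one step per row, so every condition appears
    exactly once in each row and once in each ordinal position."""
    n = len(conditions)
    return [conditions[i:] + conditions[:i] for i in range(n)]

orders = cyclic_latin_square(["nouns", "adjectives", "adverbs", "verbs"])
# orders[1] is ['adjectives', 'adverbs', 'verbs', 'nouns'], matching row 2 above.
```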
How to identify transfer functions in a time series regression forecasting model?
The classic approach, described in Box, Jenkins & Reinsel (4th ed, 2008), involves looking at the cross-correlation function and the various auto-correlation functions, and making a lot of subjective decisions about the orders and lags for the various terms. The approach works ok for a single predictor, but is not really suitable for multiple predictors. An alternative approach, described in Pankratz (1991), involves fitting lagged regressions with AR errors and determining the appropriate rational lag structure from the fitted coefficients (also a relatively subjective process), then refitting the entire model with the supposed lag structures and extracting the residuals. The order of the ARMA error process is determined from these residuals (using AIC for example). Then the final model is re-estimated. This approach works well for multiple predictors, and is considerably simpler to apply than the classic approach. I wish I could say there was this neat automated procedure that did it all for you, but I can't. At least not yet.
How to identify transfer functions in a time series regression forecasting model?
Originally the idea of examining pre-whitened cross-correlations was suggested by Box and Jenkins. In 1982, Liu and Hanssens published a paper (L.-M. Liu and D. M. Hanssens (1982). "Identification of Multiple-Input Transfer Function Models." Communications in Statistics A 11: 297-314) that suggested a common filter approach that would effectively deal with multiple inputs whose pre-whitened series exhibit cross-correlative structure. They even created a 2-input model data set to demonstrate their solution. After we programmed that approach and then compared it to the Box-Jenkins pre-whitening approach iteratively implemented by us, we decided not to use either the Pankratz approach or the Liu-Hanssens approach. We would be glad to share the Liu-Hanssens test data with you if you wish me to post it to the list.
Does the cross validation implementation influence its results?
You can certainly get different results simply because you train on different examples. I very much doubt that there's an algorithm or problem domain where the results of the two would differ in some predictable way.
Does the cross validation implementation influence its results?
"Usually of course the difference is unnoticeable, and so goes my question -- can you think of an example when the result of one type is significantly different from another?" I am not at all sure the difference is unnoticeable, or that it will be noticeable only in ad hoc examples. Both cross-validation and bootstrapping (sub-sampling) methods depend critically on their design parameters, and this understanding is not complete yet. In general, results within k-fold cross-validation depend critically on the number of folds, so you can always expect different results from what you would observe in sub-sampling. Case in point: say that you have a true linear model with a fixed number of parameters. If you use k-fold cross-validation (with a given, fixed k), and let the number of observations go to infinity, k-fold cross-validation will be asymptotically inconsistent for model selection, i.e., it will identify an incorrect model with probability greater than 0. This surprising result is due to Jun Shao, "Linear Model Selection by Cross-Validation", Journal of the American Statistical Association, 88, 486-494 (1993), but more papers can be found in this vein. In general, respectable statistical papers specify the cross-validation protocol, exactly because results are not invariant. In the case where they choose a large number of folds for large datasets, they remark on and try to correct for biases in model selection.
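To make the design difference concrete, here is a minimal Python sketch contrasting k-fold partitions (disjoint test sets that jointly cover the data) with repeated random sub-sampling (test sets that can overlap across repeats); the sizes are arbitrary:

```python
import random

def kfold_test_sets(n, k):
    """Split indices 0..n-1 into k disjoint folds (round-robin assignment)."""
    idx = list(range(n))
    return [idx[i::k] for i in range(k)]

def subsampling_test_sets(n, test_size, repeats, seed=0):
    """Draw `repeats` independent random test sets of the same size."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n), test_size)) for _ in range(repeats)]

folds = kfold_test_sets(20, 5)          # 5 disjoint folds of 4 indices each
subs = subsampling_test_sets(20, 4, 5)  # 5 test sets of 4, possibly overlapping
```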
How would one find the uncertainty in a mean if the data points themselves have zero-order uncertainty?
Assuming the observations are collected independently of each other, the easiest way I can think of is to propagate uncertainty by using simulation. The idea is to generate random vectors from the (hyper-)cube and take the average of their coordinates; do this a large number of times and collect all the values obtained (the histogram below). Here is some R code to illustrate it.

# set up a function to handle the simulation process
gen_x <- function() {
  x <- c(runif(1, 16 - 0.5, 16 + 0.5),
         runif(1, 21 - 0.5, 21 + 0.5),
         runif(1, 22 - 0.5, 22 + 0.5))
  return(mean(x))
}

N <- 1e+4
# evaluate the function N times
hh <- sapply(1:N, function(x) gen_x())
hist(hh)
abline(v = mean(c(16, 21, 22)))

The histogram shows the distribution of the average of 16, 21, 22 taking into account the uncertainty of $\pm 0.5$ and assuming independent uniform distributions centred at the given values. The vertical line shows the sample average $(16+21+22)/3$. The above solution gives the entire distribution of the sample average. If you are only interested in the variance of the sample average, then that's just equal to $1/(12n)$. Indeed, if you denote the sample $X_1,\ldots,X_n$, setting $\bar X = n^{-1}\sum_i X_i$ the sample average and assuming $X_i$ are independent, we have $$ \text{var}(\bar X) = n^{-2}(\text{var}(X_1)+\cdots +\text{var}(X_n)) = n^{-2}\frac{n}{12} = (12n)^{-1}. $$ Here I have used the fact that if $X \sim \text{Unif}(a,b)$, then $\text{var}(X) = (b-a)^2/12$. So to sum up, if your $n$ samples are independent and the measures can be thought of as $\pm 0.5$ around the value measured, then, under the uniform assumption of the measure within each interval, the variance of the average is $(12n)^{-1}$.
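The same check can be done without R; here is a pure standard-library Python version that compares the simulated variance of the average against the analytic value $1/(12n)$:

```python
import random
from statistics import pvariance

def jittered_mean(values, half_width, rng):
    """One draw of the sample mean, with each recorded value perturbed
    uniformly within +/- half_width."""
    return sum(v + rng.uniform(-half_width, half_width) for v in values) / len(values)

rng = random.Random(42)
values = [16, 21, 22]
draws = [jittered_mean(values, 0.5, rng) for _ in range(100_000)]

analytic = 1 / (12 * len(values))  # Var(mean) = (b - a)^2 / (12 n), with b - a = 1
simulated = pvariance(draws)       # should be close to `analytic`
```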
How would one find the uncertainty in a mean if the data points themselves have zero-order uncertain
Assuming the observations are collected independently of each other, the easiest way I can think of is to propagate uncertainty by using simulation. The idea is to generate random vectors from the (hy
How would one find the uncertainty in a mean if the data points themselves have zero-order uncertainty? Assuming the observations are collected independently of each other, the easiest way I can think of is to propagate uncertainty by using simulation. The idea is to generate random vectors from the (hyper-)cube and take the average of their coordinates; do this a large number of times and collect all the values obtained (the histogram below). Here is an R code to illustrate it. # set up a function to handle the simulation process gen_x <- function() { x = c(runif(1, 16-0.5,16+0.5), runif(1, 21-0.5,21+0.5), runif(1, 22-0.5, 22+0.5)) return(mean(x)) } N=1e+4 # evaluate the function N times hh <- sapply(1:N, function(x) gen_x()) hist(hh) abline(v = mean(c(16,21,22))) The histogram shows the distribution of the average of 16,21,22 taking into account the uncertainty of $\pm 0.5$ and assuming independent uniform distributions centred at the given values. The vertical line shows the sample average $(16+21+22)/3$. The above solution gives the entire distribution of the sample average. If you are only interested in the variance of the sample average, then that's just equal to $1/(12n)$. Indeed, If you denote the sample $X_1,\ldots,X_n$, setting $\bar X = n^{-1}\sum_i X_i$ the sample average and assuming $X_i$ are independent, we have $$ \text{var}(\bar X) = n^{-2}(\text{var}(X_1)+\cdots +\text{var}(X_n)) = n^{-2}\frac{n}{12} = (12n)^{-1}. $$ Here I have used the fact that if $X \sim \text{Unif}(a,b)$, then $\text{var}(X) = (b-a)^2/12$. So to sum up, if your $n$ samples are independent and the measures can be thought of as $\pm 0.5$ around the value measured, then, under the uniform assumption of the measure within each interval, the variance of the average is $(12n)^{-1}$.
32,190
How does class balancing via reweighting affect logistic regression?
Towards answering my three questions I offer a perspective based on analysis of a toy problem. While it is meant to illuminate a topic I find to be widely discussed but poorly understood, I do not claim my perspective provides uniformly comprehensive, authoritative conclusions. And the answer is geared towards traditional, simple logistic regression models. It should be less relevant to highly flexible ML models where misspecification is less of an issue. Class reweighting can affect the decision boundary of logistic regression, especially when the model is misspecified (the data doesn't follow the pattern specified by the model). Upweighting one class pushes the model to align its decision boundary with the pattern of that class rather than the other class. As a result, class reweighting can be a good idea if you can't see how to fix the model misspecification and you need high recall on one class. However, the best weighting is use case-dependent, and is sometimes demonstrably not 50-50. In applications, it's very common that a classifier's purpose is to predict rare but important events. Upweighting the minority class can make the model more fit-to-purpose if accuracy in the high-recall regime is the goal. Upweighting the minority class has the side effect of pushing the weighted class balance towards 50-50. It often increases AUC as well, since AUC is optimized at a 50-50 training balance in some models. (This has been proven for linear discriminant analysis in the infinite sample limit, and LDA is very similar to logistic regression.) However, neither class balance nor maximal AUC necessarily provide the best model for a given use case. While the weighting that yields best performance is use case-dependent, and the optimal class balance is not necessarily 50-50, balancing might be a decent rule of thumb, especially when the model's desired operating characteristics are not known in advance. 
In this case, good average performance over a range of thresholds could be desirable, AUC is a metric for exactly this, and 50-50 balance often seems to provide better AUC, both theoretically and empirically. That said, there is no general guarantee that class balancing maximizes AUC, and definitely no guarantee that it produces the best model for a given use case. Therefore, framing class imbalance as a "problem" that should be "fixed" is misleading and can lead to suboptimal modeling choices. Class reweighting affects the decision boundary Let's fire up R and fit a very simple misspecified model. Our data will have two classes, A and B. Class A will be on the x-axis and B on the y-axis. It's possible to separate these data perfectly (except at the origin), but let's say we don't know that, and we only fit the most obvious logistic regression model with untransformed x and y as predictors. The decision boundary of this model (plot not shown here) runs roughly along the line y = x. Now, let's upweight one class, B, by 10x. This time, the decision boundary aligns more closely with the data on the y-axis. What's going on here? From the perspective of the model, the data contains two conflicting patterns. With class A on the x-axis and class B on the y-axis, the classes are not cleanly separated by a single line, as the logistic regression model assumes. In other words, the model is misspecified. The fit must decide how to split the difference. When classes A and B have equal weights, the decision boundary is set halfway between the A pattern and the B pattern. When B is upweighted, it's pushed towards the B pattern. Weighting does not just change the intercept, but also aligns the decision boundary more with the distribution of class B. In retrospect, this makes sense. When errors on one class become much more costly, the model fitting process will prefer a model which is highly accurate on that class by keeping it to one side of the decision boundary as much as possible. 
Changing the slope of the boundary accomplishes this more effectively than just changing the intercept. Similar phenomena should also occur in more complex problems with more predictors and nonlinear decision boundaries. What does this suggest about class balancing? Models are often misspecified in real-world applications, especially when linear-decision-boundary models are used, because the data rarely conform to simple linear formulas. However, misspecified models can still be useful, and we can use weighting to make the model more fit for purpose. In this case, if the purpose of the model is to find all possible elements of class B, and some false positives from class A are tolerable, the model weighted 10x towards class B is probably preferable. It has the same recall as, and better precision than, simply shifting the threshold of the unweighted model. (This can be seen by thinking geometrically: if you move the y=x line down until all of B is on one side, more A points are on the wrong side than in the weighted model.) This is essentially importance weighting, as explained here. This means 50-50 weighting is not ideal in this use case. And it doesn't matter if it maximizes AUC. It may, but that would just illustrate how maximizing AUC doesn't always get us what we really want. Reweighting for class balance: a satisficing choice? While reweighting for class balance doesn't always get the best result, it could perhaps be "good enough" in many applications. Very commonly, classifiers are designed to pick out a rare but important event. Upweighting the rare event may improve the fitness of the model for this purpose, and balancing classes can improve AUC, which can be a decent rough proxy for what is really desired. 
In particular, when the desired operating characteristics of the model are not known in advance, maximizing AUC is arguably a good satisficing choice, because AUC represents average performance over a range of thresholds, and good average performance makes better performance around the operating threshold somewhat more likely. On the other hand, a bit of thought about the use case often reveals a lot about the desired performance characteristics, and could yield better final results than spending 100% of effort towards maximizing AUC. R Code library(tidyverse) ggplot2::theme_set(theme_minimal()) vals <- seq(-1, 3, by = 1/10) n <- length(vals) ones <- 1 + numeric(n) df <- tibble( x = c(1 * vals, 0 * vals), y = c(0 * vals, 1 * vals), class = c(0 * ones, 1 * ones)) %>% mutate(class = factor(class, c(0, 1), labels = c('A', 'B'))) # plot data df %>% ggplot(aes(x = x, y = y, group = class, color = class)) + geom_point(size = 3) + ggtitle('Training data') # Model 1: equal weights on classes --------------- model <- glm(class ~ x + y, family = 'binomial', data = df) beta_0 <- coef(model)['(Intercept)'] beta_x <- coef(model)['x'] beta_y <- coef(model)['y'] # plot data + decision boundary # 0 = beta_0 + beta_x * x + beta_y * y # y = (-beta_x / beta_y) * x + (-beta_0 / beta_y) df %>% ggplot(aes(x = x, y = y, group = class, color = class)) + geom_point(size = 3) + geom_abline(slope = -beta_x/beta_y, intercept = -beta_0/beta_y, color = 'black', linetype = 'dashed') + ggtitle('Unweighted model') # Model 2: upweight class B by 10x ----------------- df <- df %>% mutate(w = if_else(class == 'B', 10, 1)) model <- glm(class ~ x + y, family = 'binomial', data = df, weights = w) beta_0 <- coef(model)['(Intercept)'] beta_x <- coef(model)['x'] beta_y <- coef(model)['y'] # plot data + decision boundary df %>% ggplot(aes(x = x, y = y, group = class, color = class)) + geom_point(size = 3) + geom_abline(slope = -beta_x/beta_y, intercept = -beta_0/beta_y, color = 'black', linetype = 'dashed') 
+ ggtitle('Fit with 10x weight on class B')
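The rotation of the boundary described above can also be reproduced outside R. Here is a rough Python sketch (not part of the answer) using scikit-learn; note that `LogisticRegression` is L2-regularised by default, unlike `glm`, so the exact coefficients differ, but the qualitative effect of `sample_weight` is the same:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Recreate the toy data: class A (label 0) along the x-axis,
# class B (label 1) along the y-axis
vals = np.arange(-1, 3.0001, 0.1)
X = np.vstack([np.column_stack([vals, 0 * vals]),   # class A
               np.column_stack([0 * vals, vals])])  # class B
y = np.array([0] * len(vals) + [1] * len(vals))

def boundary_slope(weights=None):
    # sklearn's LogisticRegression is L2-regularised by default
    # (unlike R's glm), but the qualitative behaviour is the same
    model = LogisticRegression().fit(X, y, sample_weight=weights)
    b_x, b_y = model.coef_[0]
    return -b_x / b_y  # slope of the line where predicted probability = 0.5

slope_unweighted = boundary_slope()
slope_weighted = boundary_slope(np.where(y == 1, 10.0, 1.0))

print(slope_unweighted)  # close to 1: the boundary runs along y = x
print(slope_weighted)    # steeper: the boundary rotates toward class B's axis
```

The unweighted fit gives a slope near 1 by symmetry, while the 10x weight on class B visibly steepens the boundary, confirming that weighting changes the slope and not just the intercept.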
32,191
Why does a zero entry in the inverse covariance matrix of a joint Gaussian distribution imply conditional independence?
The aim of this answer is to demonstrate this result with minimal algebraic effort. The secret is to maintain a laser-like focus on what matters, ignoring all the rest. Let me illustrate. By definition (many definitions anyway), the value of the density of a multivariate Gaussian with vector mean $\mu$ and invertible covariance matrix $\Sigma$ at the point $\mathbf x = (x_1, x_2, \ldots, x_n)^\prime$ is proportional to $$\exp\left(-(\mathbf x - \mu)^\prime \Sigma^{-1} (\mathbf x - \mu)/2\right).$$ Conditioning on all the variables except $(x_i,x_j)$ is tantamount to viewing all those other variables as constants. Focus, then, on how this density depends on $(x_i,x_j).$ To do so, we must examine the argument of $\exp.$ Since $\mu$ is constant, too, let's consider how it depends on $(x_i-\mu_i,x_j-\mu_j) = (y_i,y_j).$ Letting $(a_{rs}),$ $1\le r, s \le n,$ be the coefficients of $\Sigma^{-1},$ the rules of matrix multiplication imply $$\begin{aligned} -(\mathbf x - \mu)^\prime \Sigma^{-1} (\mathbf x - \mu)/2 &= -a_{ii}y_i^2/2 + \text{constants}\times y_i \\&- a_{jj}y_j^2/2 + \text{other constants}\times y_j \\ &- a_{ij}y_iy_j \\&+ \text{yet other constants}. \end{aligned}$$ That's the sum of four expressions, one per line. The rules of exponentiation then tell us the conditional density is the product of five terms: A normalizing constant from the original (joint) density. A term that is a function only of $y_i$ -- which implies it's a function only of $x_i.$ Another term that is a function only of $y_j$ -- which implies it's a function only of $x_j.$ The $-a_{ij}y_iy_j$ term. The exponential of yet other constants. When, as assumed in the question, $a_{ij} = 0,$ term (4) drops out. 
This leaves the product of constants (from $(1)$ and $(5)$), a function of $x_i$ alone (from $(2)$), and a function of $x_j$ alone (from $(3)$), showing explicitly how the conditional density factors as separate functions of $x_i$ and $x_j.$ This factorization implies the corresponding random variables are independent, QED.
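The algebra can be checked numerically. Here is a Python/NumPy sketch (with an arbitrary made-up precision matrix, chosen diagonally dominant so it is positive definite) showing that a zero entry in $\Sigma^{-1}$ makes the conditional covariance vanish even though the marginal covariance is nonzero:

```python
import numpy as np

# A 4x4 precision matrix (inverse covariance) with a zero in entry (0, 1);
# the values are arbitrary, chosen diagonally dominant so K is positive definite
K = np.array([[2.0, 0.0, 0.5, 0.3],
              [0.0, 2.0, 0.4, 0.6],
              [0.5, 0.4, 2.0, 0.2],
              [0.3, 0.6, 0.2, 2.0]])
Sigma = np.linalg.inv(K)

# Covariance of (x_0, x_1) given (x_2, x_3): the Schur complement of Sigma
C = Sigma[:2, :2] - Sigma[:2, 2:] @ np.linalg.inv(Sigma[2:, 2:]) @ Sigma[2:, :2]

print(Sigma[0, 1])  # marginal covariance: generally nonzero
print(C[0, 1])      # conditional covariance: zero, matching the factorization
# The Schur complement equals the inverse of the corresponding precision block:
print(np.allclose(C, np.linalg.inv(K[:2, :2])))  # True
```

The last line reflects the standard block-inversion identity: the conditional covariance of a subset of Gaussian variables is the inverse of the corresponding block of the precision matrix, so its off-diagonal entry is zero exactly when $a_{ij}=0$.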
32,192
Maximum likelihood estimation, Restricted maximum likelihood estimation and Profile likelihood estimation
Maximum-likelihood is applied mathematical optimisation --- learn the latter well This is too large a field for us to give you a comprehensive answer, but perhaps we can point you in the right direction to find the resources you need. The first thing to stress here is that, mathematically speaking, all forms of maximum likelihood involve maximising a function over an input set. For this reason, the subject falls within the general field of mathematical optimisation. For continuous distributions this optimisation is done using standard calculus methods and for discrete distributions it is done using discrete calculus (or sometimes direct optimisation methods). Now, there are some particular "tricks" that are commonly used in the context of maximum likelihood analysis. For example, most (but not all) maximum likelihood problems involve maximising a function that is a product of a large number of non-negative parts, and so we frequently start by taking logarithms and working in log-space. Nevertheless, at the end of the day this is still just mathematical optimisation applied in a particular context. If you want to get good at it in general, and be able to solve problems that do not conform neatly to standard cases, it is a good idea to give yourself a broad education in the field of mathematical optimisation. The field of mathematical optimisation is absolutely huge; you could probably fill a small library with books and papers on the topic. Nevertheless, there are some obvious places to start and some further places to progress once you have mastered the basics. Over the long-term, I recommend something like the following curriculum: Unconstrained univariate optimisation: Start by learning how to maximise a differentiable univariate function over an "unconstrained" input set (i.e., an input that can be any real number). This is done by looking at the first and second derivatives of the function. 
There are many introductory calculus textbooks that cover this material in detail. Unconstrained multivariate optimisation: Once you are comfortable with optimisation of univariate functions, learn how to maximise a differentiable multivariate function over an "unconstrained" input set (i.e., an input that can be any vector of real numbers). This is done by looking at the multivariate versions of the first and second derivatives of the function (called the gradient vector and Hessian matrix). Most introductory calculus textbooks will cover multivariate (or at least bivariate) calculus after they cover univariate calculus. Optimisation of composite functions: Once you are comfortable with the basics of unconstrained optimisation, learn to use the chain rules to solve optimisation problems involving composite functions (i.e., functions of functions). In particular, learn the univariate and multivariate chain rules for the first two derivatives and get used to using these to derive the first two derivatives of composite functions so that you can optimise them using standard methods. The goal here is to be able to make transformations of your input parameters and still be able to comfortably optimise the function you are working with. Constrained univariate and multivariate optimisation: Once you are comfortable with unconstrained optimisation, and optimisation of composite functions, learn how to maximise a differentiable function over an input set that is constrained by one or more non-linear equations or inequalities. This is the field of non-linear programming. There are several common techniques in this field, including transformation of input variables (creating composite functions), direct analysis using Lagrangian methods or the Karush-Kuhn-Tucker method, and methods using "penalty functions". Practice using each of these methods on some tricky constrained optimisation problems. 
Over time you will learn which of these methods is the simplest to apply to different types of constrained optimisation problems, and you will be able to derive solutions using alternative methods. Discrete optimisation: Discrete optimisation is generally treated as a separate subject from optimisation of continuous functions, but there are some clear parallels. Discrete optimisation is usually covered in books on discrete mathematics. It requires you to learn about the difference operators (analogous to differentiation of continuous functions) and discrete calculus. This is something that is relatively easy to understand if you already have a good grounding in standard calculus methods for continuous functions. However, it is something that should be studied in its own right. Again, you should start by looking at unconstrained univariate optimisation, then unconstrained multivariate optimisation, then constrained optimisation, etc. Once you have mastered the basics you can look at some standard discrete optimisation problems in computational theory (e.g., the knapsack problem, bin-packing problem, change-making problem, etc.). If you are really keen you can also start to look at some of the theory relating to computational complexity of these problems. Numerical/simulation methods: Once you have a good grounding in the underlying theory of mathematical optimisation, and you are comfortable with both constrained and unconstrained problems involving univariate or multivariate functions, you can then examine some numerical optimisation methods. This includes dynamic programming, MCMC methods, simulated annealing, evolutionary/genetic algorithms, etc. Once you get to this point you are getting into specialist territory, but it is nice to have a rough idea of how these methods work (and ideally have the ability to program a few of them if needed). 
As regards the specifics of maximum-likelihood (ML) and restricted maximum-likelihood (REML), these become extremely simple to understand once you have obtained a strong underlying background in mathematical optimisation. Most general textbooks on probability and statistics will have a section on estimation that will include ML as one of the main estimation methods. Resources discussing REML are less common, but I will give you some papers here that may help. Maximum-likelihood (ML) estimation: Maximum-likelihood estimation is an applied example of mathematical optimisation, where you are maximising a joint density of some data with respect to one or more parameters. If there is one parameter then this is a univariate optimisation problem and if there is more than one parameter it is a multivariate optimisation problem. It is usually (but not always) the case that the objective function of interest (the joint density) can be written as a product of non-negative parts. In particular, in the case of conditionally independent data, the objective function will be a product of density functions for each individual data point. Consequently, we usually take logarithms and maximise the "log-likelihood function". The maximisation itself proceeds using standard methods, but there are particular names for things in this context --- e.g., we call the first derivative of the log-likelihood the "score function" and the negative of the second derivative the "information function". Maximum likelihood estimation is covered in virtually all textbooks that cover statistical estimation. Profile-likelihood (PL) estimation: The profile likelihood is used in some multivariate ML problems when we maximise the multivariate function one parameter at a time. 
One general optimisation technique that can be used in these cases is to derive the form of the MLE for a single parameter (written as a function of the data and other parameters) and then substitute this maximised parameter value back into the original likelihood function to obtain a partially maximised version of the function that no longer has that parameter. We call this partially maximised version of the likelihood function the "profile-likelihood" function. Understanding the use of this function really just requires you to be familiar with optimisation of multivariate functions using this one-at-a-time method. Profile likelihood is mentioned in some statistical textbooks in the context of finding the MLE in multivariate problems. In any case, it is really just something that arises when using a particular technique for multivariate optimisation. Restricted maximum-likelihood (REML) estimation: This is a variant of maximum-likelihood estimation that involves an attempt to estimate a parameter in a distribution while treating one or more other parameters as "nuisance parameters". REML is frequently used (and illustrated) when estimating variance components in the presence of an unknown mean. The original paper introducing the method is Bartlett (1937) and it was applied to a range of problems in Harville (1977). In Corbeil and Searle (2012) REML was used to estimate variance components in mixed models. You can find a simple introduction to this topic in Zhang (2015).
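The substitute-and-maximise recipe for the profile likelihood can be made concrete with a small example. The following is a hypothetical Python sketch (not from the answer) for a normal sample with unknown mean $\mu$ and variance $\sigma^2$: for fixed $\mu$, the MLE of $\sigma^2$ is $\hat\sigma^2(\mu) = n^{-1}\sum_i (x_i-\mu)^2$, and substituting it back gives the profile log-likelihood $\ell_p(\mu) = -\tfrac{n}{2}\left(\log(2\pi\,\hat\sigma^2(\mu)) + 1\right)$, which is maximised at the sample mean:

```python
import math
import random

random.seed(1)
# hypothetical data: a normal sample with unknown mean and variance
x = [random.gauss(5.0, 2.0) for _ in range(200)]
n = len(x)

def profile_loglik(mu):
    # step 1: for fixed mu, the MLE of sigma^2 has a closed form
    s2_hat = sum((xi - mu) ** 2 for xi in x) / n
    # step 2: substitute it back into the normal log-likelihood
    return -0.5 * n * (math.log(2 * math.pi * s2_hat) + 1)

# maximise the profile log-likelihood over a fine grid of mu values
grid = [i / 1000 for i in range(4000, 6001)]
mu_hat = max(grid, key=profile_loglik)

print(mu_hat)      # agrees (to grid resolution) with the sample mean,
print(sum(x) / n)  # which is the full MLE of mu
```

The grid maximiser coincides with the sample mean up to the grid spacing, illustrating that profiling out the nuisance parameter leaves a one-dimensional function whose maximiser is the same as that of the full likelihood.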
Maximum likelihood estimation, Restricted maximum likelihood estimation and Profile likelihood estim
Maximum-likelihood is applied mathematical optimisation --- learn the latter well This is too large a field for us to give you a comprehensive answer, but perhaps we can point you in the right directi
Maximum likelihood estimation, Restricted maximum likelihood estimation and Profile likelihood estimation

Maximum-likelihood is applied mathematical optimisation --- learn the latter well. This is too large a field for us to give you a comprehensive answer, but perhaps we can point you in the right direction to find the resources you need. The first thing to stress here is that, mathematically speaking, all forms of maximum likelihood involve maximising a function over an input set. For this reason, the subject falls within the general field of mathematical optimisation. For continuous distributions this optimisation is done using standard calculus methods and for discrete distributions it is done using discrete calculus (or sometimes direct optimisation methods). Now, there are some particular "tricks" that are commonly used in the context of maximum likelihood analysis. For example, most (but not all) maximum likelihood problems involve maximising a function that is a product of a large number of non-negative parts, and so we frequently start by taking logarithms and working in log-space. Nevertheless, at the end of the day this is still just mathematical optimisation applied in a particular context. If you want to get good at it in general, and be able to solve problems that do not conform neatly to standard cases, it is a good idea to give yourself a broad education in the field of mathematical optimisation. The field of mathematical optimisation is absolutely huge; you could probably fill a small library with books and papers on the topic. Nevertheless, there are some obvious places to start and some further places to progress once you have mastered the basics. Over the long-term, I recommend something like the following curriculum:

Unconstrained univariate optimisation: Start by learning how to maximise a differentiable univariate function over an "unconstrained" input set (i.e., an input that can be any real number). This is done by looking at the first and second derivatives of the function. There are many introductory calculus textbooks that cover this material in detail.

Unconstrained multivariate optimisation: Once you are comfortable with optimisation of univariate functions, learn how to maximise a differentiable multivariate function over an "unconstrained" input set (i.e., an input that can be any vector of real numbers). This is done by looking at the multivariate versions of the first and second derivatives of the function (called the gradient vector and Hessian matrix). Most introductory calculus textbooks will cover multivariate (or at least bivariate) calculus after they cover univariate calculus.

Optimisation of composite functions: Once you are comfortable with the basics of unconstrained optimisation, learn to use the chain rules to solve optimisation problems involving composite functions (i.e., functions of functions). In particular, learn the univariate and multivariate chain rules for the first two derivatives and get used to using these to derive the first two derivatives of composite functions so that you can optimise them using standard methods. The goal here is to be able to make transformations of your input parameters and still be able to comfortably optimise the function you are working with.

Constrained univariate and multivariate optimisation: Once you are comfortable with unconstrained optimisation, and optimisation of composite functions, learn how to maximise a differentiable function over an input set that is constrained by one or more non-linear equations or inequalities. This is the field of non-linear programming. There are several common techniques in this field, including transformation of input variables (creating composite functions), direct analysis using Lagrangian methods or the Karush-Kuhn-Tucker method, and methods using "penalty functions". Practice using each of these methods on some tricky constrained optimisation problems. Over time you will learn which of these methods is the simplest to apply to different types of constrained optimisation problems, and you will be able to derive solutions using alternative methods.

Discrete optimisation: Discrete optimisation is generally treated as a separate subject to optimisation for continuous functions, but there are some clear parallels. Discrete optimisation is usually covered in books on discrete mathematics. It requires you to learn about the difference operators (analogous to differentiation of continuous functions) and discrete calculus. This is something that is relatively easy to understand if you already have a good grounding in standard calculus methods for continuous functions. However, it is something that should be studied in its own right. Again, you should start by looking at unconstrained univariate optimisation, then unconstrained multivariate optimisation, then constrained optimisation, etc. Once you have mastered the basics you can look at some standard discrete optimisation problems in computational theory (e.g., the knapsack problem, bin-packing problem, change-making problem, etc.). If you are really keen you can also start to look at some of the theory relating to computational complexity of these problems.

Numerical/simulation methods: Once you have a good grounding in the underlying theory of mathematical optimisation, and you are comfortable with both constrained and unconstrained problems involving univariate or multivariate functions, you can then examine some numerical optimisation methods. This includes dynamic programming, MCMC methods, simulated annealing, evolutionary/genetic algorithms, etc. Once you get to this point you are getting into specialist territory, but it is nice to have a rough idea of how these methods work (and ideally have the ability to program a few of them if needed).
As regards the specifics of maximum-likelihood (ML) and restricted maximum-likelihood (REML), these become extremely simple to understand once you have obtained a strong underlying background in mathematical optimisation. Most general textbooks on probability and statistics will have a section on estimation that will include ML as one of the main estimation methods. Resources discussing REML are less common, but I will give you some papers here that may help.

Maximum-likelihood (ML) estimation: Maximum-likelihood estimation is an applied example of mathematical optimisation, where you are maximising a joint density of some data with respect to one or more parameters. If there is one parameter then this is a univariate optimisation problem and if there is more than one parameter it is a multivariate optimisation problem. It is usually (but not always) the case that the objective function of interest (the joint density) can be written as a product of non-negative parts. In particular, in the case of conditionally independent data, the objective function will be a product of density functions for each individual data point. Consequently, we usually take logarithms and maximise the "log-likelihood function". The maximisation itself proceeds using standard methods, but there are particular names for things in this context --- e.g., we call the first derivative of the log-likelihood the "score function" and the negative of the second derivative the "information function". Maximum likelihood estimation is covered in virtually all textbooks that cover statistical estimation.

Profile-likelihood (PL) estimation: The profile likelihood is used in some multivariate ML problems when we maximise the multivariate function one parameter at a time. One general optimisation technique that can be used in these cases is to derive the form of the MLE for a single parameter (written as a function of the data and other parameters) and then substitute this maximised parameter value back into the original likelihood function to obtain a partially maximised version of the function that no longer has that parameter. We call this partially maximised version of the likelihood function the "profile-likelihood" function. Understanding the use of this function really just requires you to be familiar with optimisation of multivariate functions using this one-at-a-time method. Profile likelihood is mentioned in some statistical textbooks in the context of finding the MLE in multivariate problems. In any case, it is really just something that arises when using a particular technique for multivariate optimisation.

Restricted maximum-likelihood (REML) estimation: This is a variant of maximum-likelihood estimation that involves an attempt to estimate a parameter in a distribution while treating one or more other parameters as "nuisance parameters". REML is frequently used (and illustrated) when estimating variance components in the presence of an unknown mean. The original paper introducing the method is Bartlett (1937) and it was applied to a range of problems in Harville (1977). In Corbeil and Searle (2012) REML was used to estimate variance components in mixed models. You can find a simple introduction to this topic in Zhang (2015).
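To make the ML/REML contrast concrete, here is a minimal Python sketch (the function name is mine, chosen for illustration): for i.i.d. normal data with an unknown mean, the ML estimate of the variance divides the centred sum of squares by n, whereas the REML estimate, which adjusts for the mean being a nuisance parameter, divides by n - 1.

```python
def ml_and_reml_variance(data):
    # ML: maximise the normal log-likelihood jointly in (mu, sigma^2),
    # which yields the divisor n. REML: maximise the likelihood of the
    # mean-free contrasts of the data, which yields the divisor n - 1.
    n = len(data)
    mean = sum(data) / n
    ss = sum((x - mean) ** 2 for x in data)
    return ss / n, ss / (n - 1)
```

For data = [1, 2, 3, 4, 5] this gives 2.0 (ML) and 2.5 (REML); the ML estimator is biased downwards precisely because the mean had to be estimated from the same data.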
32,193
Maximum likelihood estimation, Restricted maximum likelihood estimation and Profile likelihood estimation
This reference will describe the REML method (Section 17.4): http://www.utstat.toronto.edu/~brunner/books/LinearModelsInStatistics.pdf You do not need a reference for the MLEs. The MLEs are found by maximizing the log-likelihood, a standard calculus result. I discuss the profile MLEs in my comment.
32,194
Estimating $f(\mathbb{E}[X])$ with a guaranteed error performance
C1: Yes, by the argument for C2.

C2: Yes, this is possible. The key result here is that any continuous function $f$ on $[0,1]$ is also uniformly continuous: for any $\epsilon$, there is some $\gamma$ such that if $|x_1-x_2|<\gamma$ then $|f(x_1)-f(x_2)|<\epsilon$. So it is enough to have a probability of $1-\delta$ of approximating $E[X]$ to within $\gamma$, and then applying $f$ will give a probability of $1-\delta$ of approximating $f(E[X])$ to within $\epsilon$.

C3: No algorithm will work for all functions of the class C3. If we had an algorithm, we could use it on a discontinuous function $f$ to tell whether $E[X]>\frac12$, but there are always borderline cases that won't work with the desired probability. In more detail, suppose the user specifies:

$f(x) = 1[x>\frac12]$, i.e. $1$ if $x>\frac12$, or $0$ otherwise;
$\epsilon=\frac15$ as the accuracy -- so getting $f$ to an accuracy of $\epsilon$ is the same as getting $f$ exactly;
$\delta=\frac14$, i.e. a probability of $\frac34$ of achieving this accuracy.

Suppose the algorithm replies that either:

$n$ observations are required, or
with probability of at least $1-\frac\delta2$, at most $n$ observations are required.

Now consider a Bernoulli variable $X$ which is equally likely to be 0 or 1. Let $S$ be the sample mean of $X$ after $n$ observations. Then $$P\left[S=\frac12\right]=\binom{n}{n/2}2^{-n}\simeq\sqrt{\frac2{\pi n}}$$ So there is close to a 50% chance that $S>\frac12$, and close to a 50% chance that $S<\frac12$. There is no way to have a $\frac34$ chance of getting the right answer, or even a $\frac34-\frac18$ chance of getting the right answer when we allow a $\frac18$ possibility of not enough data. With a slightly longer argument, we could show that the same pattern holds when $X$ is Bernoulli and $E[X]$ is anything in the range $\frac12\pm\sqrt{n}/16$. 
So even if we allow a $\frac\delta2$ probability that $n$ observations are insufficient, there is still no way to get $f(E[X])$ which will be accurate to within $\epsilon$ the needed $1-\delta$ of the time.
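The binomial probability used in the C3 argument is easy to check numerically; the following Python sketch (function names are mine) compares the exact value of $P[S=\frac12]$ with the stated approximation $\sqrt{2/(\pi n)}$ for even $n$.

```python
import math

def prob_tied_mean(n):
    # exact P[S = 1/2] for the mean of n fair Bernoulli draws (n even)
    return math.comb(n, n // 2) * 2.0 ** (-n)

def normal_approx(n):
    # the Stirling-type approximation quoted in the answer
    return math.sqrt(2 / (math.pi * n))
```

For n = 100 the exact probability is about 0.0796 against the approximation 0.0798, so a tie, and hence a coin-flip over which side of $\frac12$ the sample mean lands on, remains quite likely even for moderately large samples.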
32,195
Estimating $f(\mathbb{E}[X])$ with a guaranteed error performance
Elaborating on my comment to the OP, I'll give an explicit answer when $f$ is continuous. Let $\psi(\epsilon) = \sup\{\psi : |x - y| < \psi \Longrightarrow |f(x) - f(y)| < \epsilon\}$. Because $f$ is continuous on $[0,1]$ it is also uniformly continuous, implying that $\psi(\epsilon) \downarrow 0$ as $\epsilon \downarrow 0$. By Hoeffding's inequality we have $$ \Pr(|f(\bar X) - f(E(X))| \ge \epsilon) \le \Pr(|\bar X - E(X)| \ge \psi(\epsilon)) \le 2\exp\{-2n\psi(\epsilon)^2\}, $$ where $\bar X = \frac{1}{n} \sum_i X_i$. We can make this less than $\delta$ by taking $$ n \ge \frac{\log(2/\delta)}{2 \psi(\epsilon)^2}. $$ In the special case where $f$ is Lipschitz continuous, i.e., $|f(x) - f(y)| \le L |x - y|$, we have $\psi(\epsilon) \ge \epsilon / L$, and the bound simplifies to $$ n \ge \frac{L^2 \log(2 / \delta)}{2 \epsilon^2}. $$ This can be weakened to allow for non-Lipschitz functions; for example, you get a similar bound by assuming $|f(x) - f(y)| \le L |x - y|^\alpha$ for some $\alpha \le 1$ (i.e., that $f$ is Hölder continuous), which should cover just about every continuous function you might care about, at least on $[0,1]$.
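For the Lipschitz case, the final bound translates directly into a sample-size calculator; this small Python sketch (the function name is mine) returns the smallest integer n satisfying $n \ge L^2 \log(2/\delta) / (2\epsilon^2)$.

```python
import math

def hoeffding_sample_size(eps, delta, L=1.0):
    # smallest n with 2*exp(-2*n*(eps/L)**2) <= delta, i.e. the
    # Hoeffding bound above specialised to an L-Lipschitz f
    return math.ceil(L ** 2 * math.log(2 / delta) / (2 * eps ** 2))
```

For example, with L = 1, eps = 0.1 and delta = 0.05 this gives n = 185. Note the quadratic blow-up in 1/eps but only logarithmic growth in 1/delta.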
32,196
Confused when to use Population vs Sample standard deviation in engineering testing
The two forms of standard deviation are relevant to two different types of variability. One is the variability of values within a set of numbers and one is an estimate of the variability of a population from which a sample of numbers has been drawn. The population standard deviation is relevant where the numbers that you have in hand are the entire population, and the sample standard deviation is relevant where the numbers are a sample of a much larger population. For any given set of numbers the sample standard deviation is larger than the population standard deviation because there is extra uncertainty involved: the uncertainty that results from sampling. See this for a bit more information: Intuitive explanation for dividing by $n-1$ when calculating standard deviation? For an example, the population standard deviation of 1,2,3,4,5 is about 1.41 and the sample standard deviation is about 1.58.
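The closing example can be reproduced directly with Python's standard library, where pstdev computes the population form (divisor n) and stdev the sample form (divisor n - 1):

```python
from statistics import pstdev, stdev

data = [1, 2, 3, 4, 5]
pop_sd = pstdev(data)   # divides by n:     sqrt(10/5) ≈ 1.41
samp_sd = stdev(data)   # divides by n - 1: sqrt(10/4) ≈ 1.58
```

As the answer notes, the sample form is always the larger of the two for any fixed set of more than one number.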
32,197
Confused when to use Population vs Sample standard deviation in engineering testing
My question is similar to pnd1987's question. I wish to use a standard deviation in order to appraise the repeatability of a measurement. Suppose I'm measuring one stable thing over and over. A perfect measuring instrument (with a perfect operator) would give the same number over and over. Instead there is variation, and let's assume there's a normal distribution about the mean. We'd like to appraise the measurement repeatability by the SD of that normal distribution. But we take just N measurements at a time, and hope the SD of those N can estimate the SD of the normal distribution. As N increases, sampleSD and populationSD both converge to the distribution's SD, but for small N, like 5, we get only weak estimates of the distribution's SD. PopulationSD gives an obviously worse estimate than sampleSD, because when N=1 populationSD gives the ridiculous value 0, while sampleSD is correctly indeterminate. However, sampleSD does not correctly estimate the distribution's SD. That is, if we measure N times and take the sampleSD, then measure another N times and take the sampleSD, over and over, and average all the sampleSDs, that average does not converge to the distribution's SD. For N=5, it converges to around 0.94× the distribution's SD. (There must be a little theorem here.) SampleSD doesn't quite do what it is said to do. If the measurement variation is normally distributed, then it would be very nice to know the distribution's SD. For example, we can then determine how many measurements to take in order to tolerate the variation. Averages of N measurements are also normally distributed, but with a standard deviation 1/sqrt(N) times the original distribution's. Note added: the theorem is not so little -- Cochran's Theorem
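The 0.94 factor observed above has a closed form for normal data: E[sampleSD] = c4(N)·σ, where c4(N) = sqrt(2/(N-1))·Γ(N/2)/Γ((N-1)/2). A Python sketch (the function name is mine):

```python
import math

def c4(n):
    # bias factor of the sample SD for normal data: E[S] = c4(n) * sigma
    return math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)
```

c4(5) ≈ 0.9400, matching the factor reported above, and c4(N) approaches 1 as N grows, so the bias matters mainly for small samples.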
32,198
Distribution of the pooled variance in paired samples
I'm not sure about a reference for this result, but it is possible to derive it relatively easily, so I hope that suffices. One way to approach this problem is to look at it as a problem involving a quadratic form taken on a normal random vector. The pooled sample variance can be expressed as a quadratic form of this kind, and these quadratic forms are generally approximated using the chi-squared distribution (with exact correspondence in some cases). Derivation of the result: In order to show where your assumptions come into the derivation, I will do the first part of the derivation without assuming equal variances for the two groups. If we denote your vectors by $\mathbf{X} = (X_1,...,X_n)$ and $\mathbf{Y} = (Y_1,...,Y_n)$ then your stipulated problem gives the joint normal distribution: $$\begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \end{bmatrix} \sim \text{N} (\boldsymbol{\mu}, \mathbf{\Sigma} ) \quad \quad \quad \boldsymbol{\mu} = \begin{bmatrix} \mu_X \mathbf{1} \\ \mu_Y \mathbf{1} \end{bmatrix} \quad \quad \quad \mathbf{\Sigma} = \begin{bmatrix} \sigma_X^2 \mathbf{I} & \rho \sigma_X \sigma_Y \mathbf{I} \\ \rho \sigma_X \sigma_Y \mathbf{I} & \sigma_Y^2 \mathbf{I} \end{bmatrix}.$$ Letting $\mathbf{C}$ denote the $n \times n$ centering matrix, you can write the pooled sample variance in this problem as the quadratic form: $$\begin{align} S_\text{pooled}^2 &= \begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \end{bmatrix}^\text{T} \mathbf{A} \begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \end{bmatrix} \quad \quad \quad \mathbf{A} \equiv \frac{1}{2(n-1)} \begin{bmatrix} \mathbf{C} & \mathbf{0} \\ \mathbf{0} & \mathbf{C} \end{bmatrix}. 
\\[6pt] \end{align}$$ Now, using standard formulae for the mean and variance of quadratic forms of normal random vectors, and noting that $\mathbf{C}$ is an idempotent matrix (i.e., $\mathbf{C} = \mathbf{C}^2$), you have: $$\begin{align} \mathbb{E}(S_\text{pooled}^2) &= \text{tr}(\mathbf{A} \mathbf{\Sigma}) + \boldsymbol{\mu}^\text{T} \mathbf{A} \boldsymbol{\mu} \\[6pt] &= \text{tr} \Bigg( \frac{1}{2(n-1)} \begin{bmatrix} \sigma_X^2 \mathbf{C} & \rho \sigma_X \sigma_Y \mathbf{C} \\ \rho \sigma_X \sigma_Y \mathbf{C} & \sigma_Y^2 \mathbf{C} \end{bmatrix} \Bigg) + \mathbf{0} \\[6pt] &= \frac{1}{2(n-1)} \text{tr} \Bigg( \begin{bmatrix} \sigma_X^2 \mathbf{C} & \rho \sigma_X \sigma_Y \mathbf{C} \\ \rho \sigma_X \sigma_Y \mathbf{C} & \sigma_Y^2 \mathbf{C} \end{bmatrix} \Bigg) \\[6pt] &= \frac{1}{2(n-1)} \Bigg[ n \times \frac{n-1}{n} \cdot \sigma_X^2 + n \times \frac{n-1}{n} \cdot \sigma_Y^2 \Bigg] \\[6pt] &= \frac{\sigma_X^2 + \sigma_Y^2}{2}, \\[12pt] \mathbb{V}(S_\text{pooled}^2) &= 2 \text{tr}(\mathbf{A} \mathbf{\Sigma} \mathbf{A} \mathbf{\Sigma}) + 4 \boldsymbol{\mu}^\text{T} \mathbf{A} \mathbf{\Sigma} \mathbf{A} \boldsymbol{\mu} \\[6pt] &= 2 \text{tr} \Bigg( \frac{1}{4(n-1)^2} \begin{bmatrix} \sigma_X^2 \mathbf{C} & \rho \sigma_X \sigma_Y \mathbf{C} \\ \rho \sigma_X \sigma_Y \mathbf{C} & \sigma_Y^2 \mathbf{C} \end{bmatrix}^2 \Bigg) + \mathbf{0} \\[6pt] &= \frac{1}{2(n-1)^2} \text{tr} \Bigg( \begin{bmatrix} (\sigma_X^4 + \rho^2 \sigma_X^2 \sigma_Y^2) \mathbf{C} & (\sigma_X^2 + \sigma_Y^2) \rho \sigma_X \sigma_Y \mathbf{C} \\ (\sigma_X^2 + \sigma_Y^2) \rho \sigma_X \sigma_Y \mathbf{C} & (\sigma_Y^4 + \rho^2 \sigma_X^2 \sigma_Y^2) \mathbf{C} \end{bmatrix} \Bigg) \\[6pt] &= \frac{1}{2(n-1)^2} \Bigg[ n \times \frac{n-1}{n} \cdot (\sigma_X^4 + \rho^2 \sigma_X^2 \sigma_Y^2) + n \times \frac{n-1}{n} \cdot (\sigma_Y^4 + \rho^2 \sigma_X^2 \sigma_Y^2) \Bigg] \\[6pt] &= \frac{1}{2(n-1)} \Bigg[ (\sigma_X^4 + \rho^2 \sigma_X^2 \sigma_Y^2) + (\sigma_Y^4 + \rho^2 \sigma_X^2 
\sigma_Y^2) \Bigg] \\[6pt] &= \frac{\sigma_X^4 + \sigma_Y^4 + 2 \rho^2 \sigma_X^2 \sigma_Y^2}{2(n-1)}. \\[12pt] \end{align}$$ Using the equal variance assumption we have $\sigma_X = \sigma_Y = \sigma$ so the moments reduce to: $$\mathbb{E} \bigg( \frac{S_\text{pooled}^2}{\sigma^2} \bigg) = 1 \quad \quad \quad \mathbb{V} \bigg( \frac{S_\text{pooled}^2}{\sigma^2} \bigg) = \frac{1+\rho^2}{n-1}.$$ It is usual to approximate the distribution of the quadratic form by a scaled chi-squared distribution using the method of moments. Equating the first two moments to that distribution gives the variance requirement $\mathbb{V}(S_\text{pooled}^2/\sigma^2) = 2/\nu$, which then gives the degrees-of-freedom parameter: $$\nu = \frac{2(n-1)}{1+\rho^2}.$$ Bear in mind that the degrees-of-freedom parameter here depends on the true correlation coefficient $\rho$, and you may need to estimate this using the sample correlation in your actual problem.
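The final formula is simple enough to wrap in a helper; the sketch below (the function name is mine) returns the moment-matched degrees of freedom, with two special cases worth remembering: independent pairs (ρ = 0) give ν = 2(n - 1), and perfectly correlated pairs (ρ = ±1) give ν = n - 1.

```python
def effective_df(n, rho):
    # moment-matched chi-squared degrees of freedom for the pooled
    # variance of n correlated pairs, per nu = 2(n - 1)/(1 + rho^2)
    return 2 * (n - 1) / (1 + rho ** 2)
```

In practice ρ is unknown, so, as noted above, the sample correlation would be plugged in here.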
Distribution of the pooled variance in paired samples
I'm not sure about a reference for this result, but it is possible to derive it relatively easily, so I hope that suffices. One way to approach this problem is to look at it as a problem involving a
Distribution of the pooled variance in paired samples I'm not sure about a reference for this result, but it is possible to derive it relatively easily, so I hope that suffices. One way to approach this problem is to look at it as a problem involving a quadratic form taken on a normal random vector. The pooled sample variance can be expressed as a quadratic form of this kind, and these quadratic forms are generally approximated using the chi-squared distribution (with exact correspondence in some cases). Derivation of the result: In order to show where your assumptions come into the derivation, I will do the first part of the derivation without assuming equal variances for the two groups. If we denote your vectors by $\mathbf{X} = (X_1,...,X_n)$ and $\mathbf{Y} = (Y_1,...,Y_n)$ then your stipulated problem gives the joint normal distribution: $$\begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \end{bmatrix} \sim \text{N} (\boldsymbol{\mu}, \mathbf{\Sigma} ) \quad \quad \quad \boldsymbol{\mu} = \begin{bmatrix} \mu_X \mathbf{1} \\ \mu_Y \mathbf{1} \end{bmatrix} \quad \quad \quad \mathbf{\Sigma} = \begin{bmatrix} \sigma_X^2 \mathbf{I} & \rho \sigma_X \sigma_Y \mathbf{I} \\ \rho \sigma_X \sigma_Y \mathbf{I} & \sigma_Y^2 \mathbf{I} \end{bmatrix}.$$ Letting $\mathbf{C}$ denote the $n \times n$ centering matrix, you can write the pooled sample variance in this problem as the quadratic form: $$\begin{align} S_\text{pooled}^2 &= \begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \end{bmatrix}^\text{T} \mathbf{A} \begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \end{bmatrix} \quad \quad \quad \mathbf{A} \equiv \frac{1}{2(n-1)} \begin{bmatrix} \mathbf{C} & \mathbf{0} \\ \mathbf{0} & \mathbf{C} \end{bmatrix}. 
\\[6pt] \end{align}$$ Now, using standard formulae for the mean and variance of quadradic forms of normal random vectors, and noting that $\mathbf{C}$ is an idempotent matrix (i.e., $\mathbf{C} = \mathbf{C}^2$), you have: $$\begin{align} \mathbb{E}(S_\text{pooled}^2) &= \text{tr}(\mathbf{A} \mathbf{\Sigma}) + \boldsymbol{\mu}^\text{T} \mathbf{A} \boldsymbol{\mu} \\[6pt] &= \text{tr} \Bigg( \frac{1}{2(n-1)} \begin{bmatrix} \sigma_X^2 \mathbf{C} & \rho \sigma_X \sigma_Y \mathbf{C} \\ \rho \sigma_X \sigma_Y \mathbf{C} & \sigma_Y^2 \mathbf{C} \end{bmatrix} \Bigg) + \mathbf{0} \\[6pt] &= \frac{1}{2(n-1)} \text{tr} \Bigg( \begin{bmatrix} \sigma_X^2 \mathbf{C} & \rho \sigma_X \sigma_Y \mathbf{C} \\ \rho \sigma_X \sigma_Y \mathbf{C} & \sigma_Y^2 \mathbf{C} \end{bmatrix} \Bigg) \\[6pt] &= \frac{1}{2(n-1)} \Bigg[ n \times \frac{n-1}{n} \cdot \sigma_X^2 + n \times \frac{n-1}{n} \cdot \sigma_Y^2 \Bigg] \\[6pt] &= \frac{\sigma_X^2 + \sigma_Y^2}{2}, \\[12pt] \mathbb{V}(S_\text{pooled}^2) &= 2 \text{tr}(\mathbf{A} \mathbf{\Sigma} \mathbf{A} \mathbf{\Sigma}) + 4 \boldsymbol{\mu}^\text{T} \mathbf{A} \mathbf{\Sigma} \mathbf{A} \boldsymbol{\mu} \\[6pt] &= 2 \text{tr} \Bigg( \frac{1}{4(n-1)^2} \begin{bmatrix} \sigma_X^2 \mathbf{C} & \rho \sigma_X \sigma_Y \mathbf{C} \\ \rho \sigma_X \sigma_Y \mathbf{C} & \sigma_Y^2 \mathbf{C} \end{bmatrix}^2 \Bigg) + \mathbf{0} \\[6pt] &= \frac{1}{2(n-1)^2} \text{tr} \Bigg( \begin{bmatrix} (\sigma_X^4 + \rho^2 \sigma_X^2 \sigma_Y^2) \mathbf{C} & (\sigma_X^2 + \sigma_Y^2) \rho \sigma_X \sigma_Y \mathbf{C} \\ (\sigma_X^2 + \sigma_Y^2) \rho \sigma_X \sigma_Y \mathbf{C} & (\sigma_Y^4 + \rho^2 \sigma_X^2 \sigma_Y^2) \mathbf{C} \end{bmatrix} \Bigg) \\[6pt] &= \frac{1}{2(n-1)^2} \Bigg[ n \times \frac{n-1}{n} \cdot (\sigma_X^4 + \rho^2 \sigma_X^2 \sigma_Y^2) + n \times \frac{n-1}{n} \cdot (\sigma_Y^4 + \rho^2 \sigma_X^2 \sigma_Y^2) \Bigg] \\[6pt] &= \frac{1}{2(n-1)} \Bigg[ (\sigma_X^4 + \rho^2 \sigma_X^2 \sigma_Y^2) + (\sigma_Y^4 + \rho^2 \sigma_X^2 
\sigma_Y^2) \Bigg] \\[6pt] &= \frac{\sigma_X^4 + \sigma_Y^4 + 2 \rho^2 \sigma_X^2 \sigma_Y^2}{2(n-1)}. \\[12pt] \end{align}$$ Using the equal variance assumption we have $\sigma_X = \sigma_Y = \sigma$ so the moments reduce to: $$\mathbb{E} \bigg( \frac{S_\text{pooled}^2}{\sigma^2} \bigg) = 1 \quad \quad \quad \mathbb{V} \bigg( \frac{S_\text{pooled}^2}{\sigma^2} \bigg) = \frac{1+\rho^2}{n-1}.$$ It is usual to approximate the distribution of the quadratic form by a scaled chi-squared distribution using the method of moments. Equating the first two moments to that distribution gives the variance requirement $\mathbb{V}(S_\text{pooled}^2/\sigma^2) = 2/\nu$, which then gives the degrees-of-freedom parameter: $$\nu = \frac{2(n-1)}{1+\rho^2}.$$ Bear in mind that the degrees-of-freedom parameter here depends on the true correlation coefficient $\rho$, and you may need to estimate this using the sample correlation in your actual problem.
Distribution of the pooled variance in paired samples
32,199
Paired vs independent t-test for A/B test with underlying trends
Pairing of some kind seems crucial because you want to compare Truck A on Wednesdays with Truck B on Wednesdays. However, as you say, a cyclic sales pattern may tend to be non-normal (but see the Note at the end). To have pairing without concern over normality, you might use a paired Wilcoxon test. It seems especially appropriate because the weekly distributional pattern will be similar for the two trucks.

Fake data for just one week, and a paired Wilcoxon test, in R:

x1 = c(120, 75, 80, 70, 85, 82, 130)
x2 = c(130, 89, 91, 79, 93, 99, 142)  # consistently higher
wilcox.test(x1, x2, paired = TRUE)

        Wilcoxon signed rank test

data:  x1 and x2
V = 0, p-value = 0.01563
alternative hypothesis: true location shift is not equal to 0

The null hypothesis that the two trucks have similar sales is rejected with P-value 0.016 < 0.05, even though there is a weekly trend of higher sales on Sun and Sat.

A two-sample Wilcoxon test without pairing does not detect that the second truck has consistently higher sales. [There is a warning message about ties (not shown here), so the P-value may not be exactly correct.]

wilcox.test(x1, x2)$p.val
[1] 0.1792339

Note: In judging normality for a paired t test, it is the paired differences that should be tested for normality. They may not show as pronounced a weekly pattern as do sales by individual trucks.
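As a cross-check outside R (assuming SciPy is available), the same two tests on the same fake data are scipy.stats.wilcoxon (paired signed-rank) and scipy.stats.mannwhitneyu (unpaired rank-sum). The paired test gives the identical exact P-value, and the unpaired test again fails to detect the shift:

```python
from scipy import stats

# Same fake data as the R example above
x1 = [120, 75, 80, 70, 85, 82, 130]
x2 = [130, 89, 91, 79, 93, 99, 142]   # consistently higher

# Paired signed-rank test: detects the shift despite the weekly trend.
# All 7 differences are negative, so the signed-rank statistic is 0 and
# the exact two-sided P-value is 2/2^7 = 0.015625, matching R's 0.01563.
res_paired = stats.wilcoxon(x1, x2)
print(res_paired.pvalue)

# Unpaired rank-sum test: the weekly variation swamps the shift (P > 0.05)
res_unpaired = stats.mannwhitneyu(x1, x2)
print(res_unpaired.pvalue)
```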
32,200
Paired vs independent t-test for A/B test with underlying trends
One approach might be to use a mixed model with an indicator for day plus a random effect for truck ID. This way, you can account for any truck-level variation and assess the effect of the treatment via an indicator. This is feasible especially if you have lots of data to make up for the degrees of freedom used by the indicators.

Here is an example of how this might be performed. I have 10 trucks, and each truck's sales are measured over the course of a week. We assume that each truck has some differences due to the driver (or something else; maybe one truck is newer and more attractive than older ones, who knows). The hypothesized intervention increases sales by 2 units. Here is a plot of the data where each line is for a specific truck, with colors indicating treatment group.

A linear mixed effect model for this data may look like

library(lme4)
model = lmer(sales ~ factor(ndays) + trt + (1|truck), data = design)

The test you care about is the test for the trt variable, assuming you hypothesize additive effects (sales increase by the same amount on each day, not just on weekends). Here is a plot of the model fit for each truck, with the data plotted over the fit with some opacity.

Finally, I'm sure there is a way to do this without mixed effect models. In my own opinion, regression is a natural way to think of these sorts of comparisons, but a cleverly computed t-test is likely capable of accomplishing the same thing. Think of this approach as the most straightforward (in so far as it directly considers the generative process), but perhaps not the easiest or even best.
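To illustrate the "cleverly computed t-test" alternative mentioned at the end: since every truck is observed on all seven days, the day effects cancel when you average within trucks, and a two-sample t-test on the per-truck means targets the same treatment effect. The sketch below (Python with NumPy/SciPy) uses the setup from the answer (10 trucks, one week, a +2 unit treatment effect); the weekly pattern and the noise/random-effect SDs are assumed values for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trucks, n_days, effect = 10, 7, 2.0       # setup from the answer; effect = +2 units
day_effect = np.array([8., 2., 2.5, 1.5, 3., 3.5, 9.])  # assumed pattern: high Sun/Sat
truck_re = rng.normal(0, 0.5, size=n_trucks)             # assumed truck-level effects
trt = np.repeat([0, 1], n_trucks // 2)                   # 5 control, 5 treated trucks

# sales[i, j] = baseline + day effect + truck effect + treatment + noise
sales = (20 + day_effect[None, :] + truck_re[:, None]
         + effect * trt[:, None] + rng.normal(0, 0.5, size=(n_trucks, n_days)))

# Averaging within trucks cancels the shared day effects, so a two-sample
# t-test on the per-truck means estimates the treatment effect directly.
truck_means = sales.mean(axis=1)
diff = truck_means[trt == 1].mean() - truck_means[trt == 0].mean()
t_stat, p_val = stats.ttest_ind(truck_means[trt == 1], truck_means[trt == 0])
print(diff, p_val)   # diff should be near 2; p_val small
```

Note that this collapses to truck-level means because treatment is assigned at the truck level; the mixed model uses the same information but also estimates the day effects and the variance components explicitly.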