What is cross-validation?
"Cross-validating" refers to attempts to check ("validate") whether the outcome of a statistical analysis will generalise to another sample. Typically, this involves running the same analysis ("cross") on comparable subsets of your data: e.g., you might divide your dataset into two (a "training" set and a "validation" set) and run the analysis on both. An overfitted model is one that adequately, albeit artificially, describes your current dataset but performs poorly on as-yet-unseen (but equivalent) datasets. This can happen if your model was derived in an ad hoc fashion. Cross-validation is one way of guarding against overfitting.
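As a minimal sketch of the idea (Python, simulated data; the sample sizes and polynomial degrees are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends linearly on x plus noise.
x = rng.uniform(-1, 1, 60)
y = 2.0 * x + rng.normal(0, 0.5, 60)

# Divide the dataset into a "training" set and a "validation" set.
train, valid = np.arange(40), np.arange(40, 60)

def fit_and_score(degree):
    # Fit a polynomial on the training subset only ...
    coefs = np.polyfit(x[train], y[train], degree)
    # ... then run the same error measure on both subsets.
    mse = lambda idx: np.mean((np.polyval(coefs, x[idx]) - y[idx]) ** 2)
    return mse(train), mse(valid)

for degree in (1, 15):
    tr, va = fit_and_score(degree)
    print(f"degree {degree:2d}: train MSE {tr:.3f}, validation MSE {va:.3f}")
```

The flexible degree-15 model always fits the training subset at least as well as the simple one, while typically doing worse on the held-out subset: the signature of overfitting that cross-validation is designed to detect.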
Instrumental variables equivalent representation
When I am trying to figure out why a result is true, I usually look at it in the simplest possible case. So, let's try a bivariate system. Here is the structural system: \begin{align} y_i &= \beta x_i + \epsilon_i\\ x_i &= \delta z_i + \nu_i \end{align} Here, everything has a zero mean, and $z$ is a valid instrument for $x$. For 2SLS and CF, we run auxiliary regressions like: \begin{align} y_i &= \beta_\text{2SLS} \hat{x}_i + \epsilon_{2i}\\ y_i &= \beta_\text{CF} x_i + \rho \hat{\nu}_i + \epsilon_{3i} \end{align} Now, the 2SLS estimator will be: \begin{align} \hat{\beta}_\text{2SLS} &= \frac{\sum y_i \hat{x}_i}{\sum (\hat{x}_i)^2} \end{align} By the Frisch-Waugh-Lovell theorem, the CF estimator will be: \begin{align} \hat{\beta}_\text{CF} &= \frac{\sum e_{y|\hat{\nu},i} e_{x|\hat{\nu},i}}{\sum (e_{x|\hat{\nu},i})^2} \end{align} In that expression, $e_{y|\hat{\nu}}$ denotes the residuals from a regression of $y$ on $\hat{\nu}$. If you regress $x$ on $\hat{\nu}$, you get a coefficient of 1 (since $x = \hat{x} + \hat{\nu}$ and $\sum \hat{x}\hat{\nu} = 0$), and the residuals from that regression are $\hat{x}$. So, the denominators of the two fractions are the same, and the CF numerator reduces to $\sum e_{y|\hat{\nu},i}\, \hat{x}_i$. What happens when you regress $y$ on $\hat{\nu}$? The residuals from that regression are: \begin{align} y-\left(\frac{\sum y\hat{\nu}}{\sum(\hat{\nu})^2}\right)\hat{\nu} \end{align} So, the conclusion follows from the fact that $\sum \hat{\nu}\hat{x}=0$. It should not be hard to generalize this to the case with an intercept and more right-hand-side variables.
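You can also check the equivalence numerically. The sketch below (Python; the coefficients and simulated data are hypothetical) fits both auxiliary regressions on the same draw and compares the two estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Simulate the bivariate structural system with an endogenous x:
z = rng.normal(size=n)
nu = rng.normal(size=n)
eps = 0.8 * nu + rng.normal(size=n)   # correlated with nu => x is endogenous
x = 0.7 * z + nu
y = 2.0 * x + eps

# First stage: regress x on z (no intercept; everything has zero mean).
delta_hat = (z @ x) / (z @ z)
x_hat = delta_hat * z
nu_hat = x - x_hat

# 2SLS: regress y on the first-stage fitted values.
beta_2sls = (y @ x_hat) / (x_hat @ x_hat)

# Control function: regress y on x and nu_hat jointly.
design = np.column_stack([x, nu_hat])
beta_cf = np.linalg.lstsq(design, y, rcond=None)[0][0]

print(beta_2sls, beta_cf)   # identical up to floating-point error
```

The two estimates agree to machine precision on any draw, exactly as the algebra above requires.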
Instrumental variables equivalent representation
Here is a more general argument for the multivariate case to compute $\widehat{\delta}_{2SLS}$: 1. Regress the regressors $Z$ on the instruments $X$ and save the residuals $\widetilde{Z}:=M_{X}Z=Z-X(X'X)^{-1}X'Z$. 2. Regress $y$ on $Z$ and $\widetilde{Z}$, $$ y=Z\widehat{\delta}+\widetilde{Z}\widehat{\theta}+\widehat{u} $$ Recall that the FWL theorem lets us obtain the subvector of coefficients on the variables of "interest" in a long regression by regressing the residuals of the dependent variable (after partialling out the remaining, "non-interesting" explanatory variables) on the residuals of the variables of interest (after partialling out the same variables). We thus apply FWL to the regression in step 2 to show that $\widehat{\delta}_{\text{2SLS}}=\widehat{\delta}$: \begin{eqnarray*} \widehat{\delta}&=&(Z'M_{\widetilde{Z}}Z)^{-1}Z'M_{\widetilde{Z}}y\\ &=&(Z'(I-P_{\widetilde{Z}})Z)^{-1}Z'(I-P_{\widetilde{Z}})y \end{eqnarray*} Now, \begin{eqnarray*} P_{\widetilde{Z}}&=&M_{X}Z(Z'M_{X}'M_{X}Z)^{-1}Z'M_{X}\\ &=&M_{X}Z(Z'M_{X}Z)^{-1}Z'M_{X} \end{eqnarray*} so that $$ (I-P_{\widetilde{Z}})Z=Z-M_{X}Z(Z'M_{X}Z)^{-1}Z'M_{X}Z=Z-M_{X}Z=P_{X}Z $$ such that $$ \widehat{\delta}=(Z'P_{X}Z)^{-1}Z'P_{X}y, $$ which is the 2SLS estimator.
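A quick numerical check of this identity, keeping the answer's notation ($Z$ = regressors, $X$ = instruments; Python, with made-up dimensions and coefficients):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 500, 2

# Instruments X and endogenous regressors Z (the answer's notation).
X = rng.normal(size=(n, 3))
V = rng.normal(size=(n, k))
Z = X[:, :k] + V
y = Z @ np.array([1.0, -2.0]) + V[:, 0] + rng.normal(size=n)

P_X = X @ np.linalg.solve(X.T @ X, X.T)   # projector onto col(X)
Z_tilde = Z - P_X @ Z                     # residuals M_X Z from step 1

# delta from the long regression of y on [Z, Z_tilde] in step 2 ...
coef = np.linalg.lstsq(np.column_stack([Z, Z_tilde]), y, rcond=None)[0]
delta_long = coef[:k]

# ... equals the 2SLS formula (Z' P_X Z)^{-1} Z' P_X y.
delta_2sls = np.linalg.solve(Z.T @ P_X @ Z, Z.T @ P_X @ y)
print(delta_long, delta_2sls)
```

The two coefficient vectors coincide up to floating-point error, as the projection algebra requires.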
How do residuals relate to the underlying disturbances?
The simplest way to think about it is that your raw residuals ($e_j = y_j-\hat y_j$) are estimates of the corresponding disturbances ($\hat\varepsilon_j = e_j$). However, there are some extra complexities. For example, although we assume in the standard OLS model that the errors / disturbances are independent, the residuals cannot all be. In general, only $N-p-1$ residuals can be independent, since estimating the mean model uses up $p+1$ degrees of freedom (one per slope plus one for the intercept); one consequence is that the residuals are constrained to sum to $0$. In addition, the standard deviation of the raw residuals is not actually constant. In general, the regression line is fitted such that it will be closer, on average, to those points with greater leverage. As a result, the standard deviation of the residuals for those points is smaller than that of low-leverage points. (For more on this, it may help to read my answers here: Interpreting plot.lm(), and/or here: How to perform residual analysis for binary/dichotomous independent predictors in linear regression?)
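Both constraints are easy to see numerically. A sketch (Python, simulated data; the dimensions are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 3

# OLS fit with an intercept and p slopes.
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ rng.normal(size=p + 1) + rng.normal(size=n)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta

# The residuals satisfy p + 1 linear constraints, X'e = 0; the first of
# these (the intercept column) is exactly the sum-to-zero constraint.
print(e.sum())           # numerically zero

# Residual variances shrink with leverage: Var(e_i) = sigma^2 (1 - h_ii),
# so high-leverage points have systematically smaller residuals.
H = X @ np.linalg.solve(X.T @ X, X.T)
h = np.diag(H)
print(h.min(), h.max())
```

Only $N - p - 1$ of the residuals can vary freely once those $p+1$ constraints are imposed.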
How do residuals relate to the underlying disturbances?
The relationship between $\hat{\varepsilon}$ and $\varepsilon$ is: $$\hat{\varepsilon} = (I-H) \varepsilon$$ where $H$, the hat matrix, is $X(X^TX)^{-1}X^T$. Which is to say that $\hat{\varepsilon}_i$ is a linear combination of all the errors, but typically most of the weight falls on the $i$-th one. Here's an example, using the cars data set in R. Consider the point marked in purple: let's call it point $i$. Its residual is $\hat{\varepsilon}_i\approx 0.98\varepsilon_i +\sum_{j\neq i} w_j \varepsilon_j$, where the $w_j$ for the other errors are in the region of $-0.02$. We can rewrite that as $\hat{\varepsilon}_i\approx 0.98\varepsilon_i +\eta_i$, or more generally $$\hat{\varepsilon}_i= (1-h_{ii})\varepsilon_i +\eta_i$$ where $h_{ii}$ is the $i$-th diagonal element of $H$. Similarly, the $w_j$'s above are $-h_{ij}$. If the errors are iid $N(0,\sigma^2)$, then in this example the weighted sum of those other errors will have a standard deviation corresponding to about 1/7th the effect of the error of the $i$-th observation on its residual. Which is to say, in well-behaved regressions, residuals can mostly be treated as a moderately noisy estimate of the unobservable error term. As we consider points further from the center, things work somewhat less nicely (the residual puts less weight on the point's own error, and the weights on the other errors become less even). With many parameters, or with $X$'s not so nicely distributed, the residuals may be much less like the errors. You may like to try some examples.
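The identity $\hat{\varepsilon} = (I-H)\varepsilon$ is easy to verify numerically. A sketch (Python; a simulated simple regression stands in for the cars data):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50

# Simple regression with known (simulated) errors eps.
X = np.column_stack([np.ones(n), rng.normal(size=n)])
eps = rng.normal(size=n)
y = X @ np.array([1.0, 2.0]) + eps

H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix X (X'X)^{-1} X'
resid = y - H @ y                        # OLS residuals

# The residuals equal (I - H) eps exactly, because (I - H) X = 0:
print(np.max(np.abs(resid - (np.eye(n) - H) @ eps)))
```

Because $(I-H)X = 0$, the systematic part $X\beta$ drops out of the residuals entirely, leaving only the weighted combination of errors described above.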
Reproduce linear discriminant analysis projection plot
The discriminant axis (the one onto which the points are projected in your Figure 1) is given by the first eigenvector of $\mathbf{W}^{-1}\mathbf{B}$. In case of only two classes this eigenvector is proportional to $\mathbf{W}^{-1}(\mathbf{m}_1-\mathbf{m}_2)$, where $\mathbf{m}_i$ are class centroids. Normalize this vector (or the obtained eigenvector) to get the unit axis vector $\mathbf{v}$. This is enough to draw the axis. To project the (centred) points onto this axis, you simply compute $\mathbf{X}\mathbf{v}\mathbf{v}^\top$. Here $\mathbf{v}\mathbf{v}^\top$ is a linear projector onto $\mathbf{v}$. Here is the data sample from your dropbox and the LDA projection. Here is MATLAB code to produce this figure (as requested):

% # data taken from your example
X = [-0.9437 -0.0433; -2.4165 -0.5211; -2.0249 -1.0120; ...
    -3.7482 0.2826; -3.3314 0.1653; -3.1927 0.0043; -2.2233 -0.8607; ...
    -3.1965 0.7736; -2.5039 0.2960; -4.4509 -0.3555];
G = [1 1 1 1 1 2 2 2 2 2];

% # overall mean
mu = mean(X);

% # loop over groups
for g=1:max(G)
    mus(g,:) = mean(X(G==g,:));    % # class means
    Ng(g) = length(find(G==g));    % # number of points per group
end

Sw = zeros(size(X,2));             % # within-class scatter matrix
Sb = zeros(size(X,2));             % # between-class scatter matrix
for g=1:max(G)
    Xg = bsxfun(@minus, X(G==g,:), mus(g,:));    % # centred group data
    Sw = Sw + transpose(Xg)*Xg;
    Sb = Sb + Ng(g)*(transpose(mus(g,:) - mu)*(mus(g,:) - mu));
end
St = transpose(bsxfun(@minus,X,mu)) * bsxfun(@minus,X,mu);   % # total scatter matrix
assert(sum(sum((St-Sw-Sb).^2)) < 1e-10, 'Error: Sw + Sb ~= St')

% # LDA
[U,S] = eig(Sw\Sb);

% # projecting data points onto the first discriminant axis
Xcentred = bsxfun(@minus, X, mu);
Xprojected = Xcentred * U(:,1)*transpose(U(:,1));
Xprojected = bsxfun(@plus, Xprojected, mu);

% # preparing the figure
colors = [1 0 0; 0 0 1];
figure
hold on
axis([-5 0 -2.5 2.5])
axis square

% # plot discriminant axis
plot(mu(1) + U(1,1)*[-2 2], mu(2) + U(2,1)*[-2 2], 'k')

% # plot projection lines for each data point
for i=1:size(X,1)
    plot([X(i,1) Xprojected(i,1)], [X(i,2) Xprojected(i,2)], 'k--')
end

% # plot projected points
scatter(Xprojected(:,1), Xprojected(:,2), [], colors(G, :))

% # plot original points
scatter(X(:,1), X(:,2), [], colors(G, :), 'filled')
Reproduce linear discriminant analysis projection plot
And "my" solution. Many thanks to @ttnphns and @amoeba!

set.seed(2014)

library(MASS)
library(DiscriMiner)  # For scatter matrices
library(ggplot2)
library(grid)

# Generate multivariate data
mu1 <- c(2, -3)
mu2 <- c(2, 5)
rho <- 0.6
s1 <- 1
s2 <- 3
Sigma <- matrix(c(s1^2, rho * s1 * s2, rho * s1 * s2, s2^2), byrow = TRUE, nrow = 2)
n <- 50

# Multivariate normal sampling
X1 <- mvrnorm(n, mu = mu1, Sigma = Sigma)
X2 <- mvrnorm(n, mu = mu2, Sigma = Sigma)
X <- rbind(X1, X2)

# Center data
Z <- scale(X, scale = FALSE)

# Class variable
y <- rep(c(0, 1), each = n)

# Scatter matrices
B <- betweenCov(variables = X, group = y)
W <- withinCov(variables = X, group = y)

# Eigenvectors
ev <- eigen(solve(W) %*% B)$vectors
slope <- - ev[1,1] / ev[2,1]
intercept <- ev[2,1]

# Create projections on 1st discriminant
P <- Z %*% ev[,1] %*% t(ev[,1])

# ggplot2 requires a data frame
my.df <- data.frame(Z1 = Z[, 1], Z2 = Z[, 2], P1 = P[, 1], P2 = P[, 2])

plt <- ggplot(data = my.df, aes(Z1, Z2))
plt <- plt + geom_segment(aes(xend = P1, yend = P2), size = 0.2, color = "gray")
plt <- plt + geom_point(aes(color = factor(y)))
plt <- plt + geom_point(aes(x = P1, y = P2, colour = factor(y)))
plt <- plt + scale_colour_brewer(palette = "Set1")
plt <- plt + geom_abline(intercept = intercept, slope = slope, size = 0.2)
plt <- plt + coord_fixed()
plt <- plt + xlab(expression(X[1])) + ylab(expression(X[2]))
plt <- plt + theme_bw()
plt <- plt + theme(axis.title.x = element_text(size = 8),
                   axis.text.x = element_text(size = 8),
                   axis.title.y = element_text(size = 8),
                   axis.text.y = element_text(size = 8),
                   legend.position = "none")
plt
Clustering a noisy data or with outliers
As your data seems to be composed of Gaussian mixtures, try Gaussian mixture modelling (a.k.a. EM clustering). This should yield results far superior to k-means on this type of data. If your "noise" is uniformly distributed, you can also add a uniform component to your mixture model. If your data is much less clean, consider using DBSCAN, mean shift, OPTICS, HDBSCAN*, ... - density-based clustering seems to be appropriate for this data. DBSCAN in particular is very tolerant of noise (the "N" in its name stands for noise).
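As an illustration of the mixture idea, here is a bare-bones EM fit of a two-component 1D Gaussian mixture to data containing some uniform noise (Python/NumPy; a sketch only - the cluster locations are invented, and in practice you would use a library implementation such as sklearn.mixture.GaussianMixture):

```python
import numpy as np

rng = np.random.default_rng(5)

# Two Gaussian clusters plus a sprinkle of uniform "noise" points.
data = np.concatenate([
    rng.normal(-3, 0.5, 200),
    rng.normal(3, 0.5, 200),
    rng.uniform(-8, 8, 20),
])

# Minimal EM for a two-component 1D Gaussian mixture.
mu = np.array([-1.0, 1.0])        # initial means
sigma = np.array([1.0, 1.0])      # initial standard deviations
pi = np.array([0.5, 0.5])         # initial mixing weights

for _ in range(50):
    # E-step: responsibility of each component for each point.
    dens = pi * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations.
    nk = resp.sum(axis=0)
    pi = nk / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)

print(np.sort(mu))   # close to the true centres -3 and 3
```

Despite the noise points, the recovered means land near the true cluster centres; adding an explicit uniform component to the mixture would absorb the noise even more cleanly.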
Clustering a noisy data or with outliers
I recommend that you look at this article. The authors propose a robust method in which the outliers are removed and the rest of the data is clustered; that is why they call the method "trimming". There was also an R package, tclust, but according to this it was removed from CRAN. Anyway, the article is worth reading.
What is the difference between a multi-label and a multi-class classification? [duplicate]
Based on the sentence you quoted, each item belongs to one class but can have several labels. Imagine you have animals like a fox, a chicken and a common European viper. A multi-class classification problem would be assigning each to a family:

Fox: Canidae
Chicken: Phasianidae
Viper: Viperidae

In phylogeny, any species has exactly one family (that's by design), so an animal cannot belong to more than one family. A multi-label classification problem would be assigning them arbitrary characteristics:

Fox: warm-blooded, furred
Chicken: warm-blooded, feathered
Viper: cold-blooded

Each animal can have several labels, and the labels do not form a set of mutually exclusive categories.
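In code, the distinction shows up in the target representation. A small sketch (Python; the animals and traits mirror the example above):

```python
# Multi-class: each animal gets exactly one family label.
families = {"fox": "Canidae", "chicken": "Phasianidae", "viper": "Viperidae"}

# Multi-label: each animal gets a *set* of labels, possibly several.
traits = {
    "fox": {"warm-blooded", "furred"},
    "chicken": {"warm-blooded", "feathered"},
    "viper": {"cold-blooded"},
}

# A common encoding for multi-label targets is a multi-hot vector:
# one 0/1 entry per possible label, with no mutual-exclusivity constraint.
all_traits = sorted({t for ts in traits.values() for t in ts})
multi_hot = {a: [int(t in ts) for t in all_traits] for a, ts in traits.items()}
print(all_traits)        # ['cold-blooded', 'feathered', 'furred', 'warm-blooded']
print(multi_hot["fox"])  # [0, 0, 1, 1]
```

A multi-class target would instead be a single integer (or one-hot vector) per animal, with exactly one position set.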
Difference between log-normal distribution and logging variables, fitting normal
Reference $$ x \sim \log \mathcal{N}(\mu, \sigma^2) \quad \text{if} \quad p(x) = \frac{1}{x \sqrt{2\pi} \sigma} e^{- \frac{\left( \log(x) - \mu\right)^2}{2\sigma^2}}, \quad x > 0, $$ where $$ \text{E}[x] = e^{\mu + \frac{1}{2}\sigma^2}. $$ Note that $$ y \sim \log \mathcal{N}(m, v^2) \iff \log(y) \sim \mathcal{N}(m, v^2), $$ per this Q&A. Answer: Is fitting a normal distribution to logged data equivalent to fitting a log-normal distribution to the original data? Theoretically? In most situations, yes (see the logical equivalence above). The only case I found where it was useful to use the log-normal distribution explicitly was a case study of pollution data. In that instance, it was important to model weekdays and weekends differently in terms of pollution concentration ( $\mu_1 > \mu_2$ in the prior*), but have the expected values of the two log-normal distributions without restriction (I had to allow $e^{\mu_1 + \frac{1}{2}\sigma_1^2} \le e^{\mu_2 + \frac{1}{2}\sigma_2^2}$). Which day each measurement was taken was unknown, so the separate parameters had to be inferred. You could certainly argue that this could be done without invoking the log-normal distribution, but this is what we decided to use and it worked. "I tried to test this out with some toy data and realized I don't even know why the meanlog associated with a log-normal distribution is NOT what you get when you take the mean of the logged normal distribution." The reason for this is just a consequence of our notion of distance on the support. Since $\log$ is a monotone increasing function, log-transforming variables preserves order. For example, the median of the log-normal distribution is just $e^\mu$, the exponential of the median of the log-values (since the normal distribution's mean is also its median). However, the $\log$ function only preserves order, not distance. Means are all about distance: the mean is the point that, when points are weighted by their probabilities, minimizes the average squared (Euclidean) distance to all other points. All the log-values are compressed towards $0$ in an uneven way (i.e., larger values are compressed more). In fact, the log of the mean of the log-normal distribution is higher than the mean of the log-values (i.e., $\mu$) by $\frac{1}{2}\sigma^2$: $$ \log \left(e^{\mu + \frac{1}{2} \sigma^2} \right) = \mu + \frac{1}{2} \sigma^2 > \mu. $$ That is, the mean of the log-values is pulled down, as a function of the spread of the distribution (through $\sigma$), because the $\log$ function compresses distances unevenly. *As a side note, these kinds of artificial constraints in priors tend to under-perform other methods for inferring/separating distributions.
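The median/mean distinction is easy to confirm by simulation (Python; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma = 1.0, 0.8

# Draw from a log-normal with log-scale parameters (mu, sigma).
x = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

# Median is exp(mu): log preserves order, so it maps medians to medians.
print(np.median(x), np.exp(mu))

# Mean is exp(mu + sigma^2 / 2), strictly larger than exp(mu).
print(x.mean(), np.exp(mu + sigma**2 / 2))

# Fitting a normal to log(x) recovers (mu, sigma) directly.
print(np.log(x).mean(), np.log(x).std())
```

The gap between the sample mean and $e^\mu$ grows with $\sigma$, exactly the $\frac{1}{2}\sigma^2$ term in the exponent.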
31,512
Difference between log-normal distribution and logging variables, fitting normal
I'd like to add to the answer that you correctly understand the relation between the log-transformation and the mean. But lnorm uses the natural logarithm, not base 10. Also, in the fitdist call you are trying to fit a lognormal distribution to already log-transformed data (see the corrected code below).

library(fitdistrplus)
set.seed(1)
test <- rnorm(1000, mean=100)
test[test<=0] <- NA  # Unnecessary since no values <= 0, but just to prove
test <- na.omit(test)
log.test <- log(test)
mean(log.test)  # 4.605 is the mean of log.test
sd(log.test)    # 0.01035277 is the sd of log.test
fitdist(test, dist="lnorm", method="mle")
# "meanlog" is 4.6050002, "sdlog" is 0.0103476
fitdist(log.test, dist="norm", method="mle")
# "mean" is 4.6050002, "sd" is 0.0103476
31,513
EM algorithm Practice Problem
The complete data likelihood should not involve $G$! It should simply be the likelihood of $\theta$ when the $X$'s are exponential. Note that the complete data likelihood as you have it written simplifies to an exponential likelihood since only one of the $G_{rj}$'s can be 1. Leaving the $G$'s in the complete data likelihood, however, messes you up later on. In part (d) you should be taking the expectation of the complete data log likelihood, not the observed data log likelihood. Also, you should not be using the law of total expectation! Recall that $G$ is observed and is not random, so you should only be performing one of those conditional expectations for each $X_j$. Simply replace this conditional expectation by the term $X_j^{(i)}$ and then perform the M-step.
31,514
EM algorithm Practice Problem
Based on @jsk's comments I will try to remedy my mistakes: $\begin{align*} L(\theta|X,G) &= \prod_{j=1}^n \theta e^{-\theta x_j} \end{align*}$ $\begin{align*} Q(\theta,\theta^i) &= n\log{\theta} - \theta\sum_{j=1}^n \text{E}\left[X_j|G,\theta^i\right]\\ &= n\log{\theta} - \theta\left(\dfrac{\sum_{j=1}^n g_{1j}}{1-e^{-\theta^i}}\right)\left(\dfrac{1}{\theta^i} - e^{-\theta^i}(1+1/\theta^i)\right) - \theta\left(\dfrac{\sum_{j=1}^n g_{2j}}{e^{-\theta^i}(1-e^{-\theta^i})}\right)\left(e^{-\theta^i}(1+1/\theta^i)-e^{-2\theta^i}(2+1/\theta^i)\right) - \theta\left(\dfrac{\sum_{j=1}^n g_{3j}}{e^{-2\theta^i}}\right)\left(e^{-2\theta^i}(2+1/\theta^i)\right)\\ &= n\log{\theta} - \theta N_1 A - \theta N_2 B - \theta N_3 C\\ \dfrac{\partial Q(\theta,\theta^i)}{\partial \theta} &= \dfrac{n}{\theta} - N_1A-N_2B - N_3C \overset{set}{=}0 \end{align*}$ where $N_r = \sum_{j=1}^n g_{rj}$ and $A$, $B$, $C$ abbreviate the three bracketed conditional expectations. Solving for $\theta$ we get $\theta^{(i+1)} = \dfrac{n}{N_1A+N_2B+N_3C}$
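This update can be sanity-checked numerically. The sketch below (Python, not part of the original derivation) takes the three bins implied by the expectations above to be $[0,1)$, $[1,2)$, and $[2,\infty)$, and feeds in the *expected* bin proportions for a chosen $\theta_{\text{true}}$, so the EM fixed point should be $\theta_{\text{true}}$ itself; the starting value is arbitrary:

```python
import math

def e_step_expectations(theta):
    """Conditional means E[X | X in bin] for bins [0,1), [1,2), [2,inf)
    when X ~ Exp(theta), matching A, B, C in the derivation above."""
    et, e2t = math.exp(-theta), math.exp(-2 * theta)
    A = (1 / theta - et * (1 + 1 / theta)) / (1 - et)
    B = (et * (1 + 1 / theta) - e2t * (2 + 1 / theta)) / (et - e2t)
    C = 2 + 1 / theta
    return A, B, C

def em(N1, N2, N3, theta=1.0, iters=200):
    """Iterate theta <- n / (N1*A + N2*B + N3*C), the M-step above."""
    n = N1 + N2 + N3
    for _ in range(iters):
        A, B, C = e_step_expectations(theta)
        theta = n / (N1 * A + N2 * B + N3 * C)
    return theta

# Deterministic check with expected bin proportions for theta_true
theta_true = 1.3
p1 = 1 - math.exp(-theta_true)
p2 = math.exp(-theta_true) - math.exp(-2 * theta_true)
p3 = math.exp(-2 * theta_true)
print(em(p1, p2, p3))  # converges to ~1.3
```

Note also that by memorylessness of the exponential, $B = A + 1$, which is a handy check on the algebra.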
31,515
Building a time series model using more than independent variables
Regarding questions (1), (3), and (4): yes, there are a lot of options for modelling multivariate time series, and this is absolutely something you can accomplish with R. You said that you don't have much experience with statistics, so I'm not sure how familiar you are with R (if at all), but a possible approach would be to use the R package "dynlm":

## You'll need these packages
install.packages("dynlm", dependencies=TRUE)
library(dynlm)
if(is.element("zoo", installed.packages()[,1])){
  library(zoo)
} else {
  install.packages("zoo", dependencies=TRUE)
  library(zoo)
}

## Generating some nonsense data for demonstration
## 104 dates, 1 week apart
d1 <- as.Date("01/01/2012", format='%m/%d/%Y')
dSeq <- seq.Date(from=d1, by='week', length.out=104)

## Dependent variable
Y <- rnorm(104, 50, 10) + rnorm(104, 10, 1)*cos((1:104)/6)

## Independent variable for temperature
Temp <- rnorm(104, 10, 1) + cos((1:104)/12)

## Dummy variable for holidays (just picked a few off the calendar)
Holiday <- rep(0, 104)
Holiday[c(3, 3+52, 8, 8+52, 22, 22+52, 47, 47+52, 52, 52+52)] <- 1
Holiday <- ifelse(Holiday==0, "N", "Y")

## Make a data.frame to hold variables
aDF <- data.frame(Date=dSeq, Y=Y, Temp=Temp, Holiday=Holiday)

## Make a time series version of this with the "zoo" function
## for use with the dynamic linear model
zDF <- aDF
zDF[,2] <- zoo(aDF[,2], aDF[,1])
zDF[,3] <- zoo(aDF[,3], aDF[,1])
zDF[,4] <- zoo(aDF[,4], aDF[,1])

## A possible DLM... type ?dynlm for details of the function
dlm1 <- dynlm(Y ~ L(Y,1) + L(Y,13) + Temp + Holiday, data=zDF)

## Model summary
summary(dlm1)

## Estimated coefficients:
coefficients(dlm1)

Like I said, this is just one of many possibilities for analyzing multivariate time series in R; but to be honest, if you are "totally new to statistics" and not working on this particular project with someone who has experience with DLMs or similar models, I highly suggest reading through Forecasting: Principles and Practice by Rob Hyndman and George Athanasopoulos. It's a free online book written by two very knowledgeable econometricians and a significant amount of the content is geared towards people with little or no formal background in statistics / forecasting methods. Here's a link: https://www.otexts.org/fpp. On a related note, if you are going to be regularly working with time series data in R, I would suggest installing Hyndman's R package forecast, which is phenomenally useful. Additionally, your second question about deciding which independent variables have a more significant impact on sales is not something which can be succinctly answered. A typical modelling process involves a lot of steps related to diagnostic checking and evaluation of goodness-of-fit, and the tools for accomplishing such tasks can vary greatly depending on which type of statistical model you are using. Unfortunately, if you are brand new to statistics you will almost certainly have to invest a decent amount of time into understanding some important technical aspects of modelling, because there is much more to consider than the correlation of two variables, for example. This is another reason that I recommend reading through Hyndman and Athanasopoulos' online book, as it addresses a wide variety of fundamental aspects involved in the forecasting process.
31,516
Markov Switching and Hidden Markov Models
A hidden Markov model is a bivariate stochastic process $\{Y_t, X_t\}_{t=1,2,...}$ where $\{X_t\}$ is an unobserved Markov chain and, conditional on $\{X_t\}$, $\{Y_t\}$ is an observed sequence of independent random variables such that the conditional distribution of $Y_t$ only depends on $X_t$. When $Y_t$ depends both on $X_t$ and the lagged observations $Y_{t-1}$, it is called a Markov switching model. An, Y., Hu, Y., & Shum, M. (2013). Identifiability and inference of hidden Markov models. Technical report.
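The distinction can be made concrete with a small simulation (a Python sketch with made-up parameters: the same hidden chain drives both series, but only the second feeds $Y_{t-1}$ back in):

```python
import numpy as np

rng = np.random.default_rng(42)
T = 500
P = np.array([[0.95, 0.05],   # transition matrix of the hidden chain X_t
              [0.10, 0.90]])
means = np.array([-1.0, 2.0])  # state-dependent means

# Simulate the hidden Markov chain
X = np.zeros(T, dtype=int)
for t in range(1, T):
    X[t] = rng.choice(2, p=P[X[t - 1]])

# HMM: conditional on the states, Y_t depends only on X_t
Y_hmm = means[X] + rng.normal(size=T)

# Markov switching (autoregressive): Y_t depends on X_t AND Y_{t-1}
phi = np.array([0.3, 0.8])     # regime-specific AR(1) coefficients
Y_ms = np.zeros(T)
for t in range(1, T):
    Y_ms[t] = means[X[t]] + phi[X[t]] * Y_ms[t - 1] + rng.normal()
```

In the first series the observations are conditionally independent given the states; in the second, the lagged observation enters the conditional mean, which is exactly the extra dependence that makes it a Markov switching model.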
31,517
Markov Switching and Hidden Markov Models
Markov Switching Models are the same thing as Regime Switching Models. A Hidden Markov Switching Model or a Hidden Regime Switching Model (both of which are commonly called a Hidden Markov Model) is different. A Hidden Markov Model (HMM) is a doubly stochastic process. There is an underlying stochastic process that is not observable (hidden), the results of which can be observed (these results being the second stochastic process). The underlying stochastic process that is hidden is what makes this model different. Consider this coin toss example starting on page 5: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1165342 If the man behind the curtain flips 1 coin and tells you the results, you have 2 states (state 1: heads, state 2: tails). Thus, by knowing the results of each coin flip, you can determine the state. Therefore, you cannot construct a HMM. If the man behind the curtain flips 2 coins and tells you the results, you still have 2 states (state 1: coin 1, state 2: coin 2) but they are not uniquely tied to heads or tails. This is a hidden process because you can't tell which state (coin 1 or coin 2) led to the observation (heads or tails). Therefore, you can construct a HMM.
31,518
Can full conditionals determine the joint distribution?
This seemingly simple question is deeper than it looks, leading us all the way to the Hammersley-Clifford theorem. The fact that we can recover the joint distribution from the full conditionals is what makes the Gibbs sampler possible. It may be seen as a surprising result, if we remember that the marginals do not determine the joint distribution. Let's see what happens if we compute formally with the well-known definitions of the joint, conditionals and marginals densities. Since $$ f_{X,Y}(x,y)=f_{X\mid Y}(x\mid y)\,f_Y(y)=f_{Y\mid X}(y\mid x)\,f_X(x) \, , $$ we have $$ \int \frac{f_{Y\mid X}(y\mid x)}{f_{X\mid Y}(x\mid y)}dy = \int \frac{f_Y(y)}{f_X(x)}dy = \frac{1}{f_X(x)} \, , $$ and we can formally recover the joint density from the full conditionals making $$ f_{X,Y}(x,y) = \frac{f_{Y\mid X}(y\mid x)}{\int f_{Y\mid X}(y\mid x)/f_{X\mid Y}(x\mid y)\,dy} \, . \qquad (*) $$ The problem with this formal computation is that it supposes that all the involved objects do exist. For instance, consider what happens if we are given that $$ X\mid Y=y\sim\text{Exp}(y) \qquad \text{and} \qquad Y\mid X=x\sim\text{Exp}(x) \, . $$ It follows that $f_{Y\mid X}(y\mid x)/f_{X\mid Y}(x\mid y) = x /y$, and the integral in the denominator of $(*)$ diverges. To guarantee that we can recover the joint density from the full conditionals using $(*)$ we need the compatibility conditions discussed in this paper: "Compatible Conditional Distributions", Barry C. Arnold and S. James Press, Journal of the American Statistical Association, Vol. 84, No. 405 (1989), pp. 152-156. Finally, read the discussion on the Hammersley-Clifford Theorem in Robert and Casella's book
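Formula $(*)$ can be verified numerically for a compatible pair of conditionals. The sketch below (Python/NumPy; the bivariate standard normal with correlation $\rho$ is chosen because both full conditionals are known in closed form) recovers the joint density at one point from the conditionals alone:

```python
import numpy as np

rho = 0.6
s2 = 1 - rho**2  # conditional variance of the bivariate standard normal

def norm_pdf(z, m, v):
    return np.exp(-(z - m)**2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def f_y_given_x(y, x): return norm_pdf(y, rho * x, s2)
def f_x_given_y(x, y): return norm_pdf(x, rho * y, s2)

# Recover the joint at (x0, y0) from the two full conditionals via (*)
x0, y0 = 0.5, -0.3
grid = np.linspace(-10, 10, 20001)
vals = f_y_given_x(grid, x0) / f_x_given_y(x0, grid)
denom = np.sum(vals[:-1] + vals[1:]) * (grid[1] - grid[0]) / 2  # trapezoid rule
joint_star = f_y_given_x(y0, x0) / denom

# True standard bivariate normal density with correlation rho
true_joint = np.exp(-(x0**2 - 2*rho*x0*y0 + y0**2) / (2*s2)) / (2*np.pi*np.sqrt(s2))
print(joint_star, true_joint)
```

Here the integral in the denominator converges, so the compatibility conditions hold and the two values agree; replacing the conditionals with the incompatible exponential pair above would make the denominator diverge.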
31,519
Calculating 95% CI of a proportion of 0%
The presumption is that the trait is possible, just not observed. So, sensible methods will give lower limits that are 0 and upper limits that are positive, depending on the exact assumptions. Most good software will do this for you, but different methods will give different results. With 0/72 observed, my favourite software gives upper 95% limits for the observed proportion that are variously .0499441, .0506512, .0341694, .0606849, depending on which method you use. This may seem surprising, but very competent statisticians disagree on how best to formulate the problem. An excellent survey is Brown, L. D., T. T. Cai, and A. DasGupta. 2001. Interval estimation for a binomial proportion. Statistical Science 16: 101-133. If the trait is impossible, your confidence limits are identically zero.
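Two of the quoted upper limits are consistent with closed-form expressions (a Python sketch; with zero events the Clopper-Pearson "exact" limit reduces to $1-(\alpha/2)^{1/n}$ and the Wilson score limit to $z^2/(n+z^2)$ — the "rule of three" $3/n$ is a rougher quick approximation):

```python
import math

n = 72                 # 72 subjects, trait never observed (0 events)
alpha = 0.05           # for a two-sided 95% interval
z = 1.959963984540054  # 0.975 quantile of the standard normal

# Clopper-Pearson "exact" upper limit at x = 0: solve (1 - p)^n = alpha/2
cp_upper = 1 - (alpha / 2) ** (1 / n)

# Wilson score upper limit at p_hat = 0 collapses to z^2 / (n + z^2)
wilson_upper = z**2 / (n + z**2)

# "Rule of three" quick approximation
rule_of_three = 3 / n

print(cp_upper, wilson_upper, rule_of_three)
```

These reproduce two of the four values listed above (about 0.0499 and 0.0507); the others come from different formulations (e.g., Jeffreys or Agresti-Coull style intervals), which is exactly the disagreement the Brown, Cai, and DasGupta survey documents.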
31,520
What is the "equivalent" of normal distribution in an interval?
The problem is that the normal has so many properties that might lead someone to consider it natural for this or that problem that we're left to ponder which properties are most critical. While I here attempt to answer the question at face value, when choosing a distributional model on the unit interval (or indeed in any other case), I'd strongly urge considering the point in whuber's comment under the question. There's no really 'natural' candidate that's typically indexed by $\mu$ and $\sigma$, though there are two-parameter families on the unit interval which have a mean and variance that are functions of the more usual parameters.

The Beta distribution family

A very widely used family of two-parameter continuous distributions on the unit interval is the beta family. It should be possible to reparameterize in terms of $\mu$ and $\sigma$, but it would be considerably less 'nice' looking that way. $$f(x;\alpha,\beta) = \frac{1}{\text{B}(\alpha,\beta)} x^{\alpha-1}(1-x)^{\beta-1};\quad 0\leq x\leq 1,\ \alpha,\beta>0$$ It has $\mu = \frac{\alpha}{\alpha+\beta}$ and $\sigma^2 = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}\,$.

Maximum entropy distribution

The maximum entropy distribution with fixed mean and variance on a closed interval appears (via a theorem of Boltzmann's) to be a truncated normal. Truncated normals are sometimes used in various applications, and are indexed by $\mu$ and $\sigma$*, but I wouldn't say they were usually regarded as the most natural for many problems. *But beware! In the truncated normal written in the usual way, the parameters $\mu$ and $\sigma$ aren't the mean and standard deviation of the truncated variable, but of its untruncated parent. Interestingly, while the beta includes the uniform as a special case, the truncated normal includes it as a limiting case.
What is the "equivalent" of normal distribution in an interval?
The problem is that the normal has so many properties that might lead someone to consider it natural for this or that problem that we're left to ponder which properties are most critical. While I her
What is the "equivalent" of normal distribution in an interval? The problem is that the normal has so many properties that might lead someone to consider it natural for this or that problem that we're left to ponder which properties are most critical. While I here attempt to answer the question at face value, when choosing a distributional model on the unit interval (or indeed in any other case), I'd strongly urge considering the point in whuber's comment under the question. There's no really 'natural' candidate that's typically indexed by $\mu$ and $\sigma$, though there are two parameter families on the unit interval which have a mean and variance that are functions of the more usual parameters.   The Beta distribution family A very widely used family of two-parameter continuous distributions on the unit interval is the beta family. It should be possible to reparameterize in terms of $\mu$ and $\sigma$, but it would be considerably less 'nice' looking that way. $$f(x;\alpha,\beta) = \frac{1}{\text{B}(\alpha,\beta)} x^{\alpha-1}(1-x)^{\beta-1};\quad 0\leq x\leq 1,\alpha,\beta>0$$ It has $\mu = \frac{\alpha}{\alpha+\beta}$ and $\sigma^2 = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}\,$.   Maximum entropy distribution The maximum entropy distribution with fixed mean and variance on a closed interval appears (via a theorem of Boltzmann's) to be a truncated normal. Truncated normals are sometimes used in various applications, and are indexed by $\mu$ and $\sigma$*, but I wouldn't say they were usually regarded as the most natural for many problems.  * but beware! In the truncated normal written in the usual way, the parameters $\mu$ and $\sigma$ aren't the mean and standard deviation of the truncated variable, but of its untruncated parent. Interestingly, while the beta included the uniform as a special case, the truncated normal includes it as a limiting case.   
Given the phrasing of your question, those would be the most obvious candidates, and of those, the most widely applied is no doubt the beta family.
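Since the question asks for a family indexed by $\mu$ and $\sigma$, the beta reparameterization can be made explicit (a small Python sketch; it follows directly from the two moment formulas above and requires $\sigma^2 < \mu(1-\mu)$ for a valid distribution):

```python
def beta_from_mean_sd(mu, sigma):
    """Convert a desired mean/sd on (0,1) into beta shape parameters,
    using mu = a/(a+b) and var = a*b / ((a+b)^2 * (a+b+1))."""
    var = sigma**2
    if not 0 < var < mu * (1 - mu):
        raise ValueError("need 0 < sigma^2 < mu*(1-mu) for a valid beta")
    nu = mu * (1 - mu) / var - 1   # nu = a + b
    return mu * nu, (1 - mu) * nu

a, b = beta_from_mean_sd(0.3, 0.1)

# Round-trip: recover the mean and variance from (a, b)
mean = a / (a + b)
var = a * b / ((a + b)**2 * (a + b + 1))
print(a, b, mean, var**0.5)
```

For $\mu = 0.3$, $\sigma = 0.1$ this gives $\alpha = 6$, $\beta = 14$, illustrating why the $(\mu,\sigma)$ parameterization exists but looks less 'nice' than $(\alpha,\beta)$.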
31,521
Gridsearch for SVM parameter estimation
The reason for the exponential grid is that both C and gamma are scale parameters that act multiplicatively, so doubling gamma is as likely to have roughly as big an effect (but in the other direction) as halving it. This means that if we use a grid of approximately exponentially increasing values, there is roughly the same amount of "information" about the hyper-parameters obtained by the evaluation of the model selection criterion at each grid point. I usually search on a grid based on integer powers of 2, which seems to work out quite well (I am working on a paper on optimising grid search - if you use too fine a grid you can end up over-fitting the model selection criterion, so a fairly coarse grid turns out to be good for generalisation as well as computational expense). As to the wide range, unfortunately the optimal hyper-parameter values depend on the nature of the problem, and on the size of the dataset, and cannot be determined a priori. The reason for the large, apparently wasteful grid, is to make sure good values can be found automatically, with high probability. If computational expense is an issue, then rather than use grid search, you can use the Nelder-Mead simplex algorithm to optimise the cross-validation error. This is an optimisation algorithm that does not require gradient information, so it is pretty straightforward to use for any problem where grid-search is currently used. I'm not an R user, but Nelder-Mead is implemented in R via optim.
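The Nelder-Mead idea can be sketched in Python with scipy (the answer's own suggestion is R's optim). The quadratic below is a hypothetical stand-in for the cross-validation error surface; in practice the function would train an SVM with C = 2**x[0], gamma = 2**x[1] and return the CV error. Searching in log2 space mirrors the exponential grid:

```python
from scipy.optimize import minimize

# Hypothetical smooth stand-in for a CV-error surface, with its minimum
# at log2(C) = 3, log2(gamma) = -5, i.e. C = 8 and gamma = 2**-5.
def cv_error(x):
    log2_C, log2_gamma = x
    return (log2_C - 3.0) ** 2 + (log2_gamma + 5.0) ** 2

# Derivative-free simplex search from an arbitrary starting point
res = minimize(cv_error, x0=[0.0, 0.0], method="Nelder-Mead")
best_C, best_gamma = 2.0 ** res.x[0], 2.0 ** res.x[1]
```

Because no gradients are needed, the same pattern works whenever the CV error can be evaluated at a point, which is exactly the situation grid search handles.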
31,522
Gridsearch for SVM parameter estimation
This is called the "parameter tuning" issue for SVMs. One of the easiest approaches is to take the median of each parameter across the CV folds that yield the greatest levels of class prediction accuracy. Also, as a rule of thumb, use a simpler classifier to determine if your data are linearly separable. If k-nearest neighbor (kNN) or linear regression works better, then you shouldn't use a more expensive (computationally) approach like SVM. SVM can be easily overused, so make sure you evaluate linear regression, kNN, linear discriminant analysis, random forests, etc.
31,523
Centralization measures in weighted graphs
All centrality measures are dependent on the shape of your data. Laplacian centrality is a convincing measure of centrality for weighted graphs. Define a matrix to store our weights. $ W_{ij} = \left\{ \begin{array}{lr} w_{ij} & : i \neq j\\ 0 & : i = j \end{array} \right. $ Define a matrix whose diagonal holds the sum of the weights associated with each node. $ X_{ij} = \left\{ \begin{array}{lr} 0 & : i \neq j\\ \sum\limits_{k=1}^n W_{ik} & : i = j \end{array} \right. $ The Laplacian is then defined by $L = X - W$. We can define a property of the graph, the Laplacian energy, $E = \sum\limits_{i=1}^n\lambda_i^2$, where the $\lambda_i$ are the eigenvalues of the Laplacian. Rather than eigensolving our matrices, we can equivalently compute $E = \sum\limits_{i=1}^n x_i^2 + 2\sum\limits_{i<j}w_{ij}^2$, where $x_i = X_{ii}$. To define the importance of a particular node in a graph, we remove that node and calculate the drop in energy. Consider the following data, generated from an RBF kernel of 1000 multivariate normal observations centered at the origin with a standard deviation of unity. The indices are the same for both figures. The data was presorted according to the distance of each observation $\in \mathbb R^n$ from the origin. The importance of the Laplacian is beyond the scope of this answer. Laplacians are central to many deep theorems in spectral graph theory and many practical results in the literature of manifold learning and clustering. I'd highly recommend reading up on the subject if you think you'll be dealing with weighted graphs in the near future.
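A small numerical sketch in Python (numpy assumed; the graph size and weights are made up for illustration) that checks the energy identity and computes the removal-based centrality:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small symmetric weighted graph with zero diagonal
n = 6
W = rng.random((n, n))
W = np.triu(W, 1)
W = W + W.T

X = np.diag(W.sum(axis=1))   # diagonal matrix of node weight sums
L = X - W                    # weighted graph Laplacian

def laplacian_energy(W):
    """Closed form: sum of squared weight sums plus twice the squared weights."""
    x = W.sum(axis=1)
    return np.sum(x ** 2) + 2.0 * np.sum(np.triu(W, 1) ** 2)

# Energy via eigenvalues agrees with the closed form (E = trace(L^2))
lam = np.linalg.eigvalsh(L)
E_eig = np.sum(lam ** 2)
E_closed = laplacian_energy(W)

def laplacian_centrality(W):
    """Centrality of node i = drop in Laplacian energy when node i is removed."""
    E = laplacian_energy(W)
    cents = []
    for i in range(len(W)):
        keep = [j for j in range(len(W)) if j != i]
        cents.append(E - laplacian_energy(W[np.ix_(keep, keep)]))
    return np.array(cents)
```

The equivalence of the two energy formulas follows from $E = \operatorname{trace}(L^2)$: the diagonal of $L$ contributes $\sum_i x_i^2$ and the off-diagonal entries $-w_{ij}$ contribute $2\sum_{i<j} w_{ij}^2$.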
31,524
Centralization measures in weighted graphs
In order to answer this: Does anyone know how to calculate a centralization score in a weighted graph? Is there a good way to accomplish this in R with igraph or some other package? I think the best way to do this is using the library centiserve. This library contains a lot of centrality methods and one of them is Laplacian Centrality. And the best thing it works with igraph.
31,525
Expectation of Quotient of Sums of IID Random Variables (Cambridge University worksheet)
Spotting to add $n$ identical copies of $S_m/S_n$ is very clever! But some of us are not so clever, so it is nice to be able to "postpone" the Big Idea to a stage where it is more obvious what to do. Without knowing where to start, there seem to be a number of clues that symmetry could be really important (addition is symmetric and we have some summations, and iid variables have the same expectation so maybe they can be swapped around or renamed in useful ways). In fact the "hard" bit of this question is how to deal with the division, the operation which isn't symmetric. How can we exploit the symmetry of summation? From linearity of expectation we have: $\mathbb{E}(S_m/S_n) = \mathbb{E}\left(\frac{X_1 + \dots + X_m}{X_1 + \dots + X_n}\right) = \mathbb{E}\left(\frac{X_1}{X_1 + \dots + X_n}\right) + \dots + \mathbb{E}\left(\frac{X_m}{X_1 + \dots + X_n}\right)$ But then on symmetry grounds, given that the $X_i$ are iid and $m \le n$, all the terms on the right-hand side are the same! Why? Switch the labels of $X_i$ and $X_j$ for $i, j \le n$. Two terms in the denominator switch position but after reordering it still sums to $S_n$, whereas the numerator changes from $X_i$ to $X_j$. So $\mathbb{E}(X_i/S_n) = \mathbb{E}(X_j/S_n)$. Let's write $\mathbb{E}(X_i/S_n)=k$ for $1 \le i \le n$, and since there are $m$ such terms we have $\mathbb{E}(S_m/S_n) = mk$. It looks as if $k=1/n$, which would produce the correct result. But how to prove it? We know $k=\mathbb{E}\left(\frac{X_1}{X_1 + \dots + X_n}\right)=\mathbb{E}\left(\frac{X_2}{X_1 + \dots + X_n}\right)=\dots=\mathbb{E}\left(\frac{X_n}{X_1 + \dots + X_n}\right)$ It's only at this stage it dawned on me I should be adding these together, to obtain $nk = \mathbb{E}\left(\frac{X_1}{X_1 + \dots + X_n}\right) + \mathbb{E}\left(\frac{X_2}{X_1 + \dots + X_n}\right) + \dots + \mathbb{E}\left(\frac{X_n}{X_1 + \dots + X_n}\right) \implies nk = \mathbb{E}\left(\frac{X_1 + \dots + X_n}{X_1 + \dots + X_n}\right) = \mathbb{E}(1) = 1$ What's nice about this method is that it preserves the unity of the two parts of the question. The reason symmetry is broken, requiring adjustment when $m>n$, is that the terms on the right-hand side after applying linearity of expectation will be of two types, depending on whether the $X_i$ in the numerator lies in the sum in the denominator. (As before, I can switch the labels of $X_i$ and $X_j$ if both appear in the denominator as this just reorders the sum $S_n$, or if neither does as this clearly leaves the sum unchanged, but if one does and one doesn't then one of the terms in the denominator changes and it no longer sums to $S_n$.) For $i \le n$ we have $\mathbb{E}\left(\frac{X_i}{X_1 + \dots + X_n}\right)=k$ and for $i>n$ we have $\mathbb{E}\left(\frac{X_i}{X_1 + \dots + X_n}\right)=r$, say. Since we have $n$ of the former terms, and $m-n$ of the latter, $\mathbb{E}(S_m/S_n) = nk + (m-n)r = 1 + (m-n)r$ Then finding $r$ is straightforward using independence of $S_n^{-1}$ and $X_i$ for $i>n$: $r=\mathbb{E}(X_i S_n^{-1})=\mathbb{E}(X_i)\,\mathbb{E}(S_n^{-1})=\mu\,\mathbb{E}(S_n^{-1})$ So the same "trick" works for both parts, it just involves dealing with two cases if $m>n$. I suspect this is why the two parts of the question were given in this order.
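The $m \le n$ result is easy to check by simulation. A quick Python sketch (numpy assumed; exponential variables are an arbitrary choice of strictly positive iid variables, so $S_n$ never vanishes):

```python
import numpy as np

rng = np.random.default_rng(42)

# Check E(S_m / S_n) = m/n for m <= n by Monte Carlo
n, m, reps = 10, 4, 200_000
X = rng.exponential(scale=2.0, size=(reps, n))   # iid positive variables
S_m = X[:, :m].sum(axis=1)
S_n = X.sum(axis=1)
est = (S_m / S_n).mean()                          # should be close to m/n = 0.4
```

Note the estimate does not depend on the scale chosen, matching the symmetry argument: only the ratio $m/n$ matters.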
31,526
Expectation of Quotient of Sums of IID Random Variables (Cambridge University worksheet)
Thanks to whuber for the hint for the first part. Consider $nS_m/S_n$ for the case $m \le n$. We have $\mathbb{E}(nS_m/S_n) = \mathbb{E}\left(\frac{nX_1 + \dots + nX_m}{X_1 + \dots + X_n}\right) = \mathbb{E}\left(\frac{nX_1}{X_1 + \dots + X_n}\right) + \dots + \mathbb{E}\left(\frac{nX_m}{X_1 + \dots + X_n}\right)$. By the iid property the terms $\mathbb{E}(X_i/S_n)$ are all equal, so $\mathbb{E}\left(\frac{nX_i}{X_1 + \dots + X_n}\right) = \mathbb{E}\left(\frac{X_1 + \dots + X_n}{X_1 + \dots + X_n}\right) = 1$ for each $i$, and the sum of the $m$ terms is $m$. Therefore $\mathbb{E}(S_m/S_n) = m/n$ for $m \le n$.
31,527
Why use bayesglm?
In engineering, as well as supply chain risk management, "engineering knowledge" (e.g., an educated person's best guess) may be the best data you have. For example, the likelihood of a tsunami occurring and disrupting the supply chain, without additional data, can be estimated by an expert in the subject (there are better methods for constructing priors). As time passes, tsunamis occur and, as a result, we gain more data, and can update our priors (engineering knowledge) with posteriors (priors adjusted for new data). At some point, there will be so much data that the initial prior is irrelevant, and no matter who made the prediction, you will have equal predictions of likelihood. It is my belief that if you have that much data, a "traditional" Frequentist approach is (typically) preferable to the Bayesian approach (of course others will disagree, especially with choosing between statistical philosophies rather than sticking to one and selecting an appropriate method). Note that it is entirely possible (and occurs often) that the Frequentist approach yields similar/identical results to the Bayesian. That said, when the difference in methods is a line of code, why not implement multiple methods and compare the results yourself?
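The prior-updating story can be made concrete with a toy conjugate example in Python. All numbers here are invented for illustration: suppose the expert's knowledge about a yearly disruption probability is encoded as a Beta(2, 18) prior (mean 0.10), and each year contributes one Bernoulli observation; conjugacy makes the posterior another Beta.

```python
# Hypothetical prior: Beta(2, 18), i.e. an expert guess of about 10%
prior_a, prior_b = 2.0, 18.0

# One disruption observed in ten years (made-up data)
observations = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]

# Beta-Bernoulli conjugate update: add successes to a, failures to b
post_a = prior_a + sum(observations)
post_b = prior_b + len(observations) - sum(observations)
posterior_mean = post_a / (post_a + post_b)
```

As the observation count grows, the fixed prior counts (2 and 18 here) are swamped by the data, which is exactly the sense in which the initial prior eventually becomes irrelevant.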
31,528
Simulating p-values as a function of sample size
You have almost performed what is usually called a power analysis. I say almost, because what you usually measure in a power calculation is not the mean p-value, but rather the probability that, given the sample size and the hypothesised mean difference, you would get a p-value lower than say 0.05. You can make small changes to your calculations in order to get this probability, however. The following script is a modification of your script that calculates the power for sample sizes from 2 to 50:

ctrl.mean <- 1
ctrl.sd <- 0.1
treated.mean <- 1.1
treated.sd <- 0.22
n_range <- 2:50
power <- NULL
p.threshold <- 0.05
rpt <- 1000
for (n in n_range) {
  pvals <- replicate(rpt, {
    t.test(rnorm(n, ctrl.mean, ctrl.sd), y = rnorm(n, treated.mean, treated.sd))$p.value
  })
  power <- rbind(power, mean(pvals < p.threshold))
}
plot(n_range, power, type = "l", ylim = c(0, 1))

The way I would read this graph goes like: "Given my assumptions of the two groups, the probability that I would find a significant effect at n = 30 is roughly 50%". Often an 80% chance of finding an actual effect is considered a high level of power. By the way, power analysis is generally considered a good thing. :)
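For readers outside R, the same power estimate at a single sample size can be sketched in Python (numpy/scipy assumed; Welch's unequal-variance t-test matches R's t.test default):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulated_power(n, reps=2000, alpha=0.05):
    """Fraction of simulated experiments where the Welch t-test
    rejects at level alpha, under the same group assumptions."""
    ctrl = rng.normal(1.0, 0.10, size=(reps, n))
    treat = rng.normal(1.1, 0.22, size=(reps, n))
    p = stats.ttest_ind(ctrl, treat, axis=1, equal_var=False).pvalue
    return (p < alpha).mean()

power_30 = simulated_power(30)   # probability of a significant result at n = 30
```

Sweeping n over a range reproduces the power curve the R script plots.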
31,529
Relative efficiency of Wilcoxon signed rank in small samples
Klotz looked at small sample power of the signed rank test compared to the one sample $t$ in the normal case. [Klotz, J. (1963) "Small Sample Power and Efficiency for the One Sample Wilcoxon and Normal Scores Tests" The Annals of Mathematical Statistics, Vol. 34, No. 2, pp. 624-632] At $n=10$ and $\alpha$ near $0.1$ (exact $\alpha$s aren't achievable of course, unless you go the randomization route, which most people avoid in use, and I think with reason) the relative efficiency to the $t$ at the normal tends to be quite close to the ARE there (0.955), though how close depends (it varies with the mean shift and at smaller $\alpha$, the efficiency will be lower). At smaller sample sizes than 10 the efficiency is generally (a little) higher. At $n=5$ and $n=6$ (both with $\alpha$ close to 0.05), the efficiency was around 0.97 or higher. So, broadly speaking ... the ARE at the normal is an underestimate of the relative efficiency in the small sample case, as long as $\alpha$ isn't small. I believe that for a two-tailed test with $n=4$ your smallest achievable $\alpha$ is 0.125. At that exact significance level and sample size, I think the relative efficiency to the $t$ will be similarly high (perhaps still around the 0.97-0.98 or higher) in the area where the power is interesting. I should probably come back and talk about how to do a simulation, which is relatively straightforward. Edit: I've just done a simulation at the 0.125 level (because it's achievable at this sample size); it looks like - across a range of differences in mean, the typical efficiency is a bit lower, for $n=4$, more around 0.95-0.97 or so - similar to the asymptotic value. Update Here's a plot of the power (2 sided) for the t-test (computed by power.t.test) in normal samples, and simulated power for the Wilcoxon signed rank test - 40000 simulations per point, with the t-test as a control variate. 
The uncertainty in the position of the dots is less than a pixel: To make this answer more complete I should actually look at the behavior for the case for which the ARE actually is 0.864 (the beta(2,2)).
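In the spirit of the simulation described above, here is a rough Python sketch (scipy assumed; the choice of $n=10$, shift 1 and $\alpha=0.1$ is illustrative, not Klotz's exact setting) comparing the power of the one-sample t-test and the Wilcoxon signed rank test at the normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def powers(n=10, shift=1.0, alpha=0.1, reps=2000):
    """Monte Carlo power of the one-sample t-test and the Wilcoxon
    signed rank test for normal data with the given mean shift."""
    hits_t = hits_w = 0
    for _ in range(reps):
        x = rng.normal(shift, 1.0, size=n)
        if stats.ttest_1samp(x, 0.0).pvalue < alpha:
            hits_t += 1
        if stats.wilcoxon(x).pvalue < alpha:
            hits_w += 1
    return hits_t / reps, hits_w / reps

power_t, power_w = powers()
```

Because the signed rank test has a discrete null distribution, its effective level is the largest achievable value at or below the nominal $\alpha$, which is why achievable significance levels matter so much at small $n$.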
31,530
Difference between training, test and holdout set data mining model building
Well, Hastie, Tibshirani, and Friedman, in their seminal The Elements of Statistical Learning (page 222), say to break the data into three sections:

Training (50%)
Validation (25%)
Testing (25%)

The model is built on the training set, the prediction errors are calculated using the validation set, and the test set is used to assess the generalization error of the final model. This test set should be locked away until the model calibration process is finished, to prevent underestimation of the true model error. Hastie, T.; Tibshirani, R. & Friedman, J. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer Science+Business Media, Inc., 2009
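The 50/25/25 split can be sketched in a few lines of Python (numpy assumed; the function name and row count are mine). The key point is that the test indices are produced once and then left untouched until the end:

```python
import numpy as np

def train_val_test_split(n_rows, seed=0):
    """Shuffle row indices once, then slice 50% / 25% / 25%."""
    idx = np.random.default_rng(seed).permutation(n_rows)
    n_train = n_rows // 2
    n_val = n_rows // 4
    return (idx[:n_train],                 # training: fit the model
            idx[n_train:n_train + n_val],  # validation: compare models
            idx[n_train + n_val:])         # test: locked away until the end

train, val, test = train_val_test_split(1000)
```

Shuffling before slicing matters: if the rows are ordered (by time, class, etc.), a naive contiguous split would give the three sets different distributions.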
31,531
Proportion of explained variance in PCA and LDA
I will first provide a verbal explanation, and then a more technical one. My answer consists of four observations:

1. As @ttnphns explained in the comments above, in PCA each principal component has a certain variance, and together they add up to 100% of the total variance. For each principal component, the ratio of its variance to the total variance is called the "proportion of explained variance". This is very well known.

2. On the other hand, in LDA each "discriminant component" has a certain "discriminability" (I made these terms up!) associated with it, and together they add up to 100% of the "total discriminability". So for each "discriminant component" one can define a "proportion of discriminability explained". I guess that the "proportion of trace" you are referring to is exactly that (see below). This is less well known, but still commonplace.

3. Still, one can look at the variance of each discriminant component, and compute the "proportion of variance" of each of them. It turns out that they will add up to something that is less than 100%. I do not think that I have ever seen this discussed anywhere, which is the main reason I want to provide this lengthy answer.

4. One can also go one step further and compute the amount of variance that each LDA component "explains"; this is going to be more than just its own variance.

Let $\mathbf{T}$ be the total scatter matrix of the data (i.e. the covariance matrix, but without normalizing by the number of data points), $\mathbf{W}$ the within-class scatter matrix, and $\mathbf{B}$ the between-class scatter matrix. See here for definitions. Conveniently, $\mathbf{T}=\mathbf{W}+\mathbf{B}$.

PCA performs an eigen-decomposition of $\mathbf{T}$, takes its unit eigenvectors as principal axes, and the projections of the data on the eigenvectors as principal components. The variance of each principal component is given by the corresponding eigenvalue. All eigenvalues of $\mathbf{T}$ (which is symmetric and positive-definite) are positive and add up to $\mathrm{tr}(\mathbf{T})$, which is known as the total variance.

LDA performs an eigen-decomposition of $\mathbf{W}^{-1} \mathbf{B}$, takes its non-orthogonal (!) unit eigenvectors as discriminant axes, and the projections on the eigenvectors as discriminant components (a made-up term). For each discriminant component, we can compute the ratio of between-class variance $B$ to within-class variance $W$, i.e. the signal-to-noise ratio $B/W$. It turns out that it is given by the corresponding eigenvalue of $\mathbf{W}^{-1} \mathbf{B}$ (Lemma 1, see below). All eigenvalues of $\mathbf{W}^{-1} \mathbf{B}$ are positive (Lemma 2), so they sum to a positive number $\mathrm{tr}(\mathbf{W}^{-1} \mathbf{B})$, which one can call the total signal-to-noise ratio. Each discriminant component has a certain proportion of it, and that is, I believe, what "proportion of trace" refers to. See this answer by @ttnphns for a similar discussion.

Interestingly, the variances of all discriminant components will add up to something smaller than the total variance (even if the number $K$ of classes in the data set is larger than the number $N$ of dimensions; as there are only $K-1$ discriminant axes, they will not even form a basis in case $K-1<N$). This is a non-trivial observation (Lemma 4) that follows from the fact that all discriminant components have zero correlation (Lemma 3). This means that we can compute the usual proportion of variance for each discriminant component, but their sum will be less than 100%. However, I am reluctant to refer to these component variances as "explained variances" (let's call them "captured variances" instead).

For each LDA component, one can compute the amount of variance it can explain in the data by regressing the data onto this component; this value will in general be larger than the component's own "captured" variance. If there are enough components, their explained variances together must add up to 100%. See my answer here for how to compute such explained variance in the general case: Principal component analysis "backwards": how much variance of the data is explained by a given linear combination of the variables?

Here is an illustration using the Iris data set (only sepal measurements!): Thin solid lines show PCA axes (they are orthogonal), thick dashed lines show LDA axes (non-orthogonal).

Proportions of variance explained by the PCA axes: $79\%$ and $21\%$.
Proportions of signal-to-noise ratio of the LDA axes: $96\%$ and $4\%$.
Proportions of variance captured by the LDA axes: $48\%$ and $26\%$ (i.e. only $74\%$ together).
Proportions of variance explained by the LDA axes: $65\%$ and $35\%$.

\begin{array}{lcccc} & \text{LDA axis 1} & \text{LDA axis 2} & \text{PCA axis 1} & \text{PCA axis 2} \\ \text{Captured variance} & 48\% & 26\% & 79\% & 21\% \\ \text{Explained variance} & 65\% & 35\% & 79\% & 21\% \\ \text{Signal-to-noise ratio} & 96\% & 4\% & - & - \\ \end{array}

Lemma 1. Eigenvectors $\mathbf{v}$ of $\mathbf{W}^{-1} \mathbf{B}$ (or, equivalently, generalized eigenvectors of the generalized eigenvalue problem $\mathbf{B}\mathbf{v}=\lambda\mathbf{W}\mathbf{v}$) are stationary points of the Rayleigh quotient $$\frac{\mathbf{v}^\top\mathbf{B}\mathbf{v}}{\mathbf{v}^\top\mathbf{W}\mathbf{v}} = \frac{B}{W}$$ (differentiate the latter to see it), with the corresponding values of the Rayleigh quotient providing the eigenvalues $\lambda$, QED.

Lemma 2. Eigenvalues of $\mathbf{W}^{-1} \mathbf{B} = \mathbf{W}^{-1/2} \mathbf{W}^{-1/2} \mathbf{B}$ are the same as the eigenvalues of $\mathbf{W}^{-1/2} \mathbf{B} \mathbf{W}^{-1/2}$ (indeed, these two matrices are similar). The latter is symmetric positive-definite, so all its eigenvalues are positive.

Lemma 3. Note that the covariance/correlation between discriminant components is zero. Indeed, different eigenvectors $\mathbf{v}_1$ and $\mathbf{v}_2$ of the generalized eigenvalue problem $\mathbf{B}\mathbf{v}=\lambda\mathbf{W}\mathbf{v}$ are both $\mathbf{B}$- and $\mathbf{W}$-orthogonal (see e.g. here), and so are $\mathbf{T}$-orthogonal as well (because $\mathbf{T}=\mathbf{W}+\mathbf{B}$), which means that they have zero covariance: $\mathbf{v}_1^\top \mathbf{T} \mathbf{v}_2=0$.

Lemma 4. Discriminant axes form a non-orthogonal basis $\mathbf{V}$, in which the covariance matrix $\mathbf{V}^\top\mathbf{T}\mathbf{V}$ is diagonal. In this case one can prove that $$\mathrm{tr}(\mathbf{V}^\top\mathbf{T}\mathbf{V})<\mathrm{tr}(\mathbf{T}),$$ QED.
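The scatter-matrix quantities above are easy to check numerically. Here is a toy NumPy sketch with made-up 2-D, 3-class data (not the Iris example), verifying Lemma 3 (discriminant components are T-orthogonal) and Lemma 4 (captured variances sum to less than the total variance):

```python
import numpy as np

rng = np.random.default_rng(0)
# made-up data: 3 classes of 50 points each in 2 dimensions
means = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0]])
X = np.vstack([rng.normal(m, 1.0, size=(50, 2)) for m in means])
y = np.repeat(np.arange(3), 50)

Xc = X - X.mean(axis=0)
T = Xc.T @ Xc                                  # total scatter
W = sum((X[y == k] - X[y == k].mean(axis=0)).T @
        (X[y == k] - X[y == k].mean(axis=0)) for k in range(3))
B = T - W                                      # between-class scatter (T = W + B)

evals, evecs = np.linalg.eig(np.linalg.inv(W) @ B)
order = np.argsort(evals.real)[::-1]
V = evecs[:, order].real                       # unit, non-orthogonal discriminant axes

captured = np.diag(V.T @ T @ V)                # variance captured per discriminant axis
print(captured.sum() < np.trace(T))            # Lemma 4: strictly less than total
```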
31,532
Adding weights for highly skewed data sets in logistic regression
That would no longer be maximum likelihood. Such an extreme distribution of $Y$ only presents problems if you are using a classifier, i.e., if you are computing the proportion classified correctly, an improper scoring rule. The probability estimates from standard maximum likelihood are valid. If the total number of "positives" is smaller than 15 times the number of candidate variables, penalized maximum likelihood estimation may be in order.
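To see why unweighted maximum likelihood already yields valid probability estimates with a rare outcome, here is a small simulated sketch (plain Newton-Raphson logistic fit in NumPy; the data and numbers are made up and not tied to any particular software):

```python
import numpy as np

# simulate a rare outcome (~1% positives) and fit plain maximum-likelihood
# logistic regression by Newton-Raphson - no reweighting of the classes
rng = np.random.default_rng(1)
n = 10000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(-5.0 + 1.0 * x)))
ybin = rng.binomial(1, p_true)

Xd = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-(Xd @ beta)))
    grad = Xd.T @ (ybin - mu)                         # score vector
    H = Xd.T @ (Xd * (mu * (1 - mu))[:, None])        # observed information
    beta += np.linalg.solve(H, grad)

# the intercept's score equation forces calibration in the large:
# the mean fitted probability equals the observed event rate at the MLE
mu = 1 / (1 + np.exp(-(Xd @ beta)))
print(beta, mu.mean(), ybin.mean())
```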
31,533
Adding weights for highly skewed data sets in logistic regression
In cases like this, it is often better to use a flexible link that can capture this asymmetry, instead of the logistic link. For example, a skew-normal, GEV, or sinh-arcsinh link (see those papers and the references therein). There are many others but I cannot post more than 2 links.
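As a concrete illustration of link asymmetry, compare the symmetric logistic inverse link with the complementary log-log link, which is a limiting case of the GEV link; this sketch is only illustrative and is not taken from the answer's references:

```python
import numpy as np

def logit_inv(eta):
    """Symmetric logistic inverse link."""
    return 1.0 / (1.0 + np.exp(-eta))

def cloglog_inv(eta):
    """Complementary log-log inverse link: asymmetric around p = 0.5."""
    return 1.0 - np.exp(-np.exp(eta))

eta = np.linspace(-3, 3, 7)
# the logistic curve passes through 0.5 at eta = 0 and is symmetric;
# cloglog is already at ~0.63 there and approaches 1 much faster than 0
print(np.round(logit_inv(eta), 3))
print(np.round(cloglog_inv(eta), 3))
```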
31,534
Interaction term using centered variables hierarchical regression analysis? What variables should we center?
You should center the terms involved in the interaction to reduce collinearity, e.g.

set.seed(10204)
x1 <- rnorm(1000, 10, 1)
x2 <- rnorm(1000, 10, 1)
y <- x1 + rnorm(1000, 5, 5) + x2*rnorm(1000) + x1*x2*rnorm(1000)
x1cent <- x1 - mean(x1)
x2cent <- x2 - mean(x2)
x1x2cent <- x1cent*x2cent
m1 <- lm(y ~ x1 + x2 + x1*x2)
m2 <- lm(y ~ x1cent + x2cent + x1cent*x2cent)
summary(m1)
summary(m2)

Output:

> summary(m1)

Call:
lm(formula = y ~ x1 + x2 + x1 * x2)

Residuals:
    Min      1Q  Median      3Q     Max
-344.62  -66.29   -1.44   66.05  392.22

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  193.333    335.281   0.577    0.564
x1           -15.830     33.719  -0.469    0.639
x2           -14.065     33.567  -0.419    0.675
x1:x2          1.179      3.375   0.349    0.727

Residual standard error: 101.3 on 996 degrees of freedom
Multiple R-squared: 0.002363, Adjusted R-squared: -0.0006416
F-statistic: 0.7865 on 3 and 996 DF, p-value: 0.5015

> summary(m2)

Call:
lm(formula = y ~ x1cent + x2cent + x1cent * x2cent)

Residuals:
    Min      1Q  Median      3Q     Max
-344.62  -66.29   -1.44   66.05  392.22

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)     12.513      3.203   3.907 9.99e-05 ***
x1cent          -4.106      3.186  -1.289    0.198
x2cent          -2.291      3.198  -0.716    0.474
x1cent:x2cent    1.179      3.375   0.349    0.727
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 101.3 on 996 degrees of freedom
Multiple R-squared: 0.002363, Adjusted R-squared: -0.0006416
F-statistic: 0.7865 on 3 and 996 DF, p-value: 0.5015

library(perturb)
colldiag(m1)
colldiag(m2)

Whether you center other variables is up to you; centering (as opposed to standardizing) a variable that is not involved in an interaction will change the meaning of the intercept, but not other things, e.g.

x1 <- rnorm(1000, 10, 1)
x2 <- x1 - mean(x1)
y <- x1 + rnorm(1000, 5, 5)
m1 <- lm(y ~ x1)
m2 <- lm(y ~ x2)
summary(m1)
summary(m2)

Output:

> summary(m1)

Call:
lm(formula = y ~ x1)

Residuals:
     Min       1Q   Median       3Q      Max
-16.5288  -3.3348   0.0946   3.4293  14.0678

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   6.5412     1.6003   4.087 4.71e-05 ***
x1            0.8548     0.1591   5.373 9.63e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 5.082 on 998 degrees of freedom
Multiple R-squared: 0.02812, Adjusted R-squared: 0.02714
F-statistic: 28.87 on 1 and 998 DF, p-value: 9.629e-08

> summary(m2)

Call:
lm(formula = y ~ x2)

Residuals:
     Min       1Q   Median       3Q      Max
-16.5288  -3.3348   0.0946   3.4293  14.0678

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  15.0965     0.1607  93.931  < 2e-16 ***
x2            0.8548     0.1591   5.373 9.63e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 5.082 on 998 degrees of freedom
Multiple R-squared: 0.02812, Adjusted R-squared: 0.02714
F-statistic: 28.87 on 1 and 998 DF, p-value: 9.629e-08

But you should take logs of variables because it makes sense to do so or because the residuals from the model indicate that you should, not because they have a lot of variability. Regression does not make assumptions about the distribution of the variables; it makes assumptions about the distribution of the residuals.
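The collinearity reduction can also be seen directly in the correlation between a predictor and its product term; here is a quick NumPy sketch with made-up data mirroring the simulation above (not part of the original answer):

```python
import numpy as np

# correlation between a predictor and its product term, before and after centering
rng = np.random.default_rng(3)
x1 = rng.normal(10, 1, 1000)
x2 = rng.normal(10, 1, 1000)
x1cent, x2cent = x1 - x1.mean(), x2 - x2.mean()

r_raw = np.corrcoef(x1, x1 * x2)[0, 1]               # strong: raw product tracks x1
r_cent = np.corrcoef(x1cent, x1cent * x2cent)[0, 1]  # near zero after centering
print(round(r_raw, 2), round(r_cent, 2))
```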
31,535
Can machine learning methods be somehow helpful in solving differential equations?
Absolutely! Here is information on the "shooting method". (link) For much harder problems than the example given, the "root finding" takes more work. It is useful to stick some machine learning on top of the output in order to determine which initial conditions are appropriate for the solution of interest. EDIT: Neural networks (NNs) are used to (profoundly) improve computation time for combustion. The networks are trained on the thermo-chemical model and approximate the chemical reactions, so that instead of solving insanely complex coupled fluid-dynamics and chemistry differential equations, the numeric solver has a reduced set of solves, and the NN, with its very short run time, fills in the gaps "well enough". Here is a link. Here is another. EDIT 2: See physics-informed neural networks. https://www.youtube.com/watch?v=hKHl68Fdpq4
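The shooting method mentioned above can be sketched in a few lines. This is a hypothetical toy boundary-value problem (y'' = -y with y(0) = 0, y(pi/2) = 1), chosen for illustration and not taken from the linked page:

```python
import numpy as np

# boundary-value problem: y'' = -y, y(0) = 0, y(pi/2) = 1
# (exact solution y = sin(t), so the unknown initial slope y'(0) is 1)

def integrate(slope, L=np.pi / 2, n=1000):
    """RK4 on the first-order system (y, v)' = (v, -y); returns y(L)."""
    h = L / n
    y, v = 0.0, slope
    f = lambda y, v: (v, -y)
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

# "shoot": root-find on the miss distance y(L) - 1 by bisection over the slope
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if integrate(mid) < 1.0:
        lo = mid
    else:
        hi = mid
slope = (lo + hi) / 2
print(slope)   # -> approximately 1.0
```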
31,536
Can machine learning methods be somehow helpful in solving differential equations?
I believe so. Archambeau, Cornford, Opper, Shawe-Taylor, Girolami, Lawrence and Rattray are all excellent researchers in machine learning, so their work would probably be a good place to start.
31,537
Do Bayes factors require multiple comparison correction?
What you are missing is that it very rarely makes sense to use a prior under which all of the parameters are independent. That might make sense if the parameters were as varied and logically disconnected as, say, the set {some physical constant, some baseball player's batting average, some yeast gene's expression level}. Most analyses look at sets of parameters that are best modeled as exchangeable, like, say, the set of all current players' batting averages or the set of all yeast genes' expression levels. For estimation, this leads to approaches like this; for testing, it leads to approaches like this.
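To illustrate what exchangeability buys you in the estimation case, here is a toy empirical-Bayes shrinkage sketch on simulated batting averages. The numbers and the moment-based model are made up for illustration and are not the approach from the linked papers:

```python
import numpy as np

# 200 "players" with abilities drawn from a common distribution (exchangeable),
# each observed over 100 at-bats
rng = np.random.default_rng(7)
true_avg = rng.normal(0.26, 0.02, size=200)
n_ab = 100
obs = rng.binomial(n_ab, true_avg) / n_ab

# moment-based shrinkage toward the grand mean
s2_obs = obs.mean() * (1 - obs.mean()) / n_ab      # binomial sampling variance
s2_between = max(obs.var() - s2_obs, 1e-8)         # estimated spread of abilities
shrink = s2_between / (s2_between + s2_obs)        # shrinkage factor in (0, 1)
post = obs.mean() + shrink * (obs - obs.mean())

mse_raw = np.mean((obs - true_avg) ** 2)
mse_shrunk = np.mean((post - true_avg) ** 2)
print(mse_raw, mse_shrunk)   # shrinkage wins by borrowing strength across players
```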
31,538
Do Bayes factors require multiple comparison correction?
Not necessarily for Bayes factors. But Bayes factors require conversion to posterior probabilities for proper inference, and posterior probabilities most definitely can require a Bonferroni-style multiplicity adjustment of sorts. Here is the explanation: if the hypotheses are independent a priori, then the probability that all nulls are simultaneously true decreases to zero very rapidly as the number of hypotheses tested increases. But there may be scientific doubt as to whether any effect exists: for example, in a neuroimaging study there can be doubt as to whether the association studied has any neuronal basis whatsoever. (Nevertheless, the most extreme statistics are likely to suggest an association, simply because when you look at a large number of statistics, the extreme values will likely be quite atypical.) In cases such as this, where there is scientific doubt about whether there is any association whatsoever, you must set your prior probability on the global null hypothesis to some non-infinitesimal value in order to correctly model such doubt. In doing so, your prior probabilities on the component nulls are necessarily increased from commonly used levels (such as 0.5), depending on the number of hypotheses tested and on your assessment of their degree of prior dependence. Such a change in the prior probabilities on the component nulls causes a change (or adjustment) in the posterior probabilities on the component nulls. This adjustment is similar to the usual Bonferroni adjustment when the components are assumed independent a priori, and is less extreme than the Bonferroni adjustment when the components are assumed dependent. This issue is discussed in the following literature; the last reference discusses how to do the analysis when you assume prior dependence.
Jeffreys, H. Theory of Probability. Oxford University Press.
Westfall, P.H., Johnson, W.O. and Utts, J.M. (1997). A Bayesian Perspective on the Bonferroni Adjustment, Biometrika 84, 419-427.
Gönen, M., Westfall, P.H. and Johnson, W.O. (2003). Bayesian multiple testing for two-sample multivariate endpoints, Biometrics 59, 76-82.
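A one-liner makes the arithmetic of this prior adjustment concrete (assuming a priori independent component nulls, as in the Bonferroni-like case above):

```python
# with m independent component nulls each given prior probability p0,
# the global null has prior probability p0**m; holding the global-null
# prior at 0.5 forces the per-component priors toward 1 as m grows
for m in (1, 10, 100):
    p_component = 0.5 ** (1 / m)
    print(m, round(p_component, 4))   # 1 -> 0.5, 10 -> 0.933, 100 -> 0.9931
```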
Do Bayes factors require multiple comparison correction?
Not necessarily for Bayes factors. But Bayes factors require conversion to posterior probabilities for proper inference, and posterior probabilities most definitely can require a Bonferroni-style mult
Do Bayes factors require multiple comparison correction? Not necessarily for Bayes factors. But Bayes factors require conversion to posterior probabilities for proper inference, and posterior probabilities most definitely can require a Bonferroni-style multiplicity adjustment of sorts. Here is the explanation: If the hypotheses are independent a priori, then the probability that all nulls are simultaneously true decreases to zero very rapidly as the number of hypotheses tested increases. But there may be scientific doubt as to whether any effect exists: For example, in a neuroimaging study there can be doubt as to whether the association studied has any neuronal basis whatsoever. (Nevertheless, the most extreme statistics are likely to suggest an association, simply because when you look at a large number of statistics, the extreme values will likely be quite atypical.) In cases such as this where there is scientific doubt about whether there is any association whatsoever, you must set your prior probability on the global null hypothesis to some non-infinitesimal value in order to correctly model such doubt. In doing so, your prior probabilities on the component nulls are necessarily increased from commonly-used levels (such as 0.5), depending on the number of hypotheses tested, and depending on your assessment of their degree of prior dependence. Such a change in the prior probabilities on the component nulls causes a change (or adjustment) in the posterior probabilities on the component nulls. This adjustment is similar to the usual Bonferroni adjustment when the components are assumed independent a priori, and is less extreme than the Bonferroni adjustment when the components are assumed dependent. This issue is discussed in the following literature. The last reference discusses how to do the analysis when you assume prior dependence.
Jeffreys, H. Theory of Probability. Oxford University Press.
Westfall, P.H., Johnson, W.O. and Utts, J.M. (1997). A Bayesian Perspective on the Bonferroni Adjustment, Biometrika 84, 419–427.
Gönen, M., Westfall, P.H. and Johnson, W.O. (2003). Bayesian multiple testing for two-sample multivariate endpoints, Biometrics 59, 76–82.
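The dependence of the component-null priors on the number of tests can be sketched numerically. A minimal illustration (not from the answer; it assumes a priori independent hypotheses and a global-null prior fixed at 0.5, in the spirit of the Westfall et al. setup): each component-null prior becomes 0.5**(1/m), so the same Bayes factor yields a much larger posterior null probability when many hypotheses are tested.

```python
# Sketch (illustrative assumptions, see lead-in): converting a Bayes factor
# in favour of the null (bf01) to a posterior null probability, with the
# component-null prior chosen so that P(all m nulls true) stays at p_global.
def posterior_null_prob(bf01, m, p_global=0.5):
    p0 = p_global ** (1.0 / m)      # component-null prior probability
    prior_odds = p0 / (1.0 - p0)    # prior odds in favour of the null
    post_odds = prior_odds * bf01   # Bayes' rule on the odds scale
    return post_odds / (1.0 + post_odds)

single = posterior_null_prob(bf01=0.1, m=1)    # one hypothesis: ~0.09
many = posterior_null_prob(bf01=0.1, m=100)    # same BF, 100 hypotheses: >0.9
print(single, many)
```

The same Bayes factor that looks decisive for a single test leaves the null quite probable once the prior is adjusted for 100 tests, which is the Bonferroni-like effect the answer describes.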
31,539
Do Bayes factors require multiple comparison correction?
In the context of frequentist inference, the posterior odds, or posterior probability of the null (a.k.a. local fdr), is nothing more than a test statistic. It can definitely lead you to many erroneous inferences due to chance alone. If you want some frequentist guarantee on the amount of errors you might be making, the posterior odds should be plugged into some multiplicity control scheme.
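As a sketch of what "plugged into some multiplicity control scheme" could look like (illustrative, not from the answer): one common scheme rejects the hypotheses with the smallest local fdr values while the running mean of the rejected values, which estimates the false discovery rate among the rejections, stays below a target.

```python
# Sketch (illustrative scheme, see lead-in): threshold posterior null
# probabilities (local fdr) so the average over the rejected set -- an
# estimate of the FDR among rejections -- stays below `target`.
import numpy as np

def reject_by_local_fdr(lfdr, target=0.05):
    order = np.argsort(lfdr)
    # cumulative mean of the sorted values is non-decreasing, so a simple
    # count of entries below the target gives the number of rejections
    running_mean = np.cumsum(np.sort(lfdr)) / np.arange(1, len(lfdr) + 1)
    k = int(np.sum(running_mean <= target))
    rejected = np.zeros(len(lfdr), dtype=bool)
    rejected[order[:k]] = True
    return rejected

lfdr = np.array([0.01, 0.02, 0.30, 0.04, 0.90])
print(reject_by_local_fdr(lfdr, target=0.05))  # rejects the 0.01, 0.02, 0.04 cases
```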
31,540
Should PCA be performed before I do classification?
"PCA chooses the directions in which the variables have the most spread, not the dimensions that have the most relative distances between clustered subclasses." LDA projects the data so that the ratio of between-class variance to within-class variance is maximized. This is accomplished by first projecting in a way that makes the covariance matrix spherical. As this step involves inversion of the covariance matrix, it is numerically unstable if too few observations are available. So basically the projection you are looking for is made by LDA. However, PCA can help reduce the number of input variables for the LDA, so the matrix inversion is stabilized. There is an alternative to using PCA for this first projection: PLS. PLS can be seen as a regression analogue of PCA that takes the class membership into account. Barker, M. and Rayens, W.: Partial least squares for discrimination, Journal of Chemometrics, 2003, 17, 166-173 therefore suggest performing LDA in PLS-scores space. In practice, you'll find that PLS-LDA needs fewer latent variables than PCA-LDA. However, both methods need the number of latent variables specified (otherwise they do not reduce the number of input variables for the LDA). If you can determine this from your knowledge about the problem, go ahead. However, if you determine the number of latent variables from the data (e.g. % variance explained, quality of the PLS-LDA model, ...), do not forget to reserve additional data for testing this model (e.g. outer cross-validation).
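A sketch of the PCA-then-LDA idea in scikit-learn (the iris data and the choice of two components are illustrative assumptions; PLS-LDA would substitute a PLS step). Note the cross-validation around the whole pipeline, as the answer advises:

```python
# Sketch (illustrative dataset and component count): reduce dimensionality
# with PCA before LDA, stabilizing the covariance inversion, and evaluate
# the whole pipeline with cross-validation.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)
pca_lda = make_pipeline(PCA(n_components=2), LinearDiscriminantAnalysis())
scores = cross_val_score(pca_lda, X, y, cv=5)  # CV wraps both PCA and LDA
print(scores.mean())
```

Wrapping PCA inside the pipeline matters: fitting PCA on all the data and cross-validating only the LDA would leak information into the folds.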
31,541
R: Calculating mean and standard error of mean for factors with lm() vs. direct calculation -edited
The difference in standard errors arises because in the regression you compute a combined (pooled) estimate of the variance, while in the other calculation you compute separate estimates of the variance for each group.
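A small numeric sketch of the difference (synthetic numbers, not mtcars): the regression's standard errors all use one pooled residual variance, while the direct per-group calculation uses each group's own variance.

```python
# Sketch (synthetic data): pooled vs separate standard errors of group means.
import numpy as np

groups = [np.array([20.0, 22.0, 25.0, 27.0]),   # hypothetical group A
          np.array([10.0, 11.0, 12.0, 13.0])]   # hypothetical group B

# Pooled estimate: df-weighted average of within-group variances (the
# regression's residual mean square), then se = sqrt(s2_pooled / n_g)
df = sum(len(g) - 1 for g in groups)
s2_pooled = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / df
se_pooled = [float(np.sqrt(s2_pooled / len(g))) for g in groups]

# Separate estimates: each group's own sd / sqrt(n)
se_separate = [float(g.std(ddof=1) / np.sqrt(len(g))) for g in groups]
print(se_pooled, se_separate)
```

With equal group sizes the pooled standard errors are identical across groups and sit between the two separate ones, exactly the pattern seen when comparing lm() output with per-group summaries.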
31,542
R: Calculating mean and standard error of mean for factors with lm() vs. direct calculation -edited
The lm function does not estimate means and standard errors of the factor levels but of the contrasts associated with the factor levels. If no contrast is specified manually, treatment contrasts are used in R; this is the default for categorical data. The factor mtcars$cyl has three levels (4, 6, and 8). By default, the first level, 4, is used as reference category. The intercept of the linear model corresponds to the mean of the dependent variable in the reference category. But the other effects result from a comparison of one factor level with the reference category. Hence, the estimate and standard error for cyl6 are related to the difference between cyl == 6 and cyl == 4. The effect cyl8 is related to the difference between cyl == 8 and cyl == 4. If you want the lm function to calculate the means of the factor levels, you have to exclude the intercept term (0 + ...):

summary(lm(mpg ~ 0 + as.factor(cyl), mtcars))

Call:
lm(formula = mpg ~ 0 + as.factor(cyl), data = mtcars)

Residuals:
    Min      1Q  Median      3Q     Max 
-5.2636 -1.8357  0.0286  1.3893  7.2364 

Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
as.factor(cyl)4  26.6636     0.9718   27.44  < 2e-16 ***
as.factor(cyl)6  19.7429     1.2182   16.21 4.49e-16 ***
as.factor(cyl)8  15.1000     0.8614   17.53  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.223 on 29 degrees of freedom
Multiple R-squared: 0.9785, Adjusted R-squared: 0.9763 
F-statistic: 440.9 on 3 and 29 DF, p-value: < 2.2e-16

As you can see, these estimates are identical to the means of the factor levels. But note that the standard errors of the estimates are not identical with the standard errors of the data. By the way: Data can be aggregated easily with the aggregate function:

aggregate(mpg ~ cyl, mtcars, function(x) c(M = mean(x), SE = sd(x)/sqrt(length(x))))
  cyl     mpg.M    mpg.SE
1   4 26.6636364 1.3597642
2   6 19.7428571 0.5493967
3   8 15.1000000 0.6842016
31,543
R: Calculating mean and standard error of mean for factors with lm() vs. direct calculation -edited
In addition to what Sven Hohenstein said, the mtcars data is not balanced. Usually one uses aov (which is just a wrapper for lm) for categorical data, and its help page, ?aov, specifically says:

aov is designed for balanced designs, and the results can be hard to interpret without balance: beware that missing values in the response(s) will likely lose the balance.

I think you can also see this in the weird correlations of the model matrix:

mf <- model.matrix(mpg ~ cyl, data = mtcars)
cor(mf)
            (Intercept)       cyl6       cyl8
(Intercept)           1         NA         NA
cyl6                 NA  1.0000000 -0.4666667
cyl8                 NA -0.4666667  1.0000000
Warning message:
In cor(mf) : the standard deviation is zero

Hence, the standard errors obtained from aov (or lm) will likely be bogus (you can check this if you compare with lme or lmer standard errors).
31,544
R: Calculating mean and standard error of mean for factors with lm() vs. direct calculation -edited
Y = matrix(0, 5, 6)
Y[1,] = c(1250,  980, 1800, 2040, 1000, 1180)
Y[2,] = c(1700, 3080, 1700, 2820, 5760, 3480)
Y[3,] = c(2050, 3560, 2800, 1600, 4200, 2650)
Y[4,] = c(4690, 4370, 4800, 9070, 3770, 5250)
Y[5,] = c(7150, 3480, 5010, 4810, 8740, 7260)
n = ncol(Y)                 # observations per group
R = rowMeans(Y)             # group means
M = mean(R)                 # grand mean
s = mean(apply(Y, 1, var))  # average within-group variance
v = var(R) - s/n            # between-group variance estimate
Q = n/(n + s/v)             # shrinkage weight, i.e. n*v/(n*v + s)
t = Q*R[1] + (1 - Q)*M      # group-1 mean shrunk toward the grand mean
31,545
Understanding the whiskers of a boxplot
The value of X that corresponds to the 75th percentile minus the value of X that corresponds to the 25th percentile is the distance (the interquartile range). For example, for the SAT Math Test, 620 is the 75th and 520 is the 25th percentile. So if you score above 620, you've done better than 75% of the test takers. The whiskers would extend at most 1.5*(620-520) points beyond the quartiles.
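Worked out with the SAT numbers above, the whisker limits (fences) are:

```python
# The SAT example from the answer: hinges at 520 and 620 give an
# interquartile distance of 100, so each whisker can extend at most
# 1.5 * 100 = 150 points beyond its hinge.
q1, q3 = 520, 620
iqr = q3 - q1
upper_fence = q3 + 1.5 * iqr   # a whisker cannot pass 770
lower_fence = q1 - 1.5 * iqr   # a whisker cannot pass 370
print(iqr, lower_fence, upper_fence)
```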
31,546
Understanding the whiskers of a boxplot
A boxplot is intended to summarize a relatively small set of data in a way that clearly shows A central value. The spread of "typical" values. Individual values which depart so much from the central value, relative to the spread, that they are singled out for special attention and separately identified (by name, for instance). These are called "identified values." This is to be done in a robust way: that means the boxplot should not look appreciably different when one, or a relatively small portion, of the data values is arbitrarily changed. The solution adopted by its inventor John Tukey is to use the order statistics--the data as sorted from lowest to highest--in a systematic way. For simplicity (he did calculations mentally or with pencil and paper) Tukey focused on medians: the middle values of batches of numbers. (For batches with even counts, Tukey used the midpoint of the two middle values.) A median is resistant to changes in up to half the data on which it is based, making it excellent as a robust statistic. Thus: The central value is estimated with the median of all the data. The spread is estimated with the difference between the medians of the "upper half"--all data equal to or above the median--and the "lower half"--all data equal to or less than the median. These two medians are called the upper and lower "hinges" or "fourths". They tend nowadays to be replaced by things called the quartiles (which have no universal definition, alas). Invisible fences for screening outliers are erected 1.5 and 3 times the spread beyond the hinges (away from the central value). "The value at each end closest to, but still inside, the inner fence is 'adjacent'." Values beyond the first fence are called "outliers." Values beyond the second fence are "far out." (Those old enough to remember the hippie argot of the '60s will understand the joke.) 
Since the spread is a difference of data values, these fences have the same units of measure as the original data: this is the sense of "distance" in the question. Concerning the data values to identify, Tukey wrote We can at least identify the extreme values, and might do well to identify a few more. Any graphical method to display the median, hinges, and the identified values arguably deserves to be called a "boxplot" (originally, "box-and-whisker plot"). The fences usually are not depicted. Tukey's design consists of a rectangle describing the hinges with a "waist" at the median. Unobtrusive line-like "whiskers" extend outward from the hinges to the innermost identified values (both above and below the box). Usually these innermost identified values are the adjacent values defined above. Consequently, the default appearance of a boxplot is to extend the whiskers to the most extreme non-outlying data values and to identify (through text labels) the data comprising the ends of the whiskers and all outliers. For example, Tupungatito volcano is the high adjacent value for the volcano heights data depicted at the right of the figure: the whisker stops there. Tupungatito and all taller volcanos are separately identified. So that this will display the data faithfully, distance in the graphic is proportional to differences in data values. (Any departure from direct proportionality would introduce a "Lie Factor" in Tufte's (1983) terminology.) These two boxplots from Tukey's book EDA (p. 41) illustrate the components. It is noteworthy that he has identified non-outlying values at the high and low ends of the States dataset at the left and one low non-outlying value of the Volcano heights at the right. This exemplifies the interplay of rules and judgment that pervades the book. (You can tell these identified data are non-outlying, because you can estimate the locations of the fences. 
For instance, the hinges of the state heights are near 11,000 and 1,000, giving a spread around 10,000. Multiplying by 1.5 and 3 gives distances of 15,000 and 30,000. Thus, the invisible upper fence must be near 11,000 + 15,000 = 26,000 and the lower fence, at 1,000 - 15,000, would be below zero. The far fences would be near 11,000 + 30,000 = 41,000 and 1,000 - 30,000 = -29,000.)

References
Tufte, Edward. The Visual Display of Quantitative Information. Cheshire Press, 1983.
Tukey, John. Chapter 2, EDA. Addison-Wesley, 1977.
31,547
Replicating Shalizi's New York Times PCA example
It turns out Shalizi does some extra normalization, called inverse document-frequency weighting, with this code

scale.rows <- function(x,s) { return(x*s) }  # helper from Shalizi's code, reconstructed here: scales row i by s[i] (the original snippet omits it)
scale.cols <- function(x,s) { return(t(apply(x,1,function(x){x*s}))) }
div.by.euc.length <- function(x) { scale.rows(x, 1/sqrt(rowSums(x^2)+1e-16)) }
idf.weight <- function(x) { # IDF weighting
  doc.freq <- colSums(x>0)
  doc.freq[doc.freq == 0] <- 1
  w <- log(nrow(x)/doc.freq)
  return(scale.cols(x,w))
}
load("pca-examples.Rdata")
nyt.frame.raw$class.labels <- NULL
nyt.frame <- idf.weight(nyt.frame.raw)
nyt.frame <- div.by.euc.length(nyt.frame)

When I ran SVD on nyt.frame, instead of nyt.frame.raw (after load, the code above is not needed for that, it's just there for demo), the two classes were distinctly separable. The distribution of points still did not look like Shalizi's picture, but I think I can work with this. Note: Simply reading nytimes4.csv, the normalized file, is not enough; the data still needs to be centered around 0 (de-meaned). Note: After dropping a bogus column in nytimes4.csv and demeaning, the output is exactly the same as Shalizi's. The same thing in Python / Pandas:

import numpy as np
from pandas import read_csv

nyt = read_csv("nytimes.csv")
nyt = nyt.drop(['class.labels'], axis=1)
freq = nyt.astype(bool).sum(axis=0)  # document frequency of each word
freq = freq.replace(0, 1)
w = np.log(float(nyt.shape[0]) / freq)
nyt = nyt.apply(lambda x: x*w, axis=1)
nyt = nyt.apply(lambda x: x / np.sqrt(np.sum(np.square(x)) + 1e-16), axis=1)
31,548
Spacings between discrete uniform random variables
There are many papers addressing such questions. A good starting place is probably: Pyke R. (1965), Spacings Journal of the Royal Statistical Society. Series B (Methodological) Vol. 27, No. 3 (1965), pp. 395-449 (It has a lot on the continuous case. Many papers refer to this paper, including some that do more with the discrete case.) You should be able to read it online: http://www.jstor.org/discover/10.2307/2345793 (for me it says 'read online free' without me being logged into any institutional access) For continuous uniform distributions, the answers are easy. For discrete distributions, accurate answers are much harder, though if the discrete uniform takes many different values, the continuous calculation can sometimes be a reasonable approximation.
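For the continuous case that the answer calls easy, here is a quick check of a standard result (an assumption stated here, not derived in the answer): with n i.i.d. U(0,1) variables, each of the n+1 spacings is Beta(1, n), so the mean spacing is 1/(n+1) and P(spacing > t) = (1-t)^n.

```python
# Sketch (standard continuous-uniform spacing results, see lead-in):
# simulate sorted uniforms and compare the first spacing's moments and
# tail probability against the Beta(1, n) theory.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 20000
u = np.sort(rng.random((reps, n)), axis=1)
# spacings: gaps between consecutive order statistics plus the two end gaps
gaps = np.diff(np.hstack([np.zeros((reps, 1)), u, np.ones((reps, 1))]), axis=1)
first_gap = gaps[:, 0]
print(first_gap.mean(), 1 / (n + 1))             # both near 1/11
print((first_gap > 0.2).mean(), (1 - 0.2) ** n)  # survival P(gap > t) = (1-t)^n
```

For a discrete uniform on many values, this continuous calculation serves as the approximation the answer mentions.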
31,549
Preparing Bayesian conditional distributions for Gibbs sampling
Ok, what you need to do is compute the joint posterior up to a constant, i.e. $f(y_1,...,y_n|\mu,\tau)p(\mu,\tau)$. Then to compute the conditional posterior $\pi(\mu|\mathbf{y},\tau)$ you just treat the $\tau$ terms as fixed and known, so that some of them can be cancelled out. Then you do the same thing with $\mu$ to get $\pi(\tau|\mathbf{y},\mu)$. So what I mean is, the joint posterior is given by: \begin{equation} \pi(\mu,\tau|\mathbf{y}) \propto \tau^{\frac{n}{2}+\alpha_0-1} \exp\{-\frac{\tau}{2} \sum_{i=1}^{n}(y_i-\mu)^2-\frac{\tau_0}{2}(\mu-\mu_0)^2-\beta_0\tau\}. \end{equation} Now, to get the conditional posterior $\pi(\tau|\mathbf{y},\mu)$ you just remove the $\mu$ terms that merely multiply the expression. So we could re-write the joint posterior as: \begin{equation} \pi(\mu,\tau|\mathbf{y}) \propto \tau^{\frac{n}{2}+\alpha_0-1} \exp\{-\frac{\tau_0}{2}(\mu-\mu_0)^2\}\exp\{-\frac{\tau}{2} \sum_{i=1}^{n}(y_i-\mu)^2-\beta_0\tau\}. \end{equation} Now, since we are treating $\mu$ as fixed and known, for the conditional posterior $\pi(\tau|\mu,\mathbf{y})$ we can remove the first exponential term from the equation and it will still be true (owing to the $\propto$ sign rather than the equals sign). It's important that you get this: the conditional posterior says given that we know $\mu$ what is $\tau$, so $\mu$ is known (and hence fixed). So the kernel for the conditional posterior for $\tau$ becomes (after some re-arranging): \begin{equation} \pi(\tau|\mu,\mathbf{y}) \propto \tau^{\frac{n}{2}+\alpha_0-1}\exp\{-\tau(\frac{1}{2} \sum_{i=1}^{n}(y_i-\mu)^2+\beta_0)\}, \end{equation} which is the kernel for the Gamma distribution with the parameters you state. Now, for $\pi(\mu|\tau,\mathbf{y})$ you do the same thing, but it's a bit trickier because you have to complete the square. 
After cancelling the relevant terms not involving $\mu$ you get: \begin{equation} \pi(\mu|\tau,\mathbf{y}) \propto \exp\{ -\frac{\tau}{2} \sum_{i=1}^{n} (y_i-\mu)^2 - \frac{\tau_0}{2}(\mu-\mu_0)^2\}. \end{equation} If you multiply out the squared brackets and take the relevant summations, then remove all terms not involving $\mu$ (as they are all just multiplying the kernel by a constant) this becomes: \begin{equation} \pi(\mu|\tau,\mathbf{y}) \propto \exp\{ -\frac{1}{2}[\mu^2(n\tau+\tau_0) - 2\mu(\tau n\bar{y} - \tau_0\mu_0)]\}. \end{equation} If you complete the square here you get the kernel for a Gaussian with the mean and variance you state. Hope that helps.
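To make the two conditionals concrete, here is a minimal Gibbs sampler sketch in Python (standard library only; the function name, hyperparameter defaults and starting values are mine, chosen for illustration):

```python
import random
import statistics

def gibbs(y, mu0=0.0, tau0=1.0, alpha0=2.0, beta0=1.0, n_iter=5000, seed=1):
    """Alternately draw tau | mu, y and mu | tau, y from the two
    conditional posteriors derived above."""
    rng = random.Random(seed)
    n, ybar = len(y), statistics.fmean(y)
    mu = ybar                      # any reasonable starting value works
    draws = []
    for _ in range(n_iter):
        # tau | mu, y ~ Gamma(n/2 + alpha0, rate = sum_i (y_i - mu)^2 / 2 + beta0)
        rate = 0.5 * sum((yi - mu) ** 2 for yi in y) + beta0
        tau = rng.gammavariate(n / 2 + alpha0, 1.0 / rate)  # gammavariate takes a scale
        # mu | tau, y ~ Normal((tau*n*ybar + tau0*mu0)/(n*tau + tau0), 1/(n*tau + tau0))
        prec = n * tau + tau0
        mu = rng.gauss((tau * n * ybar + tau0 * mu0) / prec, prec ** -0.5)
        draws.append((mu, tau))
    return draws
```

After discarding a burn-in, the retained $(\mu, \tau)$ pairs approximate the joint posterior.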
31,550
Non-parametric for two-way ANOVA (3x3)
What question are you trying to answer?

If you want an overall test of anything going on (the null being that both main effects and the interaction are all 0), then you can replace all the data points with their ranks and just do a regular ANOVA to compare against an intercept/grand-mean-only model. This is basically how many of the non-parametric tests work: using the ranks transforms the data to a uniform distribution (under the null) and you get a good approximation by treating it as normal (the Central Limit Theorem applies for the uniform for sample sizes above about 5 or 6).

For other questions you could use permutation tests. If you want to test one of the main effects and the interaction together (but allow the other main effect to be non-zero), then you can permute the predictor being tested. If you want to test the interaction while allowing both main effects to be non-zero, then you can fit the reduced model of main effects only and compute the fitted values and residuals, then randomly permute the residuals, add the permuted residuals back to the fitted values, and fit the full ANOVA model including the interaction. Repeat this a bunch of times to get the null distribution for the size of the interaction effect to compare with the size of the interaction effect from the original data.

There may be existing SAS code for doing things like this; I have seen some basic tutorials on using SAS for bootstrap and permutation tests (the quickest way seems to be using the data step to create all the datasets in one big table, then using by processing to do the analyses). Personally I use R for this type of thing so can't be of more help in using SAS.

Edit Here is an example using R code:

> fit1 <- aov(breaks ~ wool*tension, data=warpbreaks)
> summary(fit1)
             Df Sum Sq Mean Sq F value   Pr(>F)
wool          1    451   450.7   3.765 0.058213 .
tension       2   2034  1017.1   8.498 0.000693 ***
wool:tension  2   1003   501.4   4.189 0.021044 *
Residuals    48   5745   119.7
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>
> fit2 <- aov(breaks ~ wool + tension, data=warpbreaks)
>
> tmpfun <- function() {
+   new.df <- data.frame(breaks = fitted(fit2) + sample(resid(fit2)),
+                        wool = warpbreaks$wool,
+                        tension = warpbreaks$tension)
+   fitnew <- aov(breaks ~ wool*tension, data=new.df)
+   fitnew2 <- update(fitnew, . ~ wool + tension)
+   c(coef(fitnew), F=anova(fitnew2, fitnew)[2,5])
+ }
>
> out <- replicate(10000, tmpfun())
>
> # based on only the interaction coefficients
> mean(out[5,] >= coef(fit1)[5])
[1] 0.002
> mean(out[6,] >= coef(fit1)[6])
[1] 0.0796
>
> # based on F statistic from full-reduced model
> mean(out[7,] >= anova(fit2, fit1)[2,5])
[1] 0.022
31,551
Non-parametric for two-way ANOVA (3x3)
+1 to @Greg Snow. In keeping with his non-parametric use-the-ranks strategy, you could use ordinal logistic regression. This will allow you to fit a model with multiple factors and interactions between them. The relevant SAS documentation is here. There is a tutorial on how to do this in SAS at UCLA's excellent stats help website here, and another tutorial from Indiana University here.
31,552
How can I explain the intuition behind ANOVA?
ANOVA is a statistical technique used to determine whether a particular classification of the data is useful in understanding the variation of an outcome. Think about dividing people into buckets or classes based on some criteria, like suburban and urban residence. The total variation in the dependent variable (the outcome you care about, like responsiveness to an advertising campaign) can be decomposed into the variation between classes and the variation within classes. When the within-class variation is small relative to the between-class variation, your classification scheme is in some sense meaningful or useful for understanding the world. Members of each cluster behave similarly to one another, but people from different clusters behave distinctively. This decomposition is used to create a formal F test of this hypothesis.
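As a small illustration of that between/within decomposition, here is a sketch in Python (the function name and the tiny data set are invented for the example):

```python
import statistics

def one_way_f(groups):
    """F statistic built from the between-class / within-class split
    of the total variation described above."""
    all_y = [y for g in groups for y in g]
    grand = statistics.fmean(all_y)
    # variation of the class means around the grand mean
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in groups)
    # variation of the observations around their own class mean
    ss_within = sum((y - statistics.fmean(g)) ** 2 for g in groups for y in g)
    df_between = len(groups) - 1
    df_within = len(all_y) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# e.g. suburban vs urban responsiveness scores (made-up numbers)
print(one_way_f([[1, 2, 3], [2, 3, 4]]))  # → 1.5
```

A large F means the within-class variation is small relative to the between-class variation, i.e. the classification is informative.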
31,553
How can I explain the intuition behind ANOVA?
I found David Lane's online book very useful. In a more fundamental way, there's an invited paper in Annals of Statistics by T.P. Speed called "What is Analysis of Variance?". It took me a few attempts, but in the end it was very informative. The essence of the paper is to show that ANOVA is simply a decomposition of variance into a summation of variances belonging to smaller groups. Another important takeaway is that you can use ANOVA for more general variances (covariances), which I thought was interesting.
31,554
How can I explain the intuition behind ANOVA?
You could explain that ANOVA is a decomposition of the data into components that correspond to different groups or variables or sources of variation. An example is $$ {\tiny \begin{pmatrix} 89 & 88 & 97 & 94 \\ 84 & 77 & 92 & 79 \\ 81 & 87 & 87 & 85 \\ 87 & 92 & 89 & 84 \\ 79 & 81 & 80 & 88 \end{pmatrix} = \begin{pmatrix} 86 & 86 & 86 & 86 \\ 86 & 86 & 86 & 86 \\ 86 & 86 & 86 & 86 \\ 86 & 86 & 86 & 86 \\ 86 & 86 & 86 & 86 \end{pmatrix} + \begin{pmatrix} \phantom{-}6 & \phantom{-}6 & \phantom{-}6 & \phantom{-}6 \\ -3 & -3 & -3 & -3 \\ -1 & -1 & -1 & -1 \\ \phantom{-}2 & \phantom{-}2 & \phantom{-}2 & \phantom{-}2 \\ -4 & -4 & -4 & -4 \end{pmatrix} + \begin{pmatrix} -2 & -1 & 3 & 0 \\ -2 & -1 & 3 & 0 \\ -2 & -1 & 3 & 0 \\ -2 & -1 & 3 & 0 \\ -2 & -1 & 3 & 0 \end{pmatrix} + \begin{pmatrix} -1 & -3 & \phantom{-}2 & \phantom{-}2 \\ \phantom{-}3 & -5 & \phantom{-}6 & -4 \\ -2 & \phantom{-}3 & -1 & \phantom{-}0 \\ \phantom{-}1 & \phantom{-}5 & -2 & -4 \\ -1 & \phantom{-}0 & -5 & \phantom{-}6 \end{pmatrix} } $$ which represents observations from a two-way ANOVA design (without replication), with rows and columns as the two groups. The algebraic model is $$ y_{ti} = \mu + \beta_i + \tau_t + \epsilon_{ti} $$ and the corresponding data decomposition is calculated as $$ y_{ti} = \bar{y} + \left\{\bar{y}_i-\bar{y}\right\} + \left\{ \bar{y}_t-\bar{y}\right\} + \left\{y_{ti}-\bar{y}_i - \bar{y}_t +\bar{y}\right\}. 
$$ For one-way ANOVA an example is $$ {\tiny \begin{pmatrix} 62 & 63 & 68 & 56 \\ 60 & 67 & 66 & 62 \\ 63 & 71 & 71 & 60 \\ 59 & 64 & 67 & 61 \\ & 65 & 68 & 63 \\ & 66 & 68 & 64 \\ & & & 63 \\ & & & 59 \end{pmatrix} = \begin{pmatrix} 64 & 64 & 64 & 64 \\ 64 & 64 & 64 & 64 \\ 64 & 64 & 64 & 64 \\ 64 & 64 & 64 & 64 \\ & 64 & 64 & 64 \\ & 64 & 64 & 64 \\ & & & 64 \\ & & & 64 \end{pmatrix} + \begin{pmatrix} -3 & 2 & 4 & -3 \\ -3 & 2 & 4 & -3 \\ -3 & 2 & 4 & -3 \\ -3 & 2 & 4 & -3 \\ & 2 & 4 & -3 \\ & 2 & 4 & -3 \\ & & & -3 \\ & & & -3 \end{pmatrix} + \begin{pmatrix} \phantom{-}1 & -3 & \phantom{-}0 & -5 \\ -1 & \phantom{-}1 & -2 & \phantom{-}1 \\ \phantom{-}2 & \phantom{-}5 & \phantom{-}3 & -1 \\ -2 & -2 & -1 & \phantom{-}0 \\ & -1 & \phantom{-}0 & \phantom{-}2 \\ & \phantom{-}0 & \phantom{-}0 & \phantom{-}3 \\ & & & \phantom{-}2 \\ & & & -2 \end{pmatrix} }%end tiny $$ and the algebra can be written in the same way. This is mostly a comment, since it is not a full explanation, but could be a helpful component of any explanation, and could be suited to the necessary level. Such tables are used a lot in this famous book.
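The two-way table at the top can be reproduced mechanically; here is a sketch in Python (the function name is mine, not from the source) that splits a complete two-way layout into grand mean, row effects, column effects and residuals:

```python
import statistics as st

def twoway_decompose(y):
    """Split y[t][i] into grand + row_eff[t] + col_eff[i] + resid[t][i],
    matching the decomposition formula in the text."""
    nr, nc = len(y), len(y[0])
    grand = st.fmean(v for row in y for v in row)
    row_eff = [st.fmean(row) - grand for row in y]
    col_eff = [st.fmean(y[t][i] for t in range(nr)) - grand for i in range(nc)]
    resid = [[y[t][i] - grand - row_eff[t] - col_eff[i] for i in range(nc)]
             for t in range(nr)]
    return grand, row_eff, col_eff, resid

# the 5x4 table from the example above
table = [[89, 88, 97, 94], [84, 77, 92, 79], [81, 87, 87, 85],
         [87, 92, 89, 84], [79, 81, 80, 88]]
grand, rows, cols, resid = twoway_decompose(table)
print(grand, rows[0], cols[0], resid[0][0])  # → 86.0 6.0 -2.0 -1.0
```

The returned pieces match the four matrices on the right-hand side of the first display.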
31,555
Are data transformations on non-normal data necessary for an exploratory factor analysis when using the principal axis factoring extraction method?
Factor analysis is essentially a (constrained) linear regression model. In this model, each analyzed variable is the dependent variable, the common factors are the IVs, and the implied unique factor serves as the error term. (The constant term is set to zero due to the centering or standardizing implied in the computation of covariances or correlations.)

So, exactly as in linear regression, there could exist a "strong" assumption of normality - the IVs (common factors) are multivariate normal and the errors (unique factor) are normal, which automatically makes the DV normal - and a "weak" assumption of normality - only the errors (unique factor) are normal, so the DV need not be normal. Both in regression and in FA we usually admit the "weak" assumption because it is more realistic. Among classic FA extraction methods only the maximum likelihood method, because it departs from the characteristics of the population, requires that the analyzed variables be multivariate normal. Methods like principal axes or minimal residuals do not require this "strong" assumption (albeit you can make it anyway). Please remember that even if your variables are normal separately, that doesn't necessarily guarantee that your data are multivariate normal.

Let us accept the "weak" assumption of normality. What, then, is the potential threat coming from strongly skewed data like yours? It is outliers. If the distribution of a variable is strongly asymmetric, the longer tail becomes extra influential in computing correlations or covariances, and simultaneously it provokes apprehension about whether it still measures the same psychological construct (the factor) as the shorter tail does. It would be prudent to compare whether correlation matrices built on the lower half and the upper half of the rating scale are similar or not. If they are similar enough, you may conclude that both tails measure the same thing and not transform your variables. Otherwise you should consider transforming, or some other action to neutralize the effect of the "outlier" long tail.

There are plenty of transformations. For example, raising to a power>1 or exponentiation is used for left-skewed data, and power<1 or the logarithm for right-skewed data. My own experience says that so-called optimal transformation via Categorical PCA performed prior to FA is almost always beneficial, for it usually leads to clearer, more interpretable factors in FA; under the assumption that the number of factors is known, it transforms your data nonlinearly so as to maximize the overall variance accounted for by that number of factors.
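As a quick illustration of the right-skew case in Python (standard library only; the lognormal sample is simulated for the example, not real rating data):

```python
import math
import random
import statistics

def skewness(xs):
    """Standardized third moment: clearly positive for a long right tail."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean(((x - m) / s) ** 3 for x in xs)

rng = random.Random(42)
raw = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(5000)]  # strongly right-skewed
logged = [math.log(x) for x in raw]                         # "power<1 or logarithm" case

# the long right tail dominates before the transform and vanishes after it
print(round(skewness(raw), 2), round(skewness(logged), 2))
```

The same check on skewness before and after a candidate transform is an easy way to see whether the long tail has been tamed.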
31,556
Are data transformations on non-normal data necessary for an exploratory factor analysis when using the principal axis factoring extraction method?
I just post what I learned from Yong and Pearce (2013). To perform a factor analysis, there has to be univariate and multivariate normality within the data (Child, 2006) Yong, A. G., & Pearce, S. (2013). A beginner’s guide to factor analysis: Focusing on exploratory factor analysis. Tutorials in quantitative methods for psychology, 9(2), 79-94. DOI:10.20982/tqmp.09.2.p079
31,557
Which model should I use for Cox proportional hazards with paired data?
This subject is covered by a number of papers, including:

Modelling clustered survival data from multicentre clinical trials
The shared frailty model and the power for heterogeneity tests in multicenter trials
The Frailty Model, Chapter 3
Proportional hazards models with frailties and random effects.

Here is a very brief (and non-exhaustive) summary of the differences between the two approaches.

Stratified approach
For each pair, there is an unspecified baseline hazard function. The partial likelihood idea is readily adapted by multiplying the partial likelihoods specific to each stratum.
Pros: Lack of structure.
Cons: It does not provide any information about heterogeneity between pairs; pairs in which both members share the same covariate information, or which provide only censored observations, do not contribute to the likelihood. This is because no between-pair comparisons are attempted.

Frailty approach
Within-pair association is accounted for by a random effect common to both members of the same pair. Hence, there is again a different baseline hazard for each pair, but they are not totally unspecified; there is some structure. Estimation is based on the marginal likelihood.
Pros: Parsimony: heterogeneity is described by a single parameter; summary measures about heterogeneity are available (Understanding heterogeneity...); it is possible to study the effect of variables common within the pairs.
Cons: software availability (in R, you can look at coxph() or parfm(); in SAS, you can look at proc phreg); research is still ongoing.

As a conclusion, the choice depends on your research. However, the last reference in the list gives some guidance:

For situations where group size is five or greater, it is difficult to justify use of the random effects model over that of the stratified model, this latter model being very much more readily implemented. The story changes for group sizes less than five and, for twin studies in particular, the efficiency gains are such that we would prefer to use a random effects model over a stratified model. The stratified model remains valid but can require from 20 per cent to 30 per cent more observations to achieve the same precision.
Which model should I use for Cox proportional hazards with paired data?
This subject is covered by a number of papers including: Modelling clustered survival data from multicentre clinical trials The shared frailty model and the power for heterogeneity tests in multicent
Which model should I use for Cox proportional hazards with paired data? This subject is covered by a number of papers including: Modelling clustered survival data from multicentre clinical trials; The shared frailty model and the power for heterogeneity tests in multicenter trials; The Frailty Model, Chapter 3; Proportional hazards models with frailties and random effects. Here is a very brief (and non-exhaustive) summary of the differences between the two approaches. Stratified approach: For each pair, there is an unspecified baseline hazard function. The partial likelihood idea is readily adapted by multiplying the partial likelihoods specific to each stratum. Pros: lack of structure. Cons: it does not provide any information about heterogeneity between pairs; pairs in which both members share the same covariate information, or which provide only censored observations, do not contribute to the likelihood, because no between-pair comparisons are attempted. Frailty approach: Within-pair association is accounted for by a random effect common to both members of the same pair. Hence, there is again a different baseline hazard for each pair, but they are not totally unspecified; there is some structure. Estimation is based on the marginal likelihood. Pros: parsimony (heterogeneity is described by a single parameter); summary measures of heterogeneity are available (Understanding heterogeneity...); it is possible to study the effect of variables common within the pairs. Cons: software availability (in R, you can look at coxph() or parfm(); in SAS, you can look at proc phreg); research is still ongoing. As a conclusion, the choice depends on your research. However, the last reference from the list gives some guidance: For situations where group size is five or greater, it is difficult to justify use of the random effects model over that of the stratified model, this latter model being very much more readily implemented. 
The story changes for group sizes less than five and, for twin studies in particular, the efficiency gains are such that we would prefer to use a random effects model over a stratified model. The stratified model remains valid but can require from 20 per cent to 30 per cent more observations to achieve the same precision.
31,558
Making boxplots of hourly data in R
Something like this? library(ggplot2) ggplot(Final, aes(x = as.factor(Tod), y = Temp)) + geom_boxplot() + facet_wrap(~ Loc)
31,559
Making boxplots of hourly data in R
library(robustbase) adjbox(Final$Temp[Final$Loc=="UK"]~Final$Tod[Final$Loc=="UK"]) Boxplots are a visualization tool, so I'll give you some visual advice. What you have is essentially functional data, so you want (for visualization reasons) to use a box-plot tool that acknowledges that. Try the functional boxplot function in the fda package.
31,560
Can I fit a mixed model with subjects that only have 1 observation?
You should keep them in the model. They contribute nothing to estimating the location random effect variance, but you can use them to contribute to estimating the mean structure. More specifically, let $\sigma^{2}_{1}$ be the location random effect variance and $\sigma^{2}_{2}$ the unexplained variance. The likelihood function for a location with only a single observation has no curvature in $\sigma^{2}_{1}$ as long as $\sigma^{2}_{1}+\sigma^{2}_{2}$ remains constant (i.e. the two variances are not identified from each other, but the total variance is identified). But, there is curvature in $\beta$, the regression coefficients. Hopefully this isn't too common in your data set or you will have a very imprecise estimate of the random effect variance.
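As a hedged illustration of the identifiability point (my own toy example, not from the answer): for a location with a single observation, the marginal Gaussian log-likelihood depends on $\sigma^{2}_{1}$ and $\sigma^{2}_{2}$ only through their sum, so different splits of the same total variance give exactly the same likelihood value.

```python
# Sketch: the marginal likelihood of one observation under a random-intercept
# model is N(mu, var1 + var2); it has no curvature in the split of the two
# variances, only in their total (and in the mean parameters).
import math

def loglik_single_obs(y, mu, var1, var2):
    """Log-likelihood of one observation, random effect integrated out:
    y ~ N(mu, var1 + var2)."""
    total = var1 + var2
    return -0.5 * (math.log(2 * math.pi * total) + (y - mu) ** 2 / total)

y, mu = 1.3, 0.5
a = loglik_single_obs(y, mu, var1=0.2, var2=0.8)  # total variance 1.0
b = loglik_single_obs(y, mu, var1=0.7, var2=0.3)  # same total variance
print(abs(a - b) < 1e-12)  # True: only the total variance is identified
```

Observations from locations with two or more measurements are what separate the two variance components.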
31,561
Difference between dynamic programming and temporal difference learning in reinforcement learning
DP solves for the optimal policy or value function by recursion. It requires knowledge of the Markov decision process (MDP) or a model of the world so that the recursions can be carried out. It is typically lumped under "planning" rather than "learning", in that you already know the MDP and just need to figure out what to do (optimally). TD is model-free: it doesn't require knowledge of a model of the world. It is iterative and simulation-based, and learns by bootstrapping, i.e. the value of a state or action is estimated using the values of other states or actions. For more info, see: http://webdocs.cs.ualberta.ca/~sutton/book/the-book.html http://www.cs.ucl.ac.uk/staff/D.Silver/web/Teaching.html
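To make the contrast concrete, here is a hedged sketch on a tiny two-state deterministic chain (my own example, not from the answer): DP (value iteration) sweeps the known model, while TD(0) only sees sampled transitions and bootstraps each value estimate from the next state's estimate.

```python
# Toy chain: state 0 -> state 1 (reward 0), state 1 -> terminal (reward 1).
# True values with gamma = 0.9 are V(1) = 1.0 and V(0) = 0.9.
gamma = 0.9
transitions = {0: (0.0, 1), 1: (1.0, None)}  # state -> (reward, next state)

# Dynamic programming: repeatedly apply the Bellman equation to the known model.
V_dp = {0: 0.0, 1: 0.0}
for _ in range(100):
    for s, (r, s2) in transitions.items():
        V_dp[s] = r + (gamma * V_dp[s2] if s2 is not None else 0.0)

# TD(0): learn from sampled episodes; the model is only used as a simulator.
V_td = {0: 0.0, 1: 0.0}
alpha = 0.1
for _ in range(2000):
    s = 0
    while s is not None:
        r, s2 = transitions[s]  # in practice this is an observed transition
        target = r + (gamma * V_td[s2] if s2 is not None else 0.0)
        V_td[s] += alpha * (target - V_td[s])  # bootstrapped update
        s = s2

print(V_dp)  # {0: 0.9, 1: 1.0}
print(V_td)  # close to the DP values, learned without using the model directly
```

Both reach the same values here; the difference is that TD would still work if `transitions` were hidden behind a simulator you could only sample from.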
31,562
Data mining papers/examples
Check out the Kaggle.com blog, where winners discuss their approaches to solving a data mining competition. You can then go back to the kaggle.com website to get the description and data and try it out yourself.
31,563
Data mining papers/examples
Here's a good place to start: Top 10 Algorithms in Data Mining Not much in terms of data preparation in there, but plenty on applications. And lots of good links to relevant papers to read.
31,564
Data mining papers/examples
I recommend articles from the free Journal of Statistical Software. You can find there different applications of data mining/machine learning together with analyses of real data examples. Most articles are about R packages, so you can also perform their analyses in R as you read. The articles include R code, and the packages include the data. All data are analyzed in depth there, so it is a very worthwhile source for me.
31,565
Data mining papers/examples
The caret R package has a set of four vignettes that walk through applying various data preparation tasks, supervised learning algorithms, feature selection, and data visualizations starting from some raw example datasets. Even though the focus is on how to do these things using functionality provided by caret itself, it's still generally applicable and pretty good reading for real-world projects. Here are direct links to the four PDF vignettes: caret manual - data and functions caret manual - variable selection caret manual - model training caret manual - variable importance
31,566
Data mining papers/examples
Here are some that I've found helpful: KDD Cup 2008 and the Workshop on Mining Medical Data
31,567
How to calculate the R-squared value and assess the model fit in multidimensional scaling?
You can look at the "GOF" component of the result ("goodness of fit"), if you specify the number of dimensions. It returns two numbers, that should be equal unless the distance matrix is not positive. You can also directly look at the eigenvalues: when they become small, you have enough dimensions. In the following example, two dimensions seem sufficient. > cmdscale(eurodist, 1, eig=TRUE)$GOF [1] 0.4690928 0.5401388 > cmdscale(eurodist, 2, eig=TRUE)$GOF [1] 0.7537543 0.8679134 > cmdscale(eurodist, 3, eig=TRUE)$GOF [1] 0.7904600 0.9101784 > r <- cmdscale(eurodist, eig=TRUE) > plot(cumsum(r$eig) / sum(r$eig), type="h", lwd=5, las=1, xlab="Number of dimensions", ylab=expression(R^2)) > plot(r$eig, type="h", lwd=5, las=1, xlab="Number of dimensions", ylab="Eigenvalues")
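If you want to see what is behind those eigenvalues, here is a hedged sketch of classical (Torgerson) MDS done by hand in numpy, mirroring what cmdscale computes: double-centre the squared distances and inspect the eigenvalue spectrum. The toy data are my own, not the eurodist example.

```python
# Classical MDS: eigenvalues of B = -0.5 * J D^2 J, J the centring matrix.
# When the eigenvalues drop to ~0, you have enough dimensions.
import numpy as np

def classical_mds_eigenvalues(D):
    """Eigenvalues (descending) of the double-centred squared-distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    return np.sort(np.linalg.eigvalsh(B))[::-1]

# toy distance matrix: 4 points on a line at 0, 1, 2, 3 -> one dimension suffices
pts = np.array([0.0, 1.0, 2.0, 3.0])
D = np.abs(pts[:, None] - pts[None, :])
eig = classical_mds_eigenvalues(D)
print(np.round(eig, 6))  # the first eigenvalue carries essentially everything
```

The cumulative ratio `np.cumsum(eig) / eig.sum()` is the same quantity plotted against "Number of dimensions" in the R code above.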
31,568
How to calculate the R-squared value and assess the model fit in multidimensional scaling?
I personally prefer to measure goodness of fit using R squared, which cmdscale does not generate. I wrote some additional code to perform this; it is described and provided in the following entry on my "Cognition and Reality" blog: Computing The Fit Of An MDS Solution Using R.
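One common way to compute such an R squared (a sketch of the general idea, my own construction rather than the blog's code) is the squared correlation between the original dissimilarities and the inter-point distances of the fitted configuration:

```python
# R^2-style fit for an MDS solution: correlate original distances with the
# distances reproduced by the fitted configuration X, pair by pair.
import numpy as np

def mds_r_squared(D_orig, X):
    """Squared correlation between original and configuration distances."""
    D_fit = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices_from(D_orig, k=1)  # use each pair of points once
    r = np.corrcoef(D_orig[iu], D_fit[iu])[0, 1]
    return r ** 2

# sanity check: a 2-D configuration recovers its own distances perfectly
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(round(mds_r_squared(D, X), 6))  # 1.0
```

For a real low-dimensional solution of high-dimensional data, the value drops below 1 as information is lost.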
31,569
Factoring of conditional probability
I like to think of the chain rule as: $$P(A\cap B) = P(B)P(A\mid B) = P(A)P(B \mid A)$$ instead of $P(A\cap B) = P(B)P(A\mid B)$ which is the way G. Jay Kerns expressed it in his comment on the question. There is no mathematical difference, of course, but we can think of $P(A\cap B) = P(A)P(B \mid A)$ as encapsulating the following heuristic way of thinking. We want to find $P(A \cap B)$, the probability that both $A$ and $B$ occurred. Clearly, $A$ must have occurred (which has probability $P(A)$), and if we assume that $A$ has occurred, then in order for $A \cap B$ to occur, $B$ must occur too, which has probability $P(B\mid A)$, (note: not $P(B)$ since we have already assumed that $A$ has occurred), and so $P(A \cap B) = P(A)P(B \mid A)$. Generalizing this argument (which fortunately can be backed up by straight mathematical calculations from the definition of conditional probability), $$P(A\cap B \cap C) = P(A)P(B\mid A)P(C\mid (A \cap B))$$ or, proceeding in the opposite direction, $$P(A\cap B \cap C) = P(C)P(B\mid C)P(A\mid (B \cap C)).$$ Dividing this last equation on both sides by $P(C)$ gives $$\begin{align*} \frac{P(A\cap B \cap C)}{P(C)} &= \frac{P(C)P(B\mid C)P(A\mid B \cap C)}{P(C)}\\ P(A\cap B \mid C) &= P(B\mid C)P(A\mid (B \cap C))\\ &= P(A\mid (B \cap C))P(B\mid C) \end{align*}$$ So the OP should accept that the equality follows from the chain rule (plus an extra step).
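The identity can also be checked numerically on any small discrete space (an illustration I added, not part of the answer): compute both sides of $P(A\cap B \mid C) = P(A\mid B\cap C)\,P(B\mid C)$ by brute force.

```python
# Brute-force verification of P(A ∩ B | C) = P(A | B ∩ C) P(B | C)
# on the sample space of three fair coin flips.
import itertools

omega = list(itertools.product([0, 1], repeat=3))  # all 8 outcomes
p = 1.0 / len(omega)                               # equally likely

A = {w for w in omega if w[0] == 1}       # first flip heads
B = {w for w in omega if sum(w) >= 2}     # at least two heads
C = {w for w in omega if w[2] == 1}       # third flip heads

def P(event):
    return len(event) * p

lhs = P(A & B & C) / P(C)                               # P(A ∩ B | C)
rhs = (P(A & B & C) / P(B & C)) * (P(B & C) / P(C))     # P(A | B ∩ C) P(B | C)
print(abs(lhs - rhs) < 1e-12)  # True
```

Algebraically the two sides are the same ratio, which is exactly the "extra step" of dividing the three-event chain rule by $P(C)$.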
31,570
Are there any pre-generated numeric series with known statistical properties?
The US National Institute of Standards and Technology has a set of Statistical Reference Datasets "that provides reference datasets with certified values for a variety of statistical methods", including a set labelled 'univariate summary statistics' with certified values for the mean, standard deviation and lag-1 autocorrelation. It doesn't appear to include values of the median, but accurate computation of the median shouldn't be a problem. Efficient computation of the sample median is a little harder.
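As a hedged sketch of the three statistics NIST certifies (sample mean, sample standard deviation, lag-1 autocorrelation), here are the usual textbook formulas in plain Python; the toy series and its values below are my own, not NIST's certified numbers, which you would compare against the real datasets.

```python
# Mean, sample standard deviation (n-1 denominator), and lag-1 autocorrelation
# computed with the standard formulas, as one would validate against NIST StRD.
import math

def summary_stats(x):
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))
    # lag-1 autocorrelation with the sum-of-squares denominator
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return mean, sd, num / den

mean, sd, r1 = summary_stats([1.0, 2.0, 3.0, 4.0, 5.0])
print(mean, sd, r1)  # 3.0, sqrt(2.5), 0.4 for this toy series
```

Running the same code on a certified dataset and matching the published values to full precision is exactly the validation exercise the question asks about.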
31,571
Are there any pre-generated numeric series with known statistical properties?
You could take your favorite statistics toolbox (mine is R) and use it to generate long time series of data. In R, for example, it is possible to generate data from all kinds of distributions. In this way you can validate that the program you are testing is in line with your other stats program. That only compares the performance to e.g. R, but I'd trust R in this regard :).
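The same workflow in Python (a hedged numpy analogue of the R approach described above): draw a long series from a distribution whose moments you know, then check the sample statistics against the theoretical values.

```python
# Generate a long series from a known distribution and validate the summary
# statistics against theory: Exp(scale=2) has mean 2 and standard deviation 2.
import numpy as np

rng = np.random.default_rng(0)          # fixed seed for reproducibility
x = rng.exponential(scale=2.0, size=200_000)

print(round(x.mean(), 2), round(x.std(ddof=1), 2))  # both close to 2.0
```

Any implementation under test should reproduce these statistics to within sampling error.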
31,572
Visualising successive proportions
One potential idea is the use of Sankey diagrams to document the flow of choices between the categories. Two examples to describe what I am talking about are: Fineo by DensityDesign, and Parallel Sets by Robert Kosara. Also, one of the posters on this site has demonstrated how to make Parallel Sets in R. With an update over some of your concerns expressed in the comments: it appears to me that the Parallel Sets program does what you want out of the box. Below is an output of the program, in which I created 4 random variables with 4 categories. Whatever group you initialize to the top of the display will be sequentially divided among the subsequent categories, creating the splitting that you desire. Also not apparent in this picture, the package has some interactive functionality that allows for easier exploratory data analysis, such as when you hover over one of the categories all of its descendants are highlighted. I have uploaded the same dataset to Fineo, which you can explore here. Besides the initial 4 category variables (named dec1 to dec4) I have also included the concatenated categories that allow you to examine the split categories. The naming convention for the variables with the exp suffix is that it is the dec variable expanded by concatenating the previously chosen categories. So dec3_exp12 would be labeled as 121 if dec1 = 1 and dec2 = 2 and dec3 = 1. You could make the same split type structure in Fineo that is available in ParSets, but it fails to render the categories with $4^3$ or more nodes in this example. After playing around with Fineo a bit more, it is a neat application, but it is really limited. Parallel Sets has much more functionality, so I would suggest you check that out before the Fineo app. I think the ParSets program is a much better option than successively splitting the categories into subsets for examination. 
For an example, using the same random data as above, here is a dot plot plotting the proportion of categories in decision 2 chosen conditional on the category chosen for decision 1. You can do the same breakdown for the change from decision 2 to decision 3, but make a small multiple chart for what the initial decision 1 was. You can continue this on infinitely (see below). It may be enlightening, but I suspect it would be fairly daunting by the time you get to many more panels. Below is, as requested, a visualization of 4 successive category choices. As noted previously, the small numbers by the time you split your graphic into so many categories are problematic. One way to account for that is to map an aesthetic such as size to the baseline on which the proportion is based. This shrinks the observations based on smaller numbers from view. You could also use transparency (but I already made the points transparent to distinguish overplotted points in this example). I imagine some were envisioning a Christmas-tree-like node structure as opposed to dot plots, but I don't know how to make such a graphic. I suspect it would be susceptible to the same overwhelming problem though. These small multiples aren't bad, but IMO the Parallel Sets is a lot more intuitive, and I suspect some non-obvious patterns would be more apparent in that visualization. Maybe someone more imaginative than me can come up with some more interesting data than just 4 random categories.
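The conditional proportions behind such a dot plot are easy to tabulate (a hedged sketch with my own toy data, not the dataset from the figures): for each decision-1 category, the shares of the decision-2 categories within it.

```python
# Conditional proportions P(dec2 = b | dec1 = a) from simulated choices,
# using only the standard library.
import random
from collections import Counter

random.seed(1)
dec1 = [random.choice("ABCD") for _ in range(1000)]
dec2 = [random.choice("ABCD") for _ in range(1000)]

counts = Counter(zip(dec1, dec2))   # joint counts of (dec1, dec2)
totals = Counter(dec1)              # marginal counts of dec1
cond = {(a, b): counts[(a, b)] / totals[a] for (a, b) in counts}

# sanity check: the proportions within each decision-1 category sum to 1
for a in "ABCD":
    print(a, round(sum(v for (k, _), v in cond.items() if k == a), 6))
```

These `cond` values are exactly what gets plotted, one panel per conditioning category as the chain of decisions grows.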
31,573
Signal extraction problem: conditional expectation of one item in sum of independent normal RVs
If $X$ and $Y$ are zero-mean independent normal random variables with variances $\sigma_X^2$ and $\sigma_Y^2$ respectively, then $X$ and $X+Y = Z$ are zero-mean jointly normal variables where $\sigma_Z^2 = \sigma_X^2 + \sigma_Y^2$ and $\text{cov}(X, Z) = \text{cov}(X,X+Y) = E[X^2] + E[XY] = \sigma_X^2$ so that the correlation coefficient is $$\rho_{X,Z} = \frac{\sigma_X^2}{\sigma_X\sqrt{\sigma_X^2 + \sigma_Y^2}} = \frac{\sigma_X}{\sqrt{\sigma_X^2 + \sigma_Y^2}}$$. The conditional distribution of $X$ given $Z = z$ is normal, and since the variables involved all have zero means, the conditional mean simplifies to $$z\cdot \rho_{X,Z}\frac{\sigma_X}{\sigma_Z} = z\cdot \frac{\sigma_X^2}{\sigma_X^2 + \sigma_Y^2}$$
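A quick Monte Carlo check of the result (my own sketch, with assumed variances $\sigma_X^2 = 4$, $\sigma_Y^2 = 1$): condition on $Z$ falling in a narrow window around $z_0$ and compare the mean of $X$ there with $z_0\,\sigma_X^2/(\sigma_X^2+\sigma_Y^2)$.

```python
# Simulate X ~ N(0, 4), Y ~ N(0, 1), Z = X + Y, and estimate E[X | Z ≈ z0]
# by averaging X over draws whose Z lands in a narrow window around z0.
import numpy as np

rng = np.random.default_rng(0)
sx2, sy2 = 4.0, 1.0
n = 2_000_000
x = rng.normal(0.0, np.sqrt(sx2), n)
z = x + rng.normal(0.0, np.sqrt(sy2), n)

z0 = 1.5
sel = np.abs(z - z0) < 0.05               # crude conditioning on Z ≈ z0
print(round(x[sel].mean(), 2), z0 * sx2 / (sx2 + sy2))  # ≈ 1.2 vs 1.2
```

The shrinkage factor 4/5 = 0.8 is exactly the $\sigma_X^2/(\sigma_X^2+\sigma_Y^2)$ of the derivation above.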
31,574
Signal extraction problem: conditional expectation of one item in sum of independent normal RVs
It is just like the linear model $Z = X + \epsilon$ (take $\epsilon \equiv Y$), but where you regress $X$ on $Z$ rather than $Z$ on $X$. The slope of the regression of $X$ on $Z$ is $\text{cor}(X,Z) \sigma_X / \sigma_Z$, and $\sigma^2_Z = \sigma^2_X + \sigma^2_Y$, while $\text{cor}(X,Z) = \text{cov}(X,Z)/[\sigma_X \sigma_Z] = \sigma_X^2/[\sigma_X \sigma_Z] = \sigma_X / \sqrt{\sigma_X^2+\sigma_Y^2}$. Thus I get the slope to be $\sigma_X^2/(\sigma_X^2+\sigma_Y^2)$, the same as you, provided that sigmaX and sigmaY are interpreted as variances.
31,575
Computing Gaussian mixture model probabilities
Yes, the two probabilities ought to be different, because one is for a mixture and the other is for a sum. Look at an example:

The thick red curve is the probability density function for a mixture of three normals ($X$). The dashed curves are its components (each scaled by $\lambda_i$); they are normal. The thick blue curve is the pdf of the normal distribution with the weighted mean and weighted variance that define $Y$; it, too, is normal. In particular, note that the possibility of the mixture having multiple modes (three in this case, between one and three in general) makes it perfectly clear the mixture is not normal in general, because normal distributions are unimodal.

The mixture can be modeled as a two-step process: first draw one of the three ordered pairs $(\mu_1, \sigma_1)$, $(\mu_2, \sigma_2)$, and $(\mu_3, \sigma_3)$ with probabilities $\lambda_1$, $\lambda_2$, and $\lambda_3$, respectively. Then draw a value $X$ from the normal distribution specified by the parameters you drew (understood as mean and standard deviation).

The weighted mean is obtained from a completely different procedure: independently draw a value $X_1$ from a normal distribution with parameters $(\lambda_1 \mu_1, \lambda_1 \sigma_1)$, a value $X_2$ from a normal distribution with parameters $(\lambda_2 \mu_2, \lambda_2 \sigma_2)$, and a value $X_3$ from a normal distribution with parameters $(\lambda_3 \mu_3, \lambda_3 \sigma_3)$. Then form their sum $Y = X_1+X_2+X_3$.
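The two sampling procedures can be simulated side by side to see that they produce very different distributions (a Python sketch with illustrative parameters, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([-3.0, 0.0, 3.0])
sigma = np.array([1.0, 0.5, 1.0])
lam = np.array([0.2, 0.5, 0.3])   # mixture weights, summing to 1
n = 500_000

# Mixture: first pick a component with probabilities lam, then draw from it
comp = rng.choice(3, size=n, p=lam)
x_mix = rng.normal(mu[comp], sigma[comp])

# Sum of independently scaled normals, as described for Y
y_sum = sum(rng.normal(lam[i] * mu[i], lam[i] * sigma[i], n) for i in range(3))

print(x_mix.mean(), y_sum.mean())   # both near the weighted mean lam @ mu
print(x_mix.var(), y_sum.var())     # very different: the mixture is much more spread out
```

The means agree (linearity of expectation), but the mixture is multimodal and far more dispersed than the sum.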
31,576
Density estimation with a truncated distribution?
The logspline package for R has the oldlogspline function which will estimate densities using a mixture of observed and censored data.
31,577
Density estimation with a truncated distribution?
The density function also has a from parameter to indicate the left-most side "of the grid at which the density is to be estimated". Continuing from the above example: lines(density(x, from = 0), col = 4, lwd = 3) However, as you can see, this gives exactly the same density estimate as above without the from parameter; it just starts from 0, that's all.
31,578
How to calculate SE for a binary measure, given sample size n and known population mean?
Each outcome may be thought of as a Bernoulli trial with success probability $p$. A ${\rm Bernoulli}(p)$ random variable has mean $p$ and variance $p(1-p)$. Therefore the average of $n$ independent ${\rm Bernoulli}(p)$ random variables also has mean $p$ and variance $p(1-p)/n$, which is typically estimated by $\hat{p}(1 - \hat{p})/n$. So, in your example the standard error of your mean estimate is $1/\sqrt{4n}$.
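The calculation is short enough to verify directly (a Python sketch; the sample size is illustrative):

```python
import math

def binary_se(p_hat: float, n: int) -> float:
    """Standard error of a sample proportion: sqrt(p(1-p)/n)."""
    return math.sqrt(p_hat * (1 - p_hat) / n)

# With p = 0.5 this reduces to 1/sqrt(4n), as in the answer
n = 100
print(binary_se(0.5, n))       # 0.05
print(1 / math.sqrt(4 * n))    # 0.05
```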
31,579
Recommend references on survey sample weighting
I guess one could start with Thomas Lumley's webpage "Survey analysis in R". He is the author of an R package called survey, and he has recently published a book, "Complex Surveys: A Guide to Analysis Using R".
31,580
Recommend references on survey sample weighting
Lucky for me, Andrew Gelman decided to discuss this topic on his blog last week! There I found the following books recommended in the comments:

Applied Survey Data Analysis by Heeringa, West & Berglund
Sampling: Design and Analysis by Sharon Lohr
Survey Methodology by Groves et al.
Struggles with Survey Weighting and Regression Modeling by Andrew Gelman, with comments from lots of people and a rejoinder from Gelman
31,581
Recommend references on survey sample weighting
The most authoritative reference is Valliant, Dever and Kreuter (2013). Practical Tools for Designing and Weighting Survey Samples, and the accompanying PracTools package. Cf. other answers so far in this thread: the authors are actual practitioners in the survey field, and producing/weighting survey samples is one of the central parts of at least Jill Dever's job. (Rick Valliant had just retired as of 2017, and Frauke Kreuter runs two graduate programs on two different continents at the moment; all three are highly regarded statisticians in the survey world). My own little piece concerns some terminology in the survey weighting literature: Kolenikov (2016). Is it post-stratification or nonresponse adjustment? -- see also references therein.
31,582
Random permutation test for feature selection
(Don't have much time now so I'll answer briefly and then expand later)

Say that we are considering a binary classification problem and have a training set of $m$ class 1 samples and $n$ class 2 samples. A permutation test for feature selection looks at each feature individually. A test statistic $\theta$, such as information gain or the normalized difference between the means, is calculated for the feature. The data for the feature are then randomly permuted and partitioned into two sets, one of size $m$ and one of size $n$. The test statistic $\theta_p$ is then calculated based on this new partition $p$. Depending on the computational complexity of the problem, this is repeated either over all possible partitions of the feature into two sets of sizes $m$ and $n$, or over a random subset of them.

Now that we have established a distribution over $\theta_p$, we calculate the p-value that the observed test statistic $\theta$ arose from a random partition of the feature. The null hypothesis is that samples from each class come from the same underlying distribution (the feature is irrelevant).

This process is repeated over all features, and then the subset of features used for classification can be selected in two ways: the $N$ features with the lowest p-values, or all features with a p-value $< \epsilon$.
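The per-feature procedure described above can be sketched as follows (a Python sketch; the difference-of-means statistic, sample sizes, and number of permutations are illustrative choices, not from the original answer):

```python
import numpy as np

def permutation_pvalue(x1, x2, stat, n_perm=2000, rng=None):
    """One-feature permutation test.

    x1, x2 : feature values for the two classes (sizes m and n).
    stat   : test statistic taking two arrays, e.g. difference of means.
    Returns the fraction of random partitions whose statistic is at
    least as extreme (in absolute value) as the observed one.
    """
    rng = rng or np.random.default_rng()
    observed = abs(stat(x1, x2))
    pooled = np.concatenate([x1, x2])
    m = len(x1)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random re-partition
        if abs(stat(pooled[:m], pooled[m:])) >= observed:
            hits += 1
    return hits / n_perm

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1, 50)   # class 1
b = rng.normal(1.0, 1, 50)   # class 2: shifted mean, so p should be small
p = permutation_pvalue(a, b, lambda u, v: u.mean() - v.mean(), rng=rng)
print(p)
```

In a full feature-selection run this function would be applied to every feature, and the resulting p-values ranked or thresholded as described.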
31,583
Is it wrong to jitter before performing Wilcoxon test?
There's a thread on the R-help list about this; see for example: http://tolstoy.newcastle.edu.au/R/e8/help/09/12/9200.html The first suggestion there is to repeat the test a large number of times with different jittering and then combine the p-values to get an overall p-value, either by taking an average or a maximum. They also suggest that a straightforward permutation test could be used instead (of the two, that's what I'd prefer). See the question Which permutation test implementation in R to use instead of t-tests (paired and non-paired)? for some examples of permutation tests. Elsewhere in that thread, Greg Snow writes: Adding random noise to data in order to avoid a warning is like removing the batteries from a smoke detector to silence it rather than investigating what is causing the alarm to go off. (See http://tolstoy.newcastle.edu.au/R/e8/help/09/12/9195.html )
31,584
Is it wrong to jitter before performing Wilcoxon test?
(disclaimer: I didn't check the code, my answer is just based on your description) I have the feeling that what you want to do is a really bad idea. Wilcoxon is a resampling (or randomization) test for ranks. That is, it takes the rank of the values and compares these ranks to all possible permutations of the ranks (see e.g., here). So, as you realized, ties are pretty bad as you don't get ranks out of them. However, adding random noise (jitter) to your data will transform all ranks, so that they have random ranks! That is, it distorts your data severely. Therefore: It is wrong to do so.
31,585
Is it wrong to jitter before performing Wilcoxon test?
You've asked several people what you should do now. In my view, what you should do now is accept that the proper p-value here is 1.000. Your groups don't differ.
31,586
Jensen-Shannon divergence for bivariate normal distributions
The midpoint measure $\newcommand{\bx}{\mathbf{x}} \newcommand{\KL}{\mathrm{KL}}M$ is a mixture distribution of the two multivariate normals, so it does not have the form that you give in the original post. Let $\varphi_p(\bx)$ be the probability density function of a $\mathcal{N}(\mu_p, \Sigma_p)$ random vector and $\varphi_q(\bx)$ be the pdf of $\mathcal{N}(\mu_q, \Sigma_q)$. Then the pdf of the midpoint measure is $$ \varphi_m(\bx) = \frac{1}{2} \varphi_p(\bx) + \frac{1}{2} \varphi_q(\bx) \> . $$ The Jensen-Shannon divergence is $$ \mathrm{JSD} = \frac{1}{2} (\KL(P\,\|M)+ \KL(Q\|M)) = h(M) - \frac{1}{2} (h(P) + h(Q)) \>, $$ where $h(P)$ denotes the (differential) entropy corresponding to the measure $P$. Thus, your calculation reduces to calculating differential entropies.

For the multivariate normal $\mathcal{N}(\mu, \Sigma)$, the answer is well known to be $$ \frac{1}{2} \log_2\big((2\pi e)^n |\Sigma|\big) $$ and the proof can be found in any number of sources, e.g., Cover and Thomas (1991), pp. 230-231. It is worth pointing out that the entropy of a multivariate normal is invariant with respect to the mean, as the expression above shows. However, this almost assuredly does not carry over to the case of a mixture of normals. (Think about picking one broad normal centered at zero and another concentrated normal where the latter is pushed out far away from the origin.)

For the midpoint measure, things appear to be more complicated. As far as I know, there is no closed-form expression for the differential entropy $h(M)$. Searching on Google yields a couple of potential hits, but the top ones don't appear to give closed forms in the general case. You may be stuck with approximating this quantity in some way.

Note also that the paper you reference does not restrict the treatment to only discrete distributions. They treat a case general enough that your problem falls within their framework. See the middle of column two on page 1859. Here it is also shown that the divergence is bounded. This holds for the case of two general measures and is not restricted to the case of two discrete distributions. The Jensen-Shannon divergence has come up a couple of times recently in other questions on this site. See here and here.

Addendum: Note that a mixture of normals is not the same as a linear combination of normals. The simplest way to see this is to consider the one-dimensional case. Let $X_1 \sim \mathcal{N}(-\mu, 1)$ and $X_2 \sim \mathcal{N}(\mu, 1)$ and let them be independent of one another. Then a mixture of the two normals using weights $(\alpha, 1-\alpha)$ for $\alpha \in (0,1)$ has the distribution $$ \varphi_m(x) = \alpha \cdot \frac{1}{\sqrt{2\pi}} e^{-\frac{(x+\mu)^2}{2}} + (1-\alpha) \cdot \frac{1}{\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2}} \> . $$ The distribution of a linear combination of $X_1$ and $X_2$ using the same weights is, via the stability property of the normal distribution, $$ \varphi_{\ell}(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-(1-2\alpha)\mu)^2}{2\sigma^2}} \>, $$ where $\sigma^2 = \alpha^2 + (1-\alpha)^2$. These two distributions are very different, though they have the same mean. This is not an accident and follows from linearity of expectation.

To understand the mixture distribution, imagine that you had to go to a statistical consultant so that she could produce values from this distribution for you. She holds one realization of $X_1$ in one palm and one realization of $X_2$ in the other palm (though you don't know which of the two palms each is in). Now, her assistant flips a biased coin with probability $\alpha$ out of sight of you and then comes and whispers the result into the statistician's ear. She opens one of her palms and shows you the realization, but doesn't tell you the outcome of the coin flip. This process produces the mixture distribution. On the other hand, the linear combination can be understood in the same context: the statistical consultant merely takes both realizations, multiplies the first by $\alpha$ and the second by $(1-\alpha)$, adds them up, and shows you the result.
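The closed-form differential entropy of a multivariate normal, $\frac{1}{2}\log_2\big((2\pi e)^n |\Sigma|\big)$, is easy to compute numerically (a Python sketch; the covariance matrix is illustrative):

```python
import numpy as np

def mvn_entropy_bits(cov):
    """Differential entropy of N(mu, cov) in bits: 0.5 * log2((2*pi*e)^n * |cov|).

    Note the mean does not appear: the entropy is invariant to it.
    """
    n = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)   # stable log-determinant
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (n * np.log2(2 * np.pi * np.e) + logdet / np.log(2))

cov = np.array([[2.0, 0.3],
                [0.3, 1.0]])
print(mvn_entropy_bits(cov))
```

The entropy $h(M)$ of the mixture has no such closed form and would need numerical approximation, e.g. by Monte Carlo.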
31,587
Jensen-Shannon divergence for bivariate normal distributions
Cardinal's answer is correct. You are trying to get a closed-form solution for the Jensen-Shannon divergence of two Gaussians; no such solution exists. However, you can calculate Jensen-Shannon to arbitrary precision by using Monte Carlo sampling. What you require is a way to calculate $KLD(P\|M)$, and by extension $KLD(Q\|M)$. The Kullback-Leibler divergence is defined as: $$ KLD(P\|M) = \int P(x) \log\Big(\frac{P(x)}{M(x)}\Big)\, dx $$ The Monte Carlo approximation of this is: $$ KLD_{approx}(P\|M) = \frac{1}{n} \sum^n_{i=1} \log\Big(\frac{P(x_i)}{M(x_i)}\Big) $$ where the $x_i$ have been sampled from $P(x)$, which is easy as it is a Gaussian in your case. As $n \to \infty$, $KLD_{approx}(P\|M) \to KLD(P\|M)$. $M(x_i)$ can be calculated as $M(x_i) = \frac{1}{2}P(x_i) + \frac{1}{2}Q(x_i)$.
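The Monte Carlo recipe above can be sketched as follows (a Python sketch with illustrative means and covariances; the helper names are my own, not from the answer):

```python
import numpy as np

def mvn_pdf(x, mean, cov):
    """Density of N(mean, cov) evaluated at the rows of x."""
    d = x - mean
    n = len(mean)
    inv = np.linalg.inv(cov)
    norm = 1.0 / np.sqrt((2 * np.pi) ** n * np.linalg.det(cov))
    return norm * np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, inv, d))

def jsd_mc(mu_p, cov_p, mu_q, cov_q, n=100_000, rng=None):
    """Monte Carlo estimate of the Jensen-Shannon divergence (in nats)."""
    rng = rng or np.random.default_rng()
    def kld_to_mixture(mu_a, cov_a, mu_b, cov_b):
        x = rng.multivariate_normal(mu_a, cov_a, size=n)   # sample from P (or Q)
        pa = mvn_pdf(x, np.asarray(mu_a), cov_a)
        m = 0.5 * pa + 0.5 * mvn_pdf(x, np.asarray(mu_b), cov_b)
        return np.mean(np.log(pa / m))
    return 0.5 * (kld_to_mixture(mu_p, cov_p, mu_q, cov_q)
                  + kld_to_mixture(mu_q, cov_q, mu_p, cov_p))

j = jsd_mc([0, 0], np.eye(2), [1, 1], np.array([[1, 0.2], [0.2, 1]]),
           rng=np.random.default_rng(3))
print(j)   # bounded between 0 and log(2)
```

As a sanity check, the estimate is 0 when the two distributions are identical and never exceeds $\log 2$ nats, the known upper bound on the JSD.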
31,588
Meta analysis on studies with 0-frequency cells
Seems to me this is one of the rare situations where it might well be better to meta-analyse risk differences rather than risk ratios or odds ratios. The risk difference $P(Kid_+ | Mum_+) - P(Kid_+|Mum_-)$ is estimated in each study by $D/(B+D) - C/(A+C)$. That should be finite in all studies even when $C=0$, so there should be no problem meta-analysing it. I agree it seems pretty pointless to consider testing the hypothesis that this risk difference is zero. But it's meaningful to estimate how large it is, i.e. how much more likely a kid is to have the virus when their mum has it than when their mum doesn't.
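A small Python sketch (the 2×2 counts below are invented) showing that each study's risk difference stays finite even with $C=0$, pooled here with simple inverse-variance fixed-effect weights:

```python
import numpy as np

# Hypothetical studies, each as (A, B, C, D), where C/(A+C) estimates
# P(Kid+ | Mum-) and D/(B+D) estimates P(Kid+ | Mum+)
studies = [(40, 10, 0, 5), (30, 20, 2, 8), (50, 15, 1, 6)]

rds, weights = [], []
for a, b, c, d in studies:
    p_exposed = d / (b + d)
    p_unexposed = c / (a + c)
    rd = p_exposed - p_unexposed           # finite even when c == 0
    # Large-sample variance of a risk difference
    var = (p_exposed * (1 - p_exposed) / (b + d)
           + p_unexposed * (1 - p_unexposed) / (a + c))
    rds.append(rd)
    weights.append(1.0 / var)

pooled_rd = np.average(rds, weights=weights)  # fixed-effect pooled estimate
```

Note the pooled value is a weighted average of the per-study differences, so it always lies between the smallest and largest study estimate.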
31,589
Meta analysis on studies with 0-frequency cells
Usually 0's imply that you have to use exact methods instead of relying on asymptotic methods such as meta-analysis with odds ratios. If you are willing to assume that the study effect is fixed, an exact Mantel-Haenszel test is the way to go. For an exact random effects analysis, you have to use a binomial regression model with a random study effect. I have done both in a recent applied paper, but the methods section there would not be more helpful to you, as it essentially conveys this information. Edit This paper is not applied, but this is where I got the idea from when confronted with the same issue: [1] Hans C. van Houwelingen, Lidia R. Arends, and Theo Stijnen. Advanced methods in meta-analysis: multivariate approach and meta-regression. Statistics in Medicine, 2002; 21:589–624 Here is the paper where I used this approach (it is not apparent in the abstract, but is mentioned in the methods section): [2] Trivedi H, Nadella R, Szabo A. Hydration with sodium bicarbonate for the prevention of contrast-induced nephropathy: a meta-analysis of randomized controlled trials. Clin Nephrol. 2010 Oct;74(4):288-96.
31,590
Meta analysis on studies with 0-frequency cells
The metafor package's documentation says that "Adding a small constant to the cells of the 2x2 tables is a common solution to this problem." and also provides an option to do this within the call for rma().
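To see what such a correction does, here is a hedged Python illustration (made-up counts; 0.5 added to every cell when any cell is zero, the common Haldane-Anscombe continuity correction): without it, the log odds ratio of a table containing a zero is undefined.

```python
import math

def log_odds_ratio(a, b, c, d, add=0.5):
    # Add a small constant to every cell when a zero is present,
    # so that the (log) odds ratio is defined
    if 0 in (a, b, c, d):
        a, b, c, d = (x + add for x in (a, b, c, d))
    return math.log((a * d) / (b * c))

lor = log_odds_ratio(12, 0, 30, 25)   # finite thanks to the correction
```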
31,591
How do I calculate if the degree of overlap between two lists is significant?
If I understand your question correctly, you need to use the Hypergeometric distribution. This distribution is usually associated with urn models, i.e. there are $n$ balls in an urn, $y$ are painted red, and you draw $m$ balls from the urn. Then if $X$ is the number of balls in your sample of $m$ that are red, $X$ has a hypergeometric distribution. For your specific example, let $n_A$, $n_B$ and $n_C$ denote the lengths of your three lists and let $n_{AB}$ denote the overlap between $A$ and $B$. Then $$n_{AB} \sim \text{HG}(n_A, n_C, n_B)$$ To calculate a p-value, you could use this R command:

#Some example values
n_A = 100; n_B = 200; n_C = 500; n_A_B = 50
1-phyper(n_A_B, n_B, n_C-n_B, n_A)
[1] 0.008626697

Word of caution. Remember multiple testing, i.e. if you have lots of A and B lists, then you will need to adjust your p-values with a correction, for example the FDR or Bonferroni corrections.
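The same tail probability can be reproduced outside R; a pure-Python sketch with the same example values, writing the pmf directly from the hypergeometric definition:

```python
from math import comb

n_A, n_B, n_C, n_A_B = 100, 200, 500, 50

def hyper_pmf(k, M, n, N):
    # P(X = k) when drawing N items from a population of M containing n "successes"
    return comb(n, k) * comb(M - n, N - k) / comb(M, N)

# P(overlap > n_A_B), matching 1 - phyper(n_A_B, n_B, n_C - n_B, n_A)
p_value = sum(hyper_pmf(k, n_C, n_B, n_A) for k in range(n_A_B + 1, n_A + 1))
```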
31,592
How do I calculate if the degree of overlap between two lists is significant?
csgillespie's answer seems correct except for one thing: it gives the probability of seeing strictly more than n_A_B in the overlap, P(x > n_A_B), but I think the OP wants the p-value P(x >= n_A_B). You could get the latter by

n_A = 100; n_B = 200; n_C = 500; n_A_B = 50
phyper(n_A_B - 1, n_A, n_C-n_A, n_B, lower.tail = FALSE)
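The two tails differ only by the single point mass at n_A_B; a quick pure-Python check with the same example values:

```python
from math import comb

n_A, n_B, n_C, n_A_B = 100, 200, 500, 50

def pmf(k):
    # Hypergeometric P(X = k): n_A draws from n_C items, n_B of which are in list B
    return comb(n_B, k) * comb(n_C - n_B, n_A - k) / comb(n_C, n_A)

p_strict = sum(pmf(k) for k in range(n_A_B + 1, n_A + 1))  # P(X >  n_A_B)
p_incl = sum(pmf(k) for k in range(n_A_B, n_A + 1))        # P(X >= n_A_B)
```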
31,593
How do I calculate the probability of 2 sets of random integers to intersect?
A simple intuition why you should get the hypergeometric distribution as answered by Ben: The process is the same as having 100 white balls, colouring a random selection of size 50 red, then putting all 100 in an urn and sampling 25 balls from that urn randomly without repetitions. The number of red balls in this process will be hypergeometrically distributed. Without using the formula for the hypergeometric distribution you can compute this more straightforwardly by multiplying the individual probabilities of each single draw, i.e. the probability of drawing a red ball on the first draw, times the probability of drawing a red ball on the second draw, etc.: $$P(\text{all red}) = \frac{50}{100}\frac{49}{99}\frac{48}{98} \dots \frac{26}{76} = \frac{50!\,75!}{25!\,100!}$$
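Both sides of this identity can be checked exactly with rational arithmetic (Python sketch):

```python
from fractions import Fraction
from math import factorial

# Left side: the telescoping product 50/100 * 49/99 * ... * 26/76
prod = Fraction(1)
for i in range(25):
    prod *= Fraction(50 - i, 100 - i)

# Right side: the closed form 50! 75! / (25! 100!)
closed = Fraction(factorial(50) * factorial(75),
                  factorial(25) * factorial(100))
```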
31,594
How do I calculate the probability of 2 sets of random integers to intersect?
This is a simple problem, and it can be generalised easily to sets of any size. For more general application, suppose your initial population has $N$ distinct objects and your samples are denoted as $\mathscr{S}_1$ and $\mathscr{S}_2$, containing $n_1$ and $n_2$ elements respectively. Since each sample is taken without repetition of elements, the first sample will contain $n_1$ distinct elements and there will be $N-n_1$ unsampled elements. The probability of a non-empty intersection is equal to the probability that the second sample contains any of the $n_1$ elements of the first sample, which can be found using the hypergeometric distribution as follows: $$\begin{align} \mathbb{P}(\text{Samples intersect}) &= \mathbb{P}(\mathscr{S}_1 \cap \mathscr{S}_2 \neq \varnothing) \\[18pt] &= 1-\mathbb{P}(\mathscr{S}_1 \cap \mathscr{S}_2 = \varnothing) \\[18pt] &= 1-\text{Hyper}(0|N, n_1, n_2) \\[12pt] &= 1-\frac{N-n_1 \choose n_2}{N \choose n_2} \\[6pt] &= 1-\frac{(N-n_1)!}{n_2! (N-n_1-n_2)!} \cdot \frac{n_2! (N-n_2)!}{N!} \\[6pt] &= 1-\frac{(N-n_1)! (N-n_2)!}{N! (N-n_1-n_2)!}. \\[6pt] \end{align}$$ In your case you have $N=100$, $n_1=50$ and $n_2=25$ which gives: $$\begin{align} \mathbb{P}(\text{Samples intersect}) &= 1-\frac{(100-50)! (100-25)!}{100! (100-50-25)!} \\[6pt] &= 1-\frac{50! \ 75!}{100! \ 25!} \\[12pt] &= 1-5.212394 \times 10^{-10} \\[18pt] &\approx 1. \\[6pt] \end{align}$$
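Wrapped as a small function (Python sketch; exact factorials are fine at this scale, use logarithms for very large inputs), the formula reproduces the numbers above:

```python
from math import factorial

def p_no_overlap(N, n1, n2):
    # (N - n1)! (N - n2)! / (N! (N - n1 - n2)!), requires n1 + n2 <= N
    return ((factorial(N - n1) * factorial(N - n2))
            / (factorial(N) * factorial(N - n1 - n2)))

p_empty = p_no_overlap(100, 50, 25)
p_intersect = 1 - p_empty
```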
31,595
Why does somebody argue that the number of bootstrap replications should not be a multiple of 10?
I believe that the comment is in reference to the number of bootstrap samples rather than to the size of each sample. It might simplify estimates of some common percentiles from bootstrapped samples. Davison and Hinkley argue on pages 18-19 that if $X_1,...X_N$ are independently distributed with CDF $K$ and if $X_{(j)}$ denotes the $j$th ordered value, then $$\text{E}(X_{(j)}) = K^{-1}\left(\frac{j}{N+1}\right) .$$ This implies that a sensible estimate of $K^{-1}(p)$ is $X_{((N+1)p)}$, assuming that $(N+1)p$ is an integer. So if you want to estimate the 2.5th and 97.5th quantiles to get a 95% confidence interval for some value, take 999 bootstrap samples, put the values in order, and select the 25th and the 975th. I suppose that avoids choosing among the multiple ways of estimating quantiles. Davison, A. C. and Hinkley, D. V. Bootstrap Methods and their Application, Cambridge University Press, 1997.
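A minimal Python sketch of this recipe (simulated data; 999 resamples so that $(N+1)p$, i.e. 1000·0.025 = 25 and 1000·0.975 = 975, are integers):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(size=60)   # some observed sample

B = 999
stats = np.sort([rng.choice(x, size=x.size, replace=True).mean()
                 for _ in range(B)])

# The 25th and 975th ordered values (1-based) estimate the
# 2.5th and 97.5th percentiles; 0-based indices 24 and 974
lo, hi = stats[24], stats[974]
```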
31,596
How can I reduce the number of times people are randomly assigned to the same team when using a random number generator?
Your problem seems to be twofold: people notice when they have a common partner two weeks (rounds) in a row. They also notice when they have a common partner frequently. As it turns out, limiting the former possibility also keeps the latter rate low. So, let's solve the first problem. A quick and simple solution is rejection sampling. The way it works here is to conceive of your assignment for a round as being a random sample of all possible assignments (of which there are over $10^{16}$), a set $\Omega.$ After any number $r$ of rounds, suppose you have specified selection probabilities for every possible assignment, based somehow on the results of the previous rounds. To generate the team assignments for the next round, you repeatedly draw an assignment $\omega$ from $\Omega,$ compare its selection probability $\Pr(\omega)$ to a randomly generated probability $U,$ and accept that assignment when $U\le \Pr(\omega).$ Perhaps the simplest example of this is to set the selection probabilities to zero for any assignment that would pair two teammates who had just played together in the previous round. The algorithm will either not terminate (because no random selection meeting your criteria is possible) or it will yield a satisfactory assignment. Here is an example of the first two rounds of such a schedule showing the first and second values of $\omega$ that were (randomly) selected:

, , Round = 1

     Team
      1  2  3  4  5  6  7
[1,]  4 15  6  3  1  2 11
[2,]  5 16 13  8  7 14 12
[3,]  9 24 18 19 10 22 23
[4,] 25 27 21 20 17 26 28

, , Round = 2

     Team
      1  2  3  4  5  6  7
[1,]  3  2 16  6  9  1 10
[2,]  5  8 17 11 19  4 20
[3,]  7 12 18 15 26 21 23
[4,] 14 13 25 22 27 28 24

You may check, by inspection, that no pair of people teamed in the first round were teamed again in the second round. Another version of this would be to keep track of how often each pair of people has been teamed in the past and reduce the selection probabilities in inverse proportion to these frequencies.
This is one way to minimize the rate of assignments to the same team, but by itself does not solve the first problem of having a common partner two weeks in a row. Here is a summary of the results of the simplest procedure, run for 52 rounds. There were $6$ pairs who never played together, $16$ who played together once, $38$ who played together twice, ..., and $1$ pair who played together in thirteen of the 52 rounds. And of course, by construction, nobody ever had a common teammate two rounds in a row. On average, each person played with any specific other person less than six times (because there are $28-1=27$ other people available and $4-1=3$ others in one's team, for an average chance of teaming with someone equal to $3/27=1/9$ per round; and $52/9 \lt 6.$) The distribution of these pair counts is remarkably similar to a Poisson distribution of rate $52/9.$ This relationship makes the distribution of pair counts predictable before you ever generate a schedule. Another nice feature of this approach is that it is "online" in the sense that you can apply it, without any modification, to extend an existing schedule. This R code generated the example and the figure. 
pick <- function(n, k) matrix(sample.int(n*k, n*k), n)
evaluate <- function(X, A) max(apply(X, 1, function(x) max(A[x, x]))) > 0
update <- function(X, A) {
  I <- t(matrix(apply(X, 1, combn, m=2), 2))
  A[I] <- A[I] + 1 # Increment pair counts
  A
}

n <- 7  # Number of teams
k <- 4  # Players per team
N <- 52 # Number of rounds to schedule
A <- matrix(0, k*n, k*n) # Tracks the pairings from the previous round
Schedule <- array(NA, c(k, n, N), dimnames=list(Position=1:k, Team=1:n, Round=1:N))
set.seed(17)
for (round in 1:N) {
  while(evaluate(X <- pick(n, k), A)) {}   # Rejection sampling
  A <- update(X, matrix(0, k*n, k*n))      # Record the results for future selections
  Schedule[,,round] <- t(X)                # Save this assignment in the schedule
}
Schedule2 <- apply(Schedule, c(2,3), sort) # Make the schedule easier to read
(Schedule2[,,1:2])                         # Inspect it
#
# Plot a summary of this schedule.
#
check <- function(Schedule) {
  stopifnot(unique(apply(Schedule, 3, function(x) length(unique(c(x))))) == n*k)
  Pairs <- matrix(apply(Schedule, 3, function(S) apply(S, 2, combn, m=2)), 2)
  A <- table(Pairs[1, ], Pairs[2, ])
  table(A[lower.tri(A)] + t(A)[lower.tri(A)])
}
y <- c(check(Schedule))
i <- as.numeric(names(y))
plot(i, y, type="h", lwd=2, ylim=c(0, max(y)),
     main=paste("Common Teammate Frequencies in", N, "Rounds"))
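For readers who prefer Python, here is a hedged re-implementation of the simplest rule (reject any assignment that repeats a pair from the immediately preceding round); it is a sketch with smaller round counts, not a translation of the R code above:

```python
import itertools
import numpy as np

rng = np.random.default_rng(17)
n_teams, team_size, n_rounds = 7, 4, 8
n_people = n_teams * team_size

def teammate_pairs(assignment):
    # Set of unordered teammate pairs implied by a flat assignment of people
    pairs = set()
    for t in range(n_teams):
        team = sorted(assignment[t * team_size:(t + 1) * team_size])
        pairs.update(itertools.combinations(team, 2))
    return pairs

schedule, prev_pairs = [], set()
for _ in range(n_rounds):
    while True:  # rejection sampling
        candidate = [int(p) for p in rng.permutation(n_people)]
        new_pairs = teammate_pairs(candidate)
        if not new_pairs & prev_pairs:   # no pair repeats from the last round
            break
    schedule.append(candidate)
    prev_pairs = new_pairs
```

With 28 people in teams of 4, roughly one candidate in a hundred is accepted, so the loop finishes quickly in practice.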
31,597
How prior distribution over neural networks parameters is implemented in practice?
A zero-mean, isotropic multivariate Gaussian prior on the network weights $\theta$ reduces to a penalty on the $L^2$ norm of the parameter vector $\theta$. Finding the MAP estimate means maximizing the posterior $p(\theta|x,y) \propto p(y|x,\theta)\,p(\theta)$, which is equivalent to minimizing its negative logarithm: $$\begin{align} p(\theta|x,y) &\propto p(y|x,\theta)\,p(\theta) \\ -\ln p(\theta|x,y) &= L(y|x,\theta) - \ln p(\theta) + \text{const} \\ &= L(y|x,\theta) - \ln \left[ (2\pi)^{ -\frac{k}{2} } \det(\Sigma)^{ -\frac{1}{2} } \exp \left( -\frac{1}{2} (\theta - \mu)^T \Sigma^{-1} (\theta - \mu) \right) \right] + \text{const} \\ &= L(y|x,\theta) + \frac{1}{2}\theta^T \left(\sigma^2 I \right)^{-1}\theta +C \\ &= L(y|x,\theta) + \frac{\sigma^{-2}}{2}\theta^T \theta \\ &= L(y|x,\theta) + \frac{\lambda}{2} \| \theta \|_2^2 \end{align}$$ where $ L(y|x,\theta)=-\ln p(y|x,\theta)$ is your loss function (e.g. mean square error or categorical cross-entropy loss), the negative log likelihood given the model, the parameters $\theta$, and the data $(x,y)$. Some notes about this derivation: The fourth line substitutes the zero-mean isotropic prior, $\mu = 0$ and $\Sigma = \sigma^2 I$. The last line makes the substitution $\sigma^{-2}=\lambda$ and writes the penalty as a norm to make the connection to ridge regression more apparent. We can neglect the constant additive terms $C=-\frac{1}{2}\left(k\ln(2\pi)+\ln|\Sigma|\right)$, along with the posterior's normalizing constant, because they do not depend on $\theta$; including them changes the value of the objective at its extremum, but not the extremum's location. This is given as a generic statement about any loss $L$ which can be expressed as the negative log of the probability, so if you're working on a classification problem, a regression problem, or any problem formulated as a probability model, you can just substitute the appropriate expression for $L$. Of course, if you're interested in Bayesian methods, you might not wish to be constrained solely to MAP estimates of the model.
Radford Neal looks at some methods to utilize the posterior distribution of $\theta$ in his book Bayesian Learning for Neural Networks, including MCMC to estimate neural networks. Since publication, there are probably many more works which have taken these concepts even further. One could optimize this augmented loss function directly. Alternatively, it could be implemented as weight decay during training; PyTorch does it this way, for instance. The reason you might want to implement weight decay as a component of the optimizer (as opposed to just using autograd on the regularized loss) is that the gradient update looks like $$\begin{align} \theta_{i+1} &= \theta_i - \eta \frac{\partial}{\partial \theta} \left[L + \frac{\lambda}{2}\| \theta \|_2^2 \right]\\ &= \theta_i - \eta \frac{\partial L}{\partial \theta} - \eta \lambda \theta_i \\ &= (1 - \eta \lambda) \theta_i - \eta \frac{\partial L}{\partial \theta} \end{align}$$ where $\theta_i$ is the parameter vector at the $i$th optimization step and $\eta$ is the learning rate. But when using adaptive optimizers (e.g. Adam), the effect of weight decay is slightly different; see "Decoupled Weight Decay Regularization" by Ilya Loshchilov and Frank Hutter.
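The claimed equivalence between the penalized gradient step and the decayed step is easy to verify numerically (NumPy sketch with arbitrary values):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=5)   # current parameters theta_i
grad = rng.normal(size=5)    # dL/dtheta at theta_i
eta, lam = 0.1, 0.01         # learning rate and L2 strength

# Gradient step on the regularized objective L + (lam/2) ||theta||_2^2
step_regularized = theta - eta * (grad + lam * theta)

# "Weight decay" form: shrink the weights, then take a plain gradient step
step_decay = (1 - eta * lam) * theta - eta * grad
```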
How prior distribution over neural networks parameters is implemented in practice?
A zero-mean, isotropic multivariate gaussian prior on the network weights $\theta$ reduces to a penalty on the $L^2$ norm on the parameter vector $\theta$. Finding the MAP estimate of the posterior re
How prior distribution over neural networks parameters is implemented in practice? A zero-mean, isotropic multivariate gaussian prior on the network weights $\theta$ reduces to a penalty on the $L^2$ norm on the parameter vector $\theta$. Finding the MAP estimate of the posterior reduces to maximizing the probability $p(y|x) = p(y|x,\theta)p(\theta) $, which is equivalent to minimizing the negative logarithm of the same: $$\begin{align} p(y|x) &= p(y|x,\theta)p(\theta) \\ -\ln p(y|x) &= L(y|x,\theta) - \ln p(\theta) \\ &= L(y|x,\theta) - \ln \left[ (2\pi)^{ -\frac{k}{2} } \det(\Sigma)^{ -\frac{1}{2} } \exp \left( -\frac{1}{2} (\theta - \mu)^T \Sigma^{-1} (\theta - \mu) \right) \right] \\ &= L(y|x,\theta) + \frac{1}{2}\theta^T \left(\sigma^2 I \right)^{-1}\theta +C \\ &= L(y|x,\theta) + \frac{\sigma^{-2}}{2}\theta^T \theta \\ &= L(y|x,\theta) + \frac{\lambda}{2} \| \theta \|_2^2 \end{align}$$ where $ L(y|x,\theta)=-\ln p(y|x,\theta)$ is your loss function (e.g. mean square error or categorical cross-entropy loss), the negative log likelihood given the model, the parameters $\theta$, and the data $(x,y)$. Some notes about this derivation: The last line makes the substitution $\sigma^{-2}=\lambda$ and writes the penalty as a norm to make the connection to ridge regression more apparent. We can neglect the constant additive terms $C=-\frac{1}{2}\left(k\ln(2\pi)+\ln|\Sigma|\right)$ because they do not depend on $\theta$; including them will change the value of the extrema, but not its location. This is given as a generic statement about any loss $L$ which can be expressed as the negative log of the probability, so if you're working on a classification problem, a regression problem, or any problem formulated as a probability model, you can just substitute the appropriate expression for $L$. Of course, if you're interested in Bayesian methods, you might not wish to be constrained solely to MAP estimates of the model. 
Radford Neal looks at some methods that use the full posterior distribution of $\theta$ in his book Bayesian Learning for Neural Networks, including MCMC for estimating neural networks. Since its publication, many more works have taken these concepts further.

One could optimize this augmented loss function directly. Alternatively, it can be implemented as weight decay during training; PyTorch does it this way, for instance. The reason you might want to implement weight decay as a component of the optimizer (as opposed to just running autograd on the regularized loss) is that the gradient update looks like $$\begin{align} \theta_{i+1} &= \theta_i - \eta \frac{\partial}{\partial \theta} \left[L + \frac{\lambda}{2}\| \theta \|_2^2 \right]\\ &= \theta_i - \eta \frac{\partial L}{\partial \theta} - \eta \lambda \theta_i \\ &= (1 - \eta \lambda) \theta_i - \eta \frac{\partial L}{\partial \theta} \end{align}$$ where $\theta_i$ is the parameter vector at the $i$th optimization step and $\eta$ is the learning rate. But when using adaptive optimizers (e.g. Adam), the effect of weight decay is slightly different; see "Decoupled Weight Decay Regularization" by Ilya Loshchilov and Frank Hutter.
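The equivalence of the two update forms above can be checked numerically. Below is a minimal sketch in plain NumPy (a hypothetical least-squares example, not tied to any framework): one gradient step on the regularized objective $L + \frac{\lambda}{2}\|\theta\|_2^2$ matches the "shrink then step" weight-decay form $(1-\eta\lambda)\theta_i - \eta\,\partial L/\partial\theta$.

```python
import numpy as np

# Hypothetical example: linear least-squares data loss, plain SGD.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

theta = rng.normal(size=3)
eta, lam = 0.01, 0.1                           # learning rate, weight-decay strength

grad_L = 2 * X.T @ (X @ theta - y) / len(y)    # gradient of the data loss alone

# (a) one step on the full regularized objective L + (lam/2)*||theta||^2
step_regularized = theta - eta * (grad_L + lam * theta)

# (b) "weight decay" form: shrink theta, then apply the plain data gradient
step_decay = (1 - eta * lam) * theta - eta * grad_L

assert np.allclose(step_regularized, step_decay)
```

With adaptive optimizers the two forms stop coinciding, which is exactly the point of the Loshchilov and Hutter paper cited above.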
31,598
Why doesn't Logistic Regression require heteroscedasticity and normality of the residuals, neither a linear relationship?
Isn't that something that would require linear relationships?

The assumption is that the effect of covariates is linear on the log-odds scale. You might see logistic regression written as $$ \operatorname{logit}(p) = X \beta $$ Here, $\operatorname{logit}(p) = \log\left( \frac{p}{1-p} \right)$. Additionally, remember that linearity does not mean straight lines in a GLM.

Regarding the errors, the normality assumption isn't required because the errors will be zero or 1?

Not quite. Logistic regression estimates a probability, so the error (meaning observation minus prediction) will be between $-1$ and $1$.

Why doesn't Logistic Regression require the error and linear relationship assumptions that Linear Regression requires?

Logistic regression is still a linear model; it is just linear in a different space so as to respect the constraint that $0 \leq p \leq 1$. As for your titular question regarding the error term and its variance, note that a binomial random variable's variance depends on its mean ($\operatorname{Var}(X) = np(1-p)$). Hence, the variance changes as the mean changes, meaning the variance is (technically) heteroskedastic (i.e. non-constant, or at the very least changes based on what $X$ is, because $p$ changes based on $X$).
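Both points can be seen in a few lines. The sketch below (with made-up coefficients, purely illustrative) shows that the model is exactly linear on the logit scale, and that the Bernoulli variance $p(1-p)$ changes with $x$, i.e. is non-constant:

```python
import numpy as np

# Illustrative coefficients; the linear predictor lives on the log-odds scale.
beta0, beta1 = -1.0, 2.0
x = np.array([-2.0, 0.0, 2.0])

log_odds = beta0 + beta1 * x              # linear predictor X @ beta
p = 1 / (1 + np.exp(-log_odds))           # inverse logit maps to (0, 1)
var = p * (1 - p)                         # Bernoulli variance, depends on p

# linearity holds exactly on the logit scale ...
assert np.allclose(np.log(p / (1 - p)), log_odds)
# ... and the variance differs across x (heteroskedastic by construction)
assert var.std() > 0
```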
31,599
Why do we write models in the form of E(Y)?
Because this generalizes nicely to other types of regression models. That way we can express them all in a similar way (possibly with a link function - in your linear regression case the link function is just the identity function). This makes it easier to think of them in a similar way (and also to have similar programming interfaces to them). E.g. logistic regression can be expressed as $\text{logit}(E(Y_i)) := \text{logit}(\pi_i) = \boldsymbol{X}\boldsymbol{\beta}$, with $Y_i \sim \text{Bernoulli}(\pi_i)$; Poisson regression as $\log(E(Y_i)) := \log(\mu_i) = \boldsymbol{X}\boldsymbol{\beta}$, with $Y_i \sim \text{Poisson}(\mu_i)$; and so on.
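The shared pattern $\text{link}(E(Y)) = \boldsymbol{X}\boldsymbol{\beta}$ can be sketched concretely: the same linear predictor is mapped to the mean scale through different inverse links. A minimal illustration (variable names are just for this sketch):

```python
import numpy as np

eta = np.array([-1.0, 0.0, 1.5])          # linear predictor X @ beta

mean_gaussian = eta                        # identity link (linear regression)
mean_bernoulli = 1 / (1 + np.exp(-eta))   # inverse logit (logistic regression)
mean_poisson = np.exp(eta)                 # inverse log (Poisson regression)

# each inverse link respects the range of the corresponding E(Y):
assert np.all((mean_bernoulli > 0) & (mean_bernoulli < 1))   # probabilities
assert np.all(mean_poisson > 0)                              # positive rates
```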
31,600
Why do we write models in the form of E(Y)?
Well, writing $E(Y) = E(XB + e) = XB$ makes explicit the assumption that $E(e) = 0$.
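The sample analogue of this is easy to verify: with an intercept in the design matrix, least squares forces the residuals to average to zero. A quick simulated check (made-up coefficients):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 3.0 + 2.0 * x + rng.normal(size=100)

X = np.column_stack([np.ones_like(x), x])          # design matrix with intercept
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat

assert abs(residuals.mean()) < 1e-10               # sample analogue of E(e) = 0
```

The zero mean follows from the normal equations $X^T(y - X\hat\beta) = 0$: the column of ones makes one of those equations exactly "residuals sum to zero".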