Which variable relative importance method to use?
I prefer to compute the proportion of explainable log-likelihood that is explained by each variable. For OLS models the rms package makes this easy:

    f <- ols(y ~ x1 + x2 + pol(x3, 2) + rcs(x4, 5) + ...)
    plot(anova(f), what='proportion chisq')  # also try what='proportion R2'

The default for plot(anova()) is to display the Wald $\chi^2$ statistic minus its degrees of freedom for assessing the partial effect of each variable. Even though this is not scaled to $[0,1]$, it is probably the best method in general because it penalizes a variable that requires a large number of parameters to achieve its $\chi^2$. For example, a categorical predictor with 5 levels will have 4 d.f., and a continuous predictor modeled as a restricted cubic spline with 5 knots will also have 4 d.f.

If a predictor interacts with any other predictor(s), the $\chi^2$ and partial $R^2$ measures combine the appropriate interaction effects with the main effects. For example, if the model were y ~ pol(age, 2) * sex, the statistic for sex would combine the main effect of sex with the effect modification that sex provides for the age effect. This assesses whether there is a difference between the sexes at any age.

Methods such as random forests, which do not favor additive effects, are not likelihood-based, and use multiple trees, require a different notion of variable importance.
KS-test - how is the p-value calculated?
This is a good question: while the idea is intuitively straightforward (i.e., what is the chance that the Kolmogorov-Smirnov $D$ statistic would be as large as or larger than observed?), the calculation of that probability is not. First let me point out that one should distinguish between two-sided and one-sided tests. I will focus on the two-sided test as it is more commonly used.

OK, let's say that the distance between the two empirical CDFs is $D_{m,n}$, where $m$ and $n$ are the sample sizes concerned. The $p$-value we want is $P(D_{m,n} \geq D_{obs} \mid H_0)$, where $D_{obs}$ is what we observe and $H_0$ is that the two population distributions are identical, i.e. that we have samples from the same population. Clearly the crux is the derivation of the exact null probability distribution of $D_{m,n}$. Here is where Smirnov comes into the story; he proved that [1]

$$\lim_{n,m \rightarrow \infty} P\!\left( \sqrt{\tfrac{mn}{m+n}}\, D_{m,n} \le d \right) = L(d), \qquad L(d) = 1 - 2 \sum_{i=1}^\infty (-1)^{i-1} e^{-2 i^2 d^2},$$

so the asymptotic two-sided $p$-value is $1 - L\!\left(\sqrt{\tfrac{mn}{m+n}}\, D_{obs}\right)$. Most of the time people do not work with $D_{obs}$ directly but with rescalings of it that reflect the influence of $n$ and $m$. It also goes without saying that you are not to evaluate the above infinite series forever; certain implementations sum the first 100 terms, but even then you need to be mindful of numerical-precision issues. There is a really nice, freely accessible paper from the Journal of Statistical Software, Evaluating Kolmogorov's Distribution, on exactly this. In the past one relied more on empirically derived tables (e.g. [2]). I based my answer on the English-language papers mentioned and on the book Nonparametric Statistical Inference by Gibbons and Chakraborti (Chap. 6).

[1]: Smirnov, N. V. (1939), Estimate of deviation between empirical distribution functions in two independent samples (in Russian), Bulletin of Moscow University, 2, 3-16.
[2]: Massey, F. J., Jr. (1951), The Kolmogorov-Smirnov Test for Goodness of Fit, Journal of the American Statistical Association, 46(253), 68-78.
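As a rough sketch of how the limiting series is evaluated (real implementations such as the one in the JSS paper are more careful about precision), the terms decay like $e^{-2i^2 d^2}$, so very few are needed:

```python
import math

def kolmogorov_sf(d, terms=100):
    """Asymptotic two-sided KS p-value: 1 - L(d) = 2 * sum_{i>=1} (-1)^(i-1) exp(-2 i^2 d^2)."""
    if d <= 0:
        return 1.0
    s = 2.0 * sum((-1) ** (i - 1) * math.exp(-2.0 * i * i * d * d)
                  for i in range(1, terms + 1))
    return max(0.0, min(1.0, s))  # clamp tiny floating-point excursions

# The classical 5% critical value of the scaled statistic is about 1.358:
p = kolmogorov_sf(1.358)
```

Here `kolmogorov_sf` is an illustrative name, not an API from any library; it takes the already-scaled statistic $d = \sqrt{mn/(m+n)}\, D_{obs}$.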
What is the relationship between graphical models (such as in the Koller book) and the type of analysis you can do with pyMC?
The way I understand it, PGMs are a broad class including Markov random fields, conditional random fields, and Bayesian networks (to name a few). PyMC works on Bayesian networks, i.e. on models whose network can be represented as a directed acyclic graph (DAG).
syntax for nls model with breakpoint
In general this is such a nasty problem that we shouldn't apply automatic optimizers like nls in R. However, it is easily solvable upon observing that the model is linear, and meets the assumptions of ordinary least squares (OLS) estimation, conditional upon the value of Thresh. Therefore you can search for solutions reliably by systematically varying Thresh across a reasonable range of values. To illustrate, I simulated some data of this form in R using the equivalent model $$Y = \beta_0 + \beta_1 x_1 + \beta_2 I(x_2 \lt \tau) + \beta_3 x_1 I(x_2 \lt \tau) + \varepsilon$$ where $\beta_0, \ldots, \beta_3$ are the coefficients (but not the same as those in the question!), $I$ is the indicator function, $\tau$ is the threshold parameter, $x_1$ represents values of Pcp, $x_2$ represents values of Pcp + Ant, and $\varepsilon$ has a zero-mean Normal distribution of unknown variance $\sigma^2$. Note that the (intercept, slope) parameters of the two lines are $(\beta_0, \beta_1)$ when $x_2\ge \tau$ and $(\beta_0+\beta_2, \beta_1+\beta_3)$ otherwise. An estimate of $\tau$ is a value (which usually will not be unique) that minimizes the sum of squared OLS residuals. This is equivalent to the maximum likelihood solution. To show how challenging the problem of finding this estimate can be, I plotted the sum of squared residuals versus trial values of $\tau$. The left-hand plot in the figure shows an example of this profile. The optimal value of $\tau$ is marked with a vertical red dotted line. Its jagged, locally-constant, non-differentiable pattern makes it almost impossible for nearly any general-purpose optimizer to find the minimum reliably. (It could easily be caught in a local minimum far from the global minimum. I applied optimize to this problem as a check, and in some examples that's exactly what happened to it.) Given this estimate of $\tau$, the model is linear and can be fit via OLS. This fit is shown in the right-hand plot. 
The true model consists of two lines of slopes $\pm 1$ crossing at $(0,1)$, and the true threshold is $-1/3$. In this dataset, $x_1$ and $x_2$ were nearly uncorrelated; a strong correlation between them will make the estimates unreliable. Blue circles and orange squares distinguish the cases $x_2 \lt \tau$ and $x_2 \ge \tau$, respectively. The two subsets of data are separately fit with straight lines. A full maximum likelihood solution can be obtained by replacing the sum of squared residuals with the negative log likelihood and applying standard MLE methods to obtain confidence regions for the parameters and to test hypotheses about them.

    #
    # Create a dataset with a specified model.
    #
    n <- 80
    beta <- c(1, 1, 0, -2)
    threshold <- -1/3
    sigma <- 1
    x1 <- seq(1-n, n-1, 2)/n
    set.seed(17)
    x2 <- rnorm(n)
    i <- x2 < threshold
    x <- cbind(1, x1, i, x1*i)
    y <- x %*% beta + rnorm(n, sd=sigma)
    #
    # Display the SSR profile for the threshold.
    #
    f <- function(threshold) lm(y ~ x1 * I(x2 < threshold))
    z <- seq(-1, 1, 0.5/n) * 2 * sd(x2)             # Search range
    w <- sapply(z, function(z) sum(resid(f(z))^2))  # Sum of squares of residuals
    par(mfrow=c(1,2))
    plot(z, w, lwd=2, type="l", xlab="Threshold", ylab="SSR", main="Profile")
    t.opt <- (z[which.min(w)] + z[length(z) + 1 - which.min(rev(w))])/2
    abline(v=t.opt, lty=3, lwd=3, col="Red")
    #
    # Fit the model to the data.
    #
    fit <- f(t.opt)
    #
    # Report and display the fit.
    #
    summary(fit)
    plot(x1, y, pch=ifelse(x2 < t.opt, 21, 22),
         bg=ifelse(x2 < t.opt, "Blue", "Orange"), main="Data and Fit")
    b <- coef(fit)
    abline(b[c(1,2)], col="Orange", lwd=3, lty=1)
    abline(b[c(1,2)] + b[c(3,4)], col="Blue", lwd=3, lty=1)
Variance reduction technique in Monte Carlo integration
All the theory you need

$Z\sim F$, and you want to estimate $\mathrm{E}[Z]$. Let $\sigma^2=\mathrm{Var}[Z]<\infty$.

Simple Monte Carlo

Construct $X_1,X_2,\dots$ IID with $X_1\sim F$. Define $\bar{X}_n=\frac{1}{n}\sum_{i=1}^n X_i$.

Result: $\mathrm{Var}[\bar{X}_n]=\sigma^2/n$.

Strong Law: $\bar{X}_n\to\mathrm{E}[Z]$ a.s.

Antithetic Variables

Construct $X'_1,X'_2,\dots$ such that, for $i\geq 1$:

1. $X'_i\sim F$;
2. $\mathrm{Cov}[X'_{2i-1},X'_{2i}]<0$;
3. the pairs $(X'_1,X'_2),(X'_3,X'_4),\dots$ are IID.

Define $Y_i=(X'_{2i-1}+X'_{2i})/2$ and $\bar{Y}_n=\frac{1}{n}\sum_{i=1}^n Y_i$.

Result: $$\mathrm{Var}[\bar{Y}_n] =\frac{\sigma^2+\mathrm{Cov}[X'_1,X'_2]}{2n} < \frac{\sigma^2}{2n}<\mathrm{Var}[\bar{X}_n].$$

Strong Law: $\bar{Y}_n\to\mathrm{E}[Z]$ a.s.

Your application

Define $U_1,U_2,\dots$ IID $\mathrm{U}[0,1]$. For $i\geq 1$, define $X'_{2i-1}=(\log(1-U_i))^2$ and $X'_{2i}=(\log(U_i))^2$. Prove properties 1, 2, and 3 above. Remember that $U_i\sim 1-U_i$ and $\mathrm{Cov}[U_i,1-U_i]<0$; also, $x\mapsto (\log x)^2$ is monotonically decreasing for $x\in(0,1]$.

Simulation

    n <- 10^6
    u <- runif(n)
    # simple
    x <- (log(1-u))^2
    mean(x)
    sqrt(var(x)/n)
    # antithetic
    y <- ((log(1-u))^2 + (log(u))^2)/2
    mean(y)
    sqrt(var(y)/n)
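The same simulation can be sketched in Python using only the standard library (the seed and sample size are arbitrary). The target $\mathrm{E}[(\log U)^2] = \int_0^1 (\log u)^2\,du = \Gamma(3) = 2$, so both estimators should land near 2, with the antithetic one having the smaller standard error:

```python
import math
import random

random.seed(17)
n = 100_000

u = [random.random() for _ in range(n)]

# Simple Monte Carlo estimate of E[(log(1-U))^2].
x = [math.log(1.0 - ui) ** 2 for ui in u]
mx = sum(x) / n
se_x = math.sqrt(sum((xi - mx) ** 2 for xi in x) / (n - 1) / n)

# Antithetic pairs: average the estimator at U and at its antithesis 1 - U.
y = [(math.log(1.0 - ui) ** 2 + math.log(ui) ** 2) / 2.0 for ui in u]
my = sum(y) / n
se_y = math.sqrt(sum((yi - my) ** 2 for yi in y) / (n - 1) / n)
```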
P values of coefficients in rlm robust regression
Even if it were correct to use a t-distribution for this calculation, you don't seem to be calculating the p-values correctly. You seem to be calculating this: [formula image not preserved] when you want this: [formula image not preserved]
How to compute the PDF of a sum of Bernoulli and normal variables analytically?
Compute the CDF of $X+N$ using convolution, then differentiate the result. The CDF of $X$ is $$F_X(x) = (1-p)\theta(x) + p\theta(x-1)$$ where $\theta$ is the Heaviside theta function (the indicator function of the nonnegative reals), $$\theta(x) = 1\text{ if }x \ge 0,\ 0\text{ otherwise}.$$ By definition, the CDF of $X+N$ is $$F_{X+N}(y) = \Pr(X+N \le y) = \Pr(X \le y-N) =\mathbb{E}(F_X(y-N)).$$ The last equality computes $F_X(y-N)$ for each possible $N=n$ and integrates over them all, weighting them by their probabilities $f_N(n)dn$. It is a convolution, written as $$\mathbb{E}(F_X(y-N)) = \int_\mathbb{R} F_X(y-n) f_N(n)dn = (F_X\star f_N)(y).$$ Using the expression of $F_X$ in terms of Heaviside functions, linearity of integration breaks this integral into two convolutions of multiples of $\theta$ against $f_N$. But computing such convolutions is trivial, because for any density function $f$ with integral $F$, $$(\theta \star f)(y) = \int_\mathbb{R} \theta(y-x)f(x)dx = \int_{-\infty}^y 1 f(x)dx + \int_{y}^\infty 0 f(x)dx = F(y).$$ It should be apparent that the CDF of $X+N$ is a linear combination of the CDFs of $N$ and $N+1$ (with weights $1-p$ and $p$). Thus differentiation of the CDF to obtain the PDF will give the same linear combination of the PDFs. At this point you could simply write down the answer.
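The result $F_{X+N}(y) = (1-p)F_N(y) + pF_N(y-1)$ can be spot-checked numerically; in the sketch below the values of $p$ and $\sigma$ are illustrative, not taken from the question, and `cdf_sum`/`pdf_sum` are just local helper names:

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def npdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

p, sigma = 0.3, 0.8  # illustrative parameter values

def cdf_sum(y):
    # F_{X+N}(y) = (1-p) F_N(y) + p F_N(y-1), for N ~ Normal(0, sigma^2)
    return (1.0 - p) * phi(y / sigma) + p * phi((y - 1.0) / sigma)

def pdf_sum(y):
    # Differentiating gives the same mixture of the two normal densities.
    return (1.0 - p) * npdf(y / sigma) / sigma + p * npdf((y - 1.0) / sigma) / sigma

# Monte Carlo check of the convolution result at one point.
random.seed(1)
n = 200_000
z = [(1.0 if random.random() < p else 0.0) + random.gauss(0.0, sigma)
     for _ in range(n)]
emp = sum(1 for zi in z if zi <= 0.5) / n  # empirical P(X + N <= 0.5)
```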
How to compute the PDF of a sum of Bernoulli and normal variables analytically?
$X$ is Bernoulli distributed with probability $p$. $N$ has mean zero and variance $\sigma^2$. So, with probability $1-p$, $Z=X+N$ has mean zero and variance $\sigma^2$ and with probability $p$ it has unit mean and variance $\sigma^2$. That looks like a mixture of Gaussians to me.
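The mixture view implies $\mathrm{E}[Z] = p$ and $\mathrm{Var}[Z] = \sigma^2 + p(1-p)$, which a direct numerical integration of the mixture density confirms (parameter values below are illustrative):

```python
import math

p, sigma = 0.25, 0.5  # illustrative parameter values

def mix_pdf(z):
    """Two-component Gaussian mixture: N(0, sigma^2) w.p. 1-p, N(1, sigma^2) w.p. p."""
    g0 = math.exp(-0.5 * (z / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    g1 = math.exp(-0.5 * ((z - 1) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return (1 - p) * g0 + p * g1

# Riemann-sum moments over a wide grid (the density is negligible outside it).
h = 0.001
grid = [-8 + i * h for i in range(int(17 / h) + 1)]
mass = sum(mix_pdf(z) for z in grid) * h
mean = sum(z * mix_pdf(z) for z in grid) * h
var = sum((z - mean) ** 2 * mix_pdf(z) for z in grid) * h
```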
Distribution of the quotient of two gamma random variables with different rate parameters?
If $X \sim Gamma(a,1)$ is independent of $Y \sim Gamma(b,1)$, then the ratio $X/Y$ has the Beta prime distribution with parameters $a$ and $b$. In fact, the result holds if you replace the common value of the rate parameter ($1$ here) by any other value, because the rate parameter has this property: if $X \sim Gamma(a,c)$, then $\lambda \times X \sim Gamma(a, c/\lambda)$ for any $\lambda >0$. Thus, if $X' \sim Gamma(a,c_1)$ is independent of $Y' \sim Gamma(b,c_2)$, then the ratio $X'/Y'$ has the same distribution as $\frac{c_2}{c_1} \times X/Y$, where $X \sim Gamma(a,1)$ is independent of $Y \sim Gamma(b,1)$. Therefore, denoting by $f$ the pdf of the Beta prime distribution, the pdf of $X'/Y'$ is $r \mapsto \frac{c_1}{c_2} f\!\left(\frac{c_1}{c_2} r\right)$. This scaled Beta prime distribution has no dedicated name.
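A quick Monte Carlo check of the scaling (all numeric values below are illustrative): since the Beta prime mean is $a/(b-1)$ for $b>1$, the ratio $X'/Y'$ should have mean $(c_2/c_1)\,a/(b-1)$. Note that Python's `random.gammavariate` is parameterized by shape and *scale*, so a rate $c$ corresponds to scale $1/c$:

```python
import random

random.seed(7)
a, b = 3.0, 5.0    # shape parameters (illustrative)
c1, c2 = 2.0, 0.5  # rate parameters (illustrative)

n = 200_000
# random.gammavariate(shape, scale); scale = 1 / rate.
ratios = [random.gammavariate(a, 1.0 / c1) / random.gammavariate(b, 1.0 / c2)
          for _ in range(n)]
emp_mean = sum(ratios) / n

# E[X'/Y'] = (c2/c1) * a/(b-1) for b > 1.
theory = (c2 / c1) * a / (b - 1.0)
```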
Is it possible to recover original numbers of 2x2 table from odds ratio with given 95% confidence interval?
Short answer: no, different margins can produce the same odds ratio and confidence interval. Some examples follow.

Here is a brief sketch of how to find the minimum possible N for the table. Note that, as per your linked site, the standard error can be related to the cell contents by:

$$\text{SE} = \sqrt{\frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d}}$$

Ignoring the actual odds ratio, this value is minimized when $a = b = c = d$. This then produces the relationship:

$$\text{SE}^2 = \frac{1}{n/4} + \frac{1}{n/4} + \frac{1}{n/4} + \frac{1}{n/4} = \frac{4}{n/4} = \frac{16}{n}$$

So we can estimate the minimum possible N for a table as $16/\text{SE}^2$. In your example, the standard error can be recovered from the 95% confidence interval:

$$[\log(\text{High}) - \log(\text{Low})]/[2 \cdot 1.96]$$

which is just over $0.45$, so the minimum possible N a table can have with that standard error is $16/0.45^2 = 80$ (taking the ceiling of this value).

This logic, though, won't help with finding the maximum. Say one row of the table, $c$ and $d$, has really large values. Then $(1/c + 1/d) \approx 0$, and so we just have:

$$\text{SE}^2 = \frac{1}{a} + \frac{1}{b}$$

Say $c/d = 4/9$; to get an odds ratio of 2.25 we just need $a = b$. For this example $\text{SE}^2 = 2/a$, so $a \approx 10$. So the table below:

                 positive  negative
    exposed            10        10
    not exposed       4e6       9e6

produces an odds ratio of 2.25 and a 95% CI of 0.9365 to 5.4058.

Finally, even if you had the total N for the table, there is a symmetry in the standard error: you can simply swap the row totals and recalculate the cells to obtain the same odds ratio. In many situations this will produce approximately the same standard error. So we could rewrite your original table:

                 positive  negative
    exposed            14        39
    not exposed        11        69

as:

                 positive  negative
    exposed            23        57
    not exposed         8        45

which produces an odds ratio of 2.2697 and a 95% confidence interval of 0.9280 to 5.5516. Not exactly the same, but to a certain extent this identification depends on the amount of rounding in the reporting, so I would be hesitant to rely on it, and with larger row totals it becomes progressively harder to find the exact N. If you know the N, you could always do a grid search, but I do not believe it will always result in a unique solution.

I even forgot the most obvious symmetry: you can simply flip the numbers on the diagonals of the table and obtain the exact same odds ratio and standard error. E.g.

                 positive  negative
    exposed            69        11
    not exposed        39        14

results in the same summary statistics as your original table.
Is it possible to recover original numbers of 2x2 table from odds ratio with given 95% confidence in
Short answer no, different margins can produce the same odd's ratio and confidence interval. Some examples to follow. Here is a brief sketch of how to find the minimum possible N for the table. Note
Is it possible to recover original numbers of 2x2 table from odds ratio with given 95% confidence interval?

Short answer: no, different margins can produce the same odds ratio and confidence interval. Some examples follow. Here is a brief sketch of how to find the minimum possible N for the table. Note that as per your linked site, the standard error can be related to the cell contents by:

$$\text{SE} = \sqrt{\frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d}}$$

Ignoring the actual odds ratio, this value is minimized when $a = b = c = d$. This then produces the relationship:

$$\text{SE}^2 = \frac{1}{n/4} + \frac{1}{n/4} + \frac{1}{n/4} + \frac{1}{n/4} = \frac{4}{n/4} = \frac{16}{n}$$

So subsequently we can estimate the minimum possible N for a table as $16/\text{SE}^2$. In your example, the standard error can be recovered from the 95% confidence interval:

$$\text{SE} = [\log(\text{High}) - \log(\text{Low})]/[2 \cdot 1.96]$$

which is just over $0.45$, and so the minimum possible N a table can have with that standard error is $16/0.45^2 = 80$ (taking the ceiling of this value).

This logic, though, won't help with finding the maximum. Let's say one row of the table, $c$ and $d$, has really large values. Then $(1/c + 1/d) \approx 0$, and so we just have:

$$\text{SE}^2 = \frac{1}{a} + \frac{1}{b}$$

Let's say that $c/d = 4/9$, so to get an odds ratio of 2.25 we just need $a = b$. So for this example $\text{SE}^2 = 2/a$, and so $a \approx 10$. So the table below:

                 positive  negative
    exposed            10        10
    not exposed       4e6       9e6

produces an odds ratio of 2.25 and a 95% CI of 0.9365 to 5.4058.

Finally, even if you had the total N for the table, there is a symmetry in the standard error: you can simply swap the row totals and recalculate the cells to have the same odds ratio. In many situations this will produce approximately the same standard error. So we could rewrite your original table:

                 positive  negative
    exposed            14        39
    not exposed        11        69

as:

                 positive  negative
    exposed            23        57
    not exposed         8        45

which produces an odds ratio of 2.2697 and a 95% confidence interval of 0.9280 to 5.5516. Not exactly the same, but to a certain extent this identification is dependent on the amount of rounding in the reporting, so I would be hesitant to rely on it, and with larger row totals it will be progressively harder to find the exact N. If you know the N, you could always do a grid search, but I do not believe it will always result in a unique solution.

I've even forgotten the most obvious symmetry: you can simply flip the numbers on the diagonals of the table and still obtain the exact same odds ratio and standard error. E.g.

                 positive  negative
    exposed            69        11
    not exposed        39        14

results in the same summary statistics as your original table.
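The minimum-N arithmetic above is easy to check numerically. Here is a short sketch (in Python for illustration, although the other examples in this collection use R) that recovers the odds ratio, the standard error of the log odds ratio, the 95% CI, and the implied minimum N from the original table:

```python
import math

# Original 2x2 table: rows = exposed / not exposed, columns = positive / negative
a, b, c, d = 14, 39, 11, 69

odds_ratio = (a * d) / (b * c)          # about 2.25
se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), just over 0.45

# 95% CI on the odds-ratio scale
low = math.exp(math.log(odds_ratio) - 1.96 * se)
high = math.exp(math.log(odds_ratio) + 1.96 * se)

# SE^2 is minimized when a = b = c = d = n/4, giving SE^2 = 16/n,
# so the smallest total N consistent with this SE is ceil(16 / SE^2).
min_n = math.ceil(16 / se**2)

print(round(odds_ratio, 2), round(se, 3), round(low, 2), round(high, 2), min_n)
```

Running this recovers the values quoted above: an odds ratio of about 2.25, a standard error just over 0.45, and a minimum possible N of 80.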
46,711
How can I get slope and standard error at several levels of a continuous by continuous interaction in R?
In order to examine simple slopes at different levels of one of the continuous variables, you can simply center the other continuous variable to focus on the slope of interest. In a model with a continuous by continuous interaction, like so:

$$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + \beta_3x_1x_2$$

the two single-predictor coefficients ($\beta_1$ and $\beta_2$) are simple slopes for the predictor when the other predictor (however it is centered) is equal to 0. So, if I run your practice code above, I get the following output:

    Call:
    lm(formula = y1 ~ x1 * x2)

    Residuals:
         Min       1Q   Median       3Q      Max 
    -281.996  -70.148   -3.702   70.190  209.182 

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
    (Intercept)  17.7519    10.8121   1.642    0.104    
    x1            1.4175     1.0151   1.397    0.166    
    x2            0.8222     1.0614   0.775    0.440    
    x1:x2         0.8911     0.1295   6.882 6.04e-10 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 100.6 on 96 degrees of freedom
    Multiple R-squared:  0.4283, Adjusted R-squared:  0.4105 
    F-statistic: 23.98 on 3 and 96 DF,  p-value: 1.15e-11

The x1 output gives us the test of the x1 slope at x2 = 0. Thus we get a slope, standard error, and (as a bonus) the test of that parameter estimate compared to 0.

If we wanted to get the simple slope of x1 (and standard error and significance test) when x2 = 6, we simply use a linear transformation to make a value of 6 on x2 the 0 point:

    x2.6 <- x2 - 6

By viewing summary stats, we can see that this is the exact same variable as before, but it has been shifted down on the number line by 6 units:

    > summary(x2)
        Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
    -31.0400  -5.9520   1.3430   0.8396   8.0090  22.3800 
    > summary(x2.6)
       Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
    -37.040 -11.950  -4.657  -5.160   2.009  16.380 

Now, if we re-run the same model but substitute our newly centered variable x2.6 for x2, we get this:

    model1.6 <- lm(y1 ~ x1 * x2.6)
    summary(model1.6)

    Call:
    lm(formula = y1 ~ x1 * x2.6)

    Residuals:
         Min       1Q   Median       3Q      Max 
    -281.996  -70.148   -3.702   70.190  209.182 

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
    (Intercept)  22.6853    12.6384   1.795   0.0758 .  
    x1            6.7639     1.2346   5.479 3.44e-07 ***
    x2.6          0.8222     1.0614   0.775   0.4404    
    x1:x2.6       0.8911     0.1295   6.882 6.04e-10 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 100.6 on 96 degrees of freedom
    Multiple R-squared:  0.4283, Adjusted R-squared:  0.4105 
    F-statistic: 23.98 on 3 and 96 DF,  p-value: 1.15e-11

If we compare this output to the old output, we can see that the omnibus F is still 23.98, the interaction t is still 6.882, and the slope for x2.6 is still .822 (and nonsignificant). However, our coefficient for x1 is now much larger and significant. This slope is now the simple slope of x1 when x2 is equal to 6 (or when x2.6 = 0).

By centering around several different values, we can test several different simple effects (and obtain slopes and standard errors) without that much work. By using a (dreaded in the R community) for loop to iterate through the list, we can test several different simple effects quite efficiently:

    centeringValues <- c(1, 2, 3, 4, 5, 6)   # Vector of values to center around
    for (i in 1:length(centeringValues)) {   # Iterate through the list
      x <- x2 - centeringValues[i]           # Predictor centered at the current value
      print(paste0('x.', centeringValues[i]))  # Print the centering value so you can keep track of output
      print(summary(lm(y1 ~ x1 * x))[4])     # Print coefficients for the model with the centered variable
    }

(Note that the centered variable is computed from centeringValues[i] rather than the loop index i, so the vector need not be the values 1 through 6.) This code first creates a vector of values you want to become the 0 point for the variable you do not want the slope for (in this example, x2).
Next, create a for loop that iterates through the positions in this list (i.e. if the list has 3 items, the for loop will iterate through the values 1 to 3). Next, create a new variable that is the centered version of the variable for which you do not want simple slopes (in this case we are interested in simple slopes for x1, so we center x2). Finally, print the coefficients from the model that includes your newly centered variable in place of the raw variable. This results in the following output:

    [1] "x.1"
    $coefficients
                  Estimate Std. Error   t value     Pr(>|t|)
    (Intercept) 18.5741364 10.8815154 1.7069439 9.106513e-02
    x1           2.3085985  1.0143100 2.2760286 2.506664e-02
    x            0.8222252  1.0613590 0.7746909 4.404262e-01
    x1:x         0.8910530  0.1294695 6.8823366 6.041102e-10

    [1] "x.2"
    $coefficients
                  Estimate Std. Error   t value     Pr(>|t|)
    (Intercept) 19.3963616 11.0528627 1.7548722 8.247158e-02
    x1           3.1996515  1.0299723 3.1065415 2.489385e-03
    x            0.8222252  1.0613590 0.7746909 4.404262e-01
    x1:x         0.8910530  0.1294695 6.8823366 6.041102e-10

    [1] "x.3"
    $coefficients
                  Estimate Std. Error   t value     Pr(>|t|)
    (Intercept) 20.2185867 11.3215341 1.7858522 7.728065e-02
    x1           4.0907045  1.0613132 3.8543802 2.096928e-04
    x            0.8222252  1.0613590 0.7746909 4.404262e-01
    x1:x         0.8910530  0.1294695 6.8823366 6.041102e-10

    [1] "x.4"
    $coefficients
                  Estimate Std. Error   t value     Pr(>|t|)
    (Intercept) 21.0408119 11.6808159 1.8013135 7.479290e-02
    x1           4.9817575  1.1070019 4.5002249 1.905339e-05
    x            0.8222252  1.0613590 0.7746909 4.404262e-01
    x1:x         0.8910530  0.1294695 6.8823366 6.041102e-10

    [1] "x.5"
    $coefficients
                  Estimate Std. Error   t value     Pr(>|t|)
    (Intercept) 21.8630371 12.1226545 1.8034859 7.444873e-02
    x1           5.8728105  1.1653521 5.0395160 2.193149e-06
    x            0.8222252  1.0613590 0.7746909 4.404262e-01
    x1:x         0.8910530  0.1294695 6.8823366 6.041102e-10

    [1] "x.6"
    $coefficients
                  Estimate Std. Error   t value     Pr(>|t|)
    (Intercept) 22.6852623 12.6383944 1.7949481 7.580894e-02
    x1           6.7638636  1.2345698 5.4787212 3.439867e-07
    x            0.8222252  1.0613590 0.7746909 4.404262e-01
    x1:x         0.8910530  0.1294695 6.8823366 6.041102e-10

Here you can see the output provides the coefficients for several tests, but the only things that change each time are the slope for x1 (and the intercept). The slope for x1 in each output represents the slope for x1 when x2 is equal to whatever centering value we have assigned for that iteration. Hope this helps!
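The centering trick works because recentering is just a linear reparameterization: the interaction coefficient is unchanged, and the new x1 coefficient equals β1 + β3·c. A minimal numerical sketch of that identity (in Python with NumPy for illustration; the simulated data are made up here and do not reproduce the R output above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(0, 5, n)
x2 = rng.normal(0, 10, n)
y1 = 2 * x1 + 1 * x2 + 0.9 * x1 * x2 + rng.normal(0, 100, n)

def fit(c):
    """OLS of y1 on x1, (x2 - c), and their interaction; returns the coefficients."""
    z = x2 - c
    X = np.column_stack([np.ones(n), x1, z, x1 * z])
    beta, *_ = np.linalg.lstsq(X, y1, rcond=None)
    return beta  # (intercept, x1 slope, centered-x2 slope, interaction)

b = fit(0)   # raw fit
bc = fit(6)  # fit with x2 centered at 6

# The interaction coefficient is identical; the centered x1 coefficient
# is the simple slope of x1 at x2 = 6, i.e. b1 + 6 * b3.
print(bc[3] - b[3], bc[1] - (b[1] + 6 * b[3]))
```

Both printed differences are zero up to floating-point noise, which is exactly why refitting with a shifted x2 reads off the simple slope directly.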
46,712
How can I get slope and standard error at several levels of a continuous by continuous interaction in R?
While @wools' answer appears more than adequate, here is another alternative that allows the calculation of the marginal effect of x1 given x2 from a single model output, without centering the x variables.

According to http://statistics.ats.ucla.edu/stat/r/faq/concon.htm, where the model is

$$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + \beta_3x_1x_2$$

the slope for x1 at a given value of x2 is $\beta_1 + \beta_3x_2$. So I can choose a few values of x2 as:

    at.x2 <- c(-6, 1, 6)
    slopes <- coef(model1)["x1"] + coef(model1)["x1:x2"] * at.x2

According to How to calculate the standard error of the marginal effects in interactions (robust regression)?, the standard error for the slopes is

$$\text{SE} = \sqrt{\text{var}(b_1) + x_2^2\,\text{var}(b_3) + 2x_2\,\text{cov}(b_1, b_3)}$$

    estvar <- vcov(model1)
    model1.vcov <- as.data.frame(as.matrix(estvar))
    var.b1 <- model1.vcov["x1", "x1"]
    var.b3 <- model1.vcov["x1:x2", "x1:x2"]
    cov.b1.b3 <- model1.vcov["x1", "x1:x2"]

    SEs <- rep(NA, length(at.x2))
    for (i in 1:length(at.x2)) {
      j <- at.x2[i]
      SEs[i] <- sqrt(var.b1 + var.b3 * j^2 + 2 * j * cov.b1.b3)
    }
    cbind(SEs, slopes)
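This delta-method standard error and the recentering approach agree exactly: recentering x2 at c is a linear reparameterization, so the SE of the recentered x1 coefficient equals sqrt(var(b1) + c² var(b3) + 2c cov(b1, b3)). A sketch checking that equivalence on simulated data (Python/NumPy for illustration; the data are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1 + 2 * x1 + 3 * x2 + 0.5 * x1 * x2 + rng.normal(size=n)

def ols(X, y):
    """OLS coefficients and their estimated covariance matrix."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    return beta, s2 * np.linalg.inv(X.T @ X)

X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta, V = ols(X, y)

c = 6.0  # value of x2 at which to evaluate the simple slope of x1
se_delta = np.sqrt(V[1, 1] + c**2 * V[3, 3] + 2 * c * V[1, 3])

# Same SE via recentering x2 at c and reading off the x1 coefficient's SE
Xc = np.column_stack([np.ones(n), x1, x2 - c, x1 * (x2 - c)])
_, Vc = ols(Xc, y)
se_center = np.sqrt(Vc[1, 1])

print(se_delta, se_center)
```

The two printed standard errors match to machine precision, so either route gives the same answer.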
46,713
Confusion in Gibbs sampling
Since I'm not sure where you are stuck, I'll try multiple shots.

Explanation 1: The thing is that you only need the form of the unnormalized posterior, and that is why it's enough if you can get:

$$ p(\theta_1 | \theta_2, D) \propto p(\theta_1, \theta_2 | D) $$

The normalizing constant is not interesting; working up to proportionality is very common in Bayesian statistics. With Gibbs sampling, Metropolis-Hastings or any other Monte Carlo method, what you are doing is drawing samples from this posterior. That is, the more density around a point, the more samples you'll get there. Then, once you have enough samples from this posterior distribution, you know that the normalized density at some point $x$ is the proportion of samples that fell at that point. You can even plot a histogram of the samples to see the (unnormalized) posterior. In other words, if I give you the samples $1,3,4,5,1,\ldots,3,4,16,1$ and I tell you these are samples from a density function, you know how to compute the probability of every value.

Explanation 2: If you observe the analytical form of your unnormalized posterior (you always know it [1]), two things can happen:

a) It has the shape of some known distribution (e.g. Gaussian): then you can get the normalized posterior, since you know the normalizing constant of a Gaussian distribution.

b) It has an ugly form that corresponds to no familiar distribution: then you can always sample with Metropolis-Hastings (there are others).

b.1) M-H is not the most efficient of methods (you reject a lot of samples, usually more than 2/3). If the posterior is ugly but the conditionals of the individual variables are pretty (known distributions), then you can do Gibbs sampling by sampling one single variable at a time.

Explanation 3: If you use conjugate priors for the individual variables, the denominator of their conditional probability will always be nice and familiar, and you will know what the normalizing constant in the denominator is. This is why Gibbs sampling is so popular when the joint probability is ugly but the conditional probabilities are nice.

Maybe this thread, and especially the answer with puppies, helps you: Why Normalizing Factor is Required in Bayes Theorem?

[1] Edit: not true, see @Xi'an's comment.

Update (example)

Imagine you have:

$$ P(\theta_1, \theta_2 | D) = \frac{ p(D, \theta_1, \theta_2)} {\int p(D, \theta_1, \theta_2)\, \text{d}\theta_1\, \text{d}\theta_2} \propto p(D, \theta_1, \theta_2) $$

If the joint probability is complicated, then you can't know the normalization constant. Sometimes, if it does not contain things like large $\sum$ or $\prod$ that would make it painful to compute, you can even plot the posterior. In this case you would have some 2-D plot with axes $\theta_1$ and $\theta_2$. Yet, your plot is right only up to a missing constant. Sampling algorithms say: "OK, I don't know what the normalization factor is, but if I draw samples from this function in such a way that, whenever $p(D, \theta_1=x_1, \theta_2=x_2)$ is two times $p(D, \theta_1=x_3, \theta_2=x_4)$, I get the sample $(x_1, x_2)$ twice as often as $(x_3, x_4)$, then the normalization constant doesn't matter."

Gibbs sampling does this by sampling every variable separately. Imagine $\theta_1$ is a mean $\mu$ and that its conditional probability is (forget about the $\sigma$'s, imagine we know them):

$$ p(\mu | D) = \frac{ \mathcal{N}(D | \mu, \sigma_d)\, \mathcal{N}(\mu, \sigma) } {\int \mathcal{N}(D | \mu, \sigma_d)\, \mathcal{N}(\mu, \sigma)\, \text{d}\mu} $$

The product of two normals is another normal with new parameters (see conjugate priors, and keep that table always at hand; even memorize the ones you end up using the most). You do the multiplication, you drop everything that does not depend on $\mu$ into a constant $K$, and you get something that you can express as:

$$ p(\mu | D) = K \exp\left(-\frac{1}{a}(\mu - b)^2\right) $$

It has the functional form of a Gaussian. Therefore, since you know it is a density, $K$ must be the normalizing factor of $\mathcal{N}(b, a)$. Thus, your posterior is a Gaussian distribution with posterior parameters $(b, a)$.

The short version is that if the product of the prior and the likelihood has the functional form of a familiar distribution (it actually has the form of the prior if you chose conjugates), then you know how to integrate it. For instance, the integral of the $\exp(\ldots)$ element of a normal distribution — that is, a normal without its normalizing factor — is the inverse of its normalizing factor.
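The normal–normal "recognize the Gaussian shape" argument above can be checked directly: multiply the likelihood and the prior, ignore the constant, and the normalized result has exactly the mean the conjugate-update formulas give. A small sketch (Python; the specific prior and data values are made up for illustration) comparing the closed-form posterior mean with brute-force normalization on a grid:

```python
import math

m0, s0 = 0.0, 2.0   # prior: mu ~ N(m0, s0^2)
x, sd = 3.0, 1.0    # one observation: x ~ N(mu, sd^2)

# Conjugate update: precisions add; the posterior mean is precision-weighted.
post_prec = 1 / s0**2 + 1 / sd**2
post_mean = (m0 / s0**2 + x / sd**2) / post_prec   # = 2.4 for these values

# Brute force: evaluate the UNnormalized posterior on a grid and normalize —
# exactly the "drop everything into a constant K" step described above.
def unnorm(mu):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2 - 0.5 * ((mu - m0) / s0) ** 2)

grid = [-10 + i / 1000 for i in range(20001)]
w = [unnorm(mu) for mu in grid]
grid_mean = sum(mu * wi for mu, wi in zip(grid, w)) / sum(w)

print(post_mean, grid_mean)
```

The two means agree, illustrating that knowing the posterior only up to a constant is enough to recover its normalized properties.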
46,714
Can the difference between the means of two groups lie outside the confidence interval for the difference?
It is possible for a confidence interval of a mean not to include the sample mean. It is not part of the definition of a CI that it must always cover the sample mean. Thus one may, in theory, construct a CI procedure that never covers the sample mean. But most people would consider that a bad procedure. Let us therefore examine CI procedures that not only have been seriously proposed, but have been studied and found to be good.

One of them is the "generalized lognormal confidence interval" procedure explained and studied (via simulation) by Ulf Olsson in the Journal of Statistics Education (Volume 13, Number 1 (2005), http://www.amstat.org/publications/jse/v13n1/olsson.html). This procedure is a reasonable one to use when the (natural) logarithms of the $n$ data are assumed to be independent and identically distributed with a Normal distribution. Recall that when the population mean log is $\mu$ and the population standard deviation of the logarithms is $\sigma$, then the population mean is $\exp(\mu + \sigma^2/2)$. (This relationship uses the lognormal assumption.) We will obtain confidence limits for $\mu+\sigma^2/2$; exponentiating them will give confidence limits for the population mean.

The procedure is based on a "generalized confidence interval" known to produce good confidence intervals for complicated combinations of parameters like this one. Calculate the mean logarithm $\bar y$ and the sample variance of the logarithms $s^2$ from the data. A symmetric confidence interval of size $\alpha$ for $\mu + \sigma^2/2$ is found by identifying the middle $100 - 100\alpha\%$ of the distribution of

$$T_{2} = \bar y - Z \sqrt{A^2/n} + A^2/2$$

where $Z$ and $A^2$ are independent variates, $Z$ has a standard Normal distribution,

$$A^2 = \frac{s^2}{U^2 / (n-1)},$$

and $U^2$ has a chi-squared distribution with $n-1$ degrees of freedom. Because the distribution of $T_2$ is difficult to work with analytically, we may estimate it through simulation. When exponentiated, the $\alpha/2$ and $1-\alpha/2$ quantiles of $T_2$ are the lower and upper confidence limits for $\exp(\mu+\sigma^2/2)$.

Olsson's work indicates that once $n \ge 20$ or so, this procedure tends to achieve its nominal characteristics for $\alpha=0.05$. That is, about $2.5\%$ of the time it is less than $\mu+\sigma^2/2$ and $2.5\%$ of the time it is greater than $\mu+\sigma^2/2$.

One does not have to look hard to find datasets that (a) appear to meet the assumptions of this test yet for which (b) the confidence interval does not include the sample mean. Here is one with $n=50$:

    0.08 0.14 0.21 0.25 0.28 0.3  0.35 0.37 0.39 0.41
    0.46 0.51 0.55 0.55 0.66 0.66 0.69 0.71 0.74 0.74
    0.77 0.81 0.85 1.04 1.09 1.1  1.17 1.18 1.19 1.25
    1.29 1.38 1.54 1.62 1.62 1.68 1.74 1.87 2.11 2.29
    2.37 2.42 2.93 2.99 4.8  5.12 5.94 7.09 11.26 120

The generalized CI of $(1.56, 3.95)$ does not include the sample mean of $4.03$. (This was computed using ten million simulated values of the distribution of $T_2$, so it should be pretty accurate. Twenty independent simulations using just a million simulated values never produced an upper limit larger than $4.02$, still below the sample mean.)

Although the last few data values ($11.26, 120$) may look like outliers, their logarithms are not. Here is a histogram of their logs:

    [histogram of log(x) omitted]

OK, that final value of $\log(120)$ looks a wee bit high. But the (very) powerful Shapiro-Wilk test does not strongly reject the Normal hypothesis ($p = 0.012$). This provides some insight: lognormal (and other heavy-tailed) distributions frequently produce unusually large values by their very nature. Such values can strongly influence the sample mean but should have less influence on estimates of the underlying distributional properties. We shouldn't find anything paradoxical about this.
(Although this example concerns a single group, it could be generalized to compare the difference of means between two groups, at some cost in complexity of the calculations. Nothing really changes, though: we may think a CI must include the sample mean only when we become so accustomed to using Normal-theory calculations that we come to believe, through sheer repetition, that all CIs must share their properties.)

The following R code will reproduce these calculations and allow you to explore the properties of the generalized lognormal confidence interval. In particular, if you are wondering whether the failure of this CI sometimes to cover the sample mean might be due to an error in coding (a possibility that always worries me!), or if it isn't really a CI for the population mean in the first place, you can reproduce a part of Olsson's work by simulating the coverage of this CI, as in

    set.seed(17)
    x.mean <- exp(mu + sigma^2/2)  # The true lognormal (population) mean
    sim <- replicate(1e3, {ci.lognormal(exp(rnorm(n, mu, sigma)))})
    mean(sim[1, ] <= x.mean & sim[2, ] >= x.mean)  # Fraction of times covering the true mean

The output of $0.949$ shows that this nominal $95\%$ interval has covered the true mean $94.9\%$ of the time, which is excellent. (I chose this particular CI procedure specifically because it is so good.) By contrast, you could check how often this interval covers (say) the geometric mean:

    mean(sim[1, ] <= exp(mu) & sim[2, ] >= exp(mu))  # Fraction of times covering the GM

The output of $0.899$ confirms it is not a $95\%$ confidence interval for the geometric mean.

Here is the full code (which you will need to compile before running the preceding lines).

    #
    # Generalized confidence intervals for lognormal means.
    #
    ci.generalized <- function(y, alpha=0.05, n.boot=1e4) {
      n <- length(y); m <- mean(y); s2 <- var(y)
      z <- rnorm(n.boot)
      u2 <- rchisq(n.boot, n-1)
      a2 <- s2 / (u2 / (n-1))
      sim <- m - z * sqrt(a2 / n) + a2 / 2          # Simulated distribution of T^2
      return(quantile(sim, c(alpha/2, 1-alpha/2)))  # CI for mu + sigma^2/2
    }
    ci.lognormal <- function(x, ...) exp(ci.generalized(log(x), ...))
    #
    # Experiment with simulated data.
    #
    n <- 50
    mu <- 0       # Mean log (the actual value is irrelevant)
    sigma <- 1/5  # SD of logs (affects the shape of `x`)
    set.seed(968)                 # Reproduces the example in the text
    y <- rnorm(n, mu, sigma)
    y <- (y - mean(y)) / sd(y)    # (Make sure the logs start out looking very Normal)
    x <- round(exp(y), 2)         # Model a limited-precision data collection process
    x[1] <- 120                   # Tweak the data to give them a large sample mean
    #
    # Display the data and look at the CI of the mean.
    #
    hist(log(x))
    ci <- ci.lognormal(x, alpha=0.05, n.boot=1e7)   # Takes a few seconds
    c(p.value=shapiro.test(log(x))$p.value, ci.lognormal(x), sample.mean=mean(x))
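If the role of $\exp(\mu + \sigma^2/2)$ above seems surprising, it is easy to confirm by simulation that this — and not $\exp(\mu)$ — is the lognormal population mean. A quick sketch (Python standard library only, for illustration; $\mu = 0$ and $\sigma = 1/5$ match the example above):

```python
import math
import random

random.seed(17)
mu, sigma = 0.0, 1 / 5
n = 200_000

# Lognormal draws: exponentiate Normal(mu, sigma) variates
sample_mean = sum(math.exp(random.gauss(mu, sigma)) for _ in range(n)) / n

print(sample_mean)                   # close to exp(mu + sigma^2/2), about 1.0202
print(math.exp(mu + sigma**2 / 2))   # the lognormal population mean
print(math.exp(mu))                  # the geometric mean, exactly 1 — smaller
```

The sample mean tracks $\exp(\mu + \sigma^2/2)$ rather than $\exp(\mu)$, which is why the CI procedure targets $\mu + \sigma^2/2$ on the log scale.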
46,715
Can the difference between the means of two groups lie outside the confidence interval for the difference?
Presumably, the confidence interval you are looking at is of the form $\bar{X}_1 - \bar{X}_2 \pm M$, where $M$ is some margin-of-error measure. Such an interval is centered at $\bar{X}_1 - \bar{X}_2$, so of course it includes it.
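The point is pure algebra, and a short Python sketch makes it concrete (the two samples are made-up numbers; the critical value 2.0 is a rough stand-in for the exact t-quantile, which is irrelevant to the argument — any $M > 0$ works):

```python
import statistics

# Two illustrative samples (made-up numbers; only the algebra matters).
g1 = [5.1, 4.9, 6.2, 5.5, 5.8, 4.7]
g2 = [4.0, 3.8, 4.5, 4.2, 3.9, 4.4]

diff = statistics.mean(g1) - statistics.mean(g2)

# Margin of error M = t* sqrt(s1^2/n1 + s2^2/n2), with a rough t* = 2.0.
se = (statistics.variance(g1) / len(g1)
      + statistics.variance(g2) / len(g2)) ** 0.5
M = 2.0 * se
lower, upper = diff - M, diff + M

# The interval is centered at diff, so it must contain diff.
assert lower < diff < upper
```

Any interval of the form "point estimate plus or minus a positive margin" contains its own point estimate by construction; the interesting counterexamples (like the lognormal one above) come from CI procedures that are not of this symmetric form.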
46,716
Proper variable selection: Use only training data or full data?
The distinction here is between how to produce the final model for operational use and how to estimate the generalisation performance of that model. If we are to get an unbiased performance estimate, we must use a sample of data that has not been used to tune any aspect of the model, which includes any feature selection, hyper-parameter tuning, or model selection steps. Thus when estimating the performance of the model, we need to make all of these choices using only the training data, so that the validation/test data remains "statistically pure". However, we want the best possible model to use in operation, so once we have settled on a procedure to build the model, we rebuild it using the entire dataset so that we have the advantage of using a bit more data (which means the model parameters will be estimated a bit better). This usually means that the performance estimate is a little pessimistic, as it is really an estimate of the performance of a model trained on a sample of data as large as the training set, rather than of the full dataset. However, it is generally better to have a pessimistic estimate of performance than an optimistic one, which is what you would have if you used the test/validation data for feature, hyper-parameter or model selection. Essentially, in performance estimation we are estimating the performance of the method for producing the final model, rather than the performance of the model itself.
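This recipe can be sketched in plain NumPy (a toy sketch, not any particular package: the top-k correlation filter and the OLS fit are arbitrary stand-ins for whatever selection and fitting steps you actually use). Selection is repeated inside each cross-validation fold to get an honest estimate of the whole procedure, and then the identical procedure is re-run once on all the data to produce the operational model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 10 features, but only the first two actually predict y.
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

def select_and_fit(X_tr, y_tr, k=2):
    """Feature selection (top-k |correlation|) plus an OLS fit, using ONLY
    the data passed in -- never the held-out fold."""
    corr = np.array([abs(np.corrcoef(X_tr[:, j], y_tr)[0, 1])
                     for j in range(X_tr.shape[1])])
    keep = np.sort(np.argsort(corr)[-k:])
    beta, *_ = np.linalg.lstsq(X_tr[:, keep], y_tr, rcond=None)
    return keep, beta

# Honest estimate: selection + fitting repeated inside each training fold.
folds = np.array_split(rng.permutation(n), 5)
fold_mse = []
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    keep, beta = select_and_fit(X[train_idx], y[train_idx])
    pred = X[test_idx][:, keep] @ beta
    fold_mse.append(np.mean((y[test_idx] - pred) ** 2))
cv_mse = float(np.mean(fold_mse))   # estimates the PROCEDURE's performance

# Operational model: the very same procedure, re-run on ALL the data.
keep_final, beta_final = select_and_fit(X, y)
```

Note that `cv_mse` describes `select_and_fit` as a procedure, not the final `(keep_final, beta_final)` pair, which is exactly the distinction made above.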
46,717
Proper variable selection: Use only training data or full data?
Do everything on the training data. (See edit.) During model development, act like the test data do not exist. Consider how machine learning is used for products like Siri's speech recognition. The goal is to make a prediction about speech that Siri hasn't heard. In fact, that bit of sound has not even occurred. Engineers couldn't possibly include such data into their model development steps, yet the expectation is that the model will have some level of performance on data Siri has never encountered. Having a test set is a simulation of this where you hide the data from the model being developed. Edit: As whuber pointed out, this is for figuring out what kind of model you want to use. Once you decide that you’re using model X for the production version, then your entire data set would be used as you’ve made the decision that it is reliable enough to make decisions where you do not know the correct answer. Those new observations start to function as your out-of-sample data. After all, if you had some stock price predictor do great on cross validation but suddenly start losing tons of money when it got used for real, you’d go back to tweaking your model.
46,718
What does it mean to say that "a topic is a distribution on words"?
Typically, in the context of Latent Dirichlet Allocation (used for Topic Modeling), we assume that the documents come from a generative process. I'll avoid math notation. Look at this figure: (1) Every topic is generated from a Dirichlet distribution of $V$ dimensions, where $V$ is the size of your vocabulary. (2) For every document: (2.1) Generate a distribution over topics from a Dirichlet distribution of $T$ dimensions, where $T$ is the number of topics in the corpus. (2.2) For every word in the document: (2.2.1) Choose a topic according to the distribution generated at (2.1). (2.2.2) Choose a word according to the distribution corresponding to the chosen topic (generated at (1)). The rigorous mathematical explanation is here (section 3). So, each topic is a probability distribution over the words of the vocabulary (1), because it gives the probability, in that topic, of a word such as "dog" appearing. And each document has a probability distribution over topics (2.1), which says from which topics the document is more likely to draw its words. We say that a document is a mixture of topics. Note: A Dirichlet distribution of three dimensions draws things like [0.2,0.4,0.4], [0.3,0.3,0.4], etc., which can be used as Categorical distributions. This is why it is used to generate distributions over $V$ words (topics) and distributions over $T$ topics. See the left and right sides of the figure.
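The generative process described above can be sketched in a few lines of NumPy (a toy sketch: the vocabulary size, topic count, document lengths, and the Dirichlet concentration 0.5 are all arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(0)

V, T = 8, 3                  # vocabulary size, number of topics
n_docs, doc_len = 5, 20

# (1) Each topic is a distribution over the V words.
topics = rng.dirichlet(alpha=np.full(V, 0.5), size=T)   # shape (T, V)

corpus = []
for _ in range(n_docs):
    # (2.1) Each document gets its own distribution over topics.
    theta = rng.dirichlet(alpha=np.full(T, 0.5))        # shape (T,)
    doc = []
    for _ in range(doc_len):
        z = rng.choice(T, p=theta)       # (2.2.1) pick a topic
        w = rng.choice(V, p=topics[z])   # (2.2.2) pick a word from it
        doc.append(int(w))
    corpus.append(doc)

# Each row of `topics` sums to 1: a topic really is a distribution on words.
assert np.allclose(topics.sum(axis=1), 1.0)
```

Here each row of `topics` is one "topic as a distribution on words", and each document's `theta` is its "mixture of topics".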
46,719
Relationship between Poisson generation and generalized Kullback-Leibler divergence
I worked it out in the end and I'll link to it here, in case someone else is interested. I wrote it up here http://building-babylon.net/2015/02/17/maximum-likelihood-estimation-for-non-negative-matrix-factorisation-and-the-generalised-kullback-leibler-divergence/
46,720
Relationship between Poisson generation and generalized Kullback-Leibler divergence
We want to prove that \begin{align} argmin_{W,H} \qquad D_{KL}(\boldsymbol{V} | \boldsymbol {WH}) \quad = \quad argmax_{W,H} \qquad p(\boldsymbol{V} | \boldsymbol{W}\boldsymbol{H}) \end{align} under a Poisson distribution. The KL divergence (actually the I-divergence) is defined as \begin{align} D_{KL}(\boldsymbol{V} | \boldsymbol{W}\boldsymbol{H}) = \sum_f \sum_n \left[ v_{fn} \ln \frac{v_{fn}}{\sum_k w_{fk}h_{kn}} + \sum_k w_{fk}h_{kn} - v_{fn} \right] \end{align} And the likelihood can be expressed in terms of the KL divergence: \begin{align} \ln p(\boldsymbol{V} | \boldsymbol{W}\boldsymbol{H}) &= \sum_{f,n} \ln \left[ \exp\left\lbrace -\sum_k w_{fk}h_{kn}\right\rbrace \frac{(\sum_k w_{fk}h_{kn})^{v_{fn}}}{v_{fn}!} \right] \\ &= \sum_{f,n} \left[ {v_{fn}} \ln\sum_k w_{fk}h_{kn} -\sum_k w_{fk}h_{kn} - \ln v_{fn}! \right] \\ &= \sum_{f,n} \left[ {v_{fn}} \ln\frac{\sum_k w_{fk}h_{kn}}{v_{fn}} -\sum_k w_{fk}h_{kn} - \ln v_{fn}! +v_{fn}\ln v_{fn} \right] \\ &= -D_{KL}(\boldsymbol{V} | \boldsymbol{W}\boldsymbol{H}) + \sum_{f,n} \left[ v_{fn}\ln v_{fn} - \ln v_{fn}! - v_{fn} \right] \end{align} The remaining sum does not depend on $\boldsymbol{W}$ or $\boldsymbol{H}$, so the likelihood and the negative KL divergence have the same optimum with respect to $\boldsymbol{W}, \boldsymbol{H}$.
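The identity is also easy to check numerically: for a fixed $\boldsymbol{V}$, the Poisson log-likelihood plus the I-divergence is a constant that does not involve $\boldsymbol{W}$ or $\boldsymbol{H}$. A quick NumPy sketch (random nonnegative factors stand in for actual NMF iterates):

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)
F, N, K = 4, 5, 3
V = rng.poisson(lam=5.0, size=(F, N)).astype(float)

def poisson_loglik(V, M):
    # ln p(V | M) = sum_{f,n} [ v ln m - m - ln v! ]
    log_fact = np.vectorize(lgamma)(V + 1)     # ln(v!)
    return float(np.sum(V * np.log(M) - M - log_fact))

def i_divergence(V, M):
    # Generalized KL (I-)divergence, with 0*ln(0) taken as 0.
    ratio = np.where(V > 0, V, 1.0) / M
    return float(np.sum(V * np.log(ratio) + M - V))

# For any factorization WH, loglik + divergence is the same constant,
# so maximizing one over (W, H) is minimizing the other.
consts = []
for _ in range(5):
    W = rng.uniform(0.5, 2.0, size=(F, K))
    H = rng.uniform(0.5, 2.0, size=(K, N))
    M = W @ H
    consts.append(poisson_loglik(V, M) + i_divergence(V, M))
assert np.allclose(consts, consts[0])
```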
46,721
What is the limiting distribution of exponential variates modulo 1?
By definition, the law of $X_n - \lfloor X_n\rfloor$ is $$F_n(x) = \Pr(X_n - \lfloor X_n\rfloor \le x)$$ for $0 \le x \lt 1$. The event $E^{(n)}: X_n - \lfloor X_n\rfloor \le x$ is a countable union of the disjoint events $E^{(n)}_i: i \le X_n \le i + x$ for $i=0, 1, 2, \ldots$. Therefore (because probability is countably summable) $$F_n(x) = \Pr(E^{(n)})= \sum_{i=0}^\infty \Pr(E^{(n)}_i).$$ When $X_n$ has an Exponential$(\lambda/n)$ distribution, $$\Pr(E^{(n)}_i) = \Pr( i \le X_n \le i + x) = e^{-\lambda i/n} - e^{-\lambda (i+x)/n} = \left(1 - e^{-\lambda x / n}\right)e^{-\lambda i/n},$$ producing $$F_n(x) = \left(1 - e^{-\lambda x / n}\right)\sum_{i=0}^\infty e^{-\lambda i/n}.$$ The last term sums a geometric series with initial term $1$ and common ratio $e^{-\lambda/n}$, immediately simplifying the whole expression to $$F_n(x) = \frac{1 - e^{-\lambda x/n}}{1 - e^{-\lambda/n}}.$$ The limiting value as $n\to \infty$ is most easily obtained with L'Hopital's Rule, $$\lim_{n\to\infty} F_n(x) = \lim_{n\to\infty} \frac{\lambda x e^{-\lambda x/n}}{\lambda e^{-\lambda/n}} = x\lim_{n\to\infty} e^{\lambda(1-x)/n} = x.$$ This is the law of the Uniform distribution on $[0, 1)$.
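A quick simulation illustrates the convergence (the rate $\lambda = 2$ and $n = 1000$ are arbitrary choices; note that NumPy's `exponential` takes the scale $n/\lambda$, the reciprocal of the rate):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n = 2.0, 1000                      # X_n ~ Exponential(rate lam/n)
x = rng.exponential(scale=n / lam, size=200_000)
frac = x - np.floor(x)                  # X_n mod 1

for t in (0.25, 0.5, 0.75):
    exact = (1 - np.exp(-lam * t / n)) / (1 - np.exp(-lam / n))
    empirical = float(np.mean(frac <= t))
    # Both should sit very close to the uniform limit F(t) = t.
    print(f"t={t}: empirical={empirical:.4f}, exact F_n(t)={exact:.4f}")
```

Both the empirical CDF of the fractional part and the exact $F_n$ above are already within about $0.001$ of the uniform CDF $F(t)=t$ at $n=1000$.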
46,722
Overdispersed poisson or negative binomial regression
Instead of overdispersed (or quasi-)poisson regression you can use the NB1 distribution, which has the same linear variance function as ODP and a full-fledged likelihood function instead of the quasilikelihood of ODP. NB1 is implemented in the gamlss package as family=NBII, whereas regular Negative Binomial can be called through family=NBI. All credit for this part of the answer goes to @Achim Zeileis for helping me with a similar question here: Why is the Quasipoisson in glm not treated as a special case of Negative Binomial? , see his post for more info regarding NB/NB1 (and the confusing naming conventions). Regarding ANOVA, I have not been able to find a built-in method for gamlss objects, but it is not hard to write your own implementation of the chi-squared test statistic. An example: set.seed(123) data = rNBII(100,mu = 6,sigma=0.5) #generate NB1 data with mean mu=6 and variance (1+sigma)*mu = 9 h0 = gamlss(data~1,family=PO) #null model: poisson h1 = gamlss(data~1,family=NBII) #alternative model: NB1/ODP df = h1$df.fit - h0$df.fit deviance = as.numeric(-2*logLik(h0) + 2*logLik(h1)) p.value = pchisq(deviance,df,lower.tail=F) > p.value [1] 0.01429169 #reject the null model at > 95% confidence
46,723
Overdispersed poisson or negative binomial regression
I am not sure exactly what is meant by "in standard R" but if you are open to downloading packages, I believe the pscl package's vuong function may do what you want. It implements a model comparison test that is designed specifically to compare non-nested models; it can compare nested ones as well, but there are more familiar ones that can serve that purpose. Like most other model comparison tests, it is based on comparing the likelihoods of the two models. The Vuong test involves some correction for parsimony and such as well. A decent summary is available at Wikipedia. Here's the original citation: Vuong, Q.H. (1989). Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica. 57(2). 307–333.
46,724
How to measure co-adaptation occuring in a multi-layer perceptron neural network that does not use a drop out?
I would argue that non-identity covariance in the hidden layer activations is one form of "co-adaptation." To compute the hidden layer covariance, just take your trained MLP and find a stack of data. Run the data through the MLP until you compute the activations for the hidden layer of interest: $$ H = \sigma(WX + B) $$ for an MLP with one hidden layer and activation function $\sigma$. Once you have these values, just compute their covariance: $$ S = \frac{1}{n}(H-\bar{H})(H-\bar{H})^\top. $$ To get a single numeric metric from the hidden feature covariance, you could use lots of different things, but I think it's convenient to compute the norm of the covariance less the identity matrix (i.e., the amount of "non-identity stuff" in the observed covariance matrix): $$ \ell = \|S-I\|_F. $$ Using this method you could train up different MLPs and test whether dropout has an impact on the hidden layer activations' tendencies to covary. Of course, there are many different ways to measure "co-adaptation" (or "dependence"); the covariance only measures dependence up through the second statistical moment. You could also do things like measure the kurtosis of the hidden activations, which would get you a different metric, but also one that measures co-adaptation in some sense---much in the same way that ICA models pursue independent components through several different losses. Independence is a really interesting topic in these types of models. There are lots of good papers out there, particularly in the ICA area, but for a great discussion of the different types of independence see Bell & Sejnowski (1997) "The 'Independent Components' of Natural Scenes are Edge Filters" (especially the discussion around Figure 2).
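Here is a minimal NumPy sketch of the covariance metric described above (the random weights stand in for a trained network, and the sigmoid is just one possible choice of $\sigma$):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-ins for a trained one-hidden-layer MLP and a batch of data.
n, d, h = 500, 10, 16                # examples, inputs, hidden units
X = rng.normal(size=(d, n))          # one column per example
W = rng.normal(size=(h, d))
B = rng.normal(size=(h, 1))

H = sigmoid(W @ X + B)               # hidden activations, shape (h, n)

# Covariance of the hidden features across the batch.
H_bar = H.mean(axis=1, keepdims=True)
S = (H - H_bar) @ (H - H_bar).T / n

# Co-adaptation score: Frobenius distance of S from the identity.
ell = float(np.linalg.norm(S - np.eye(h), "fro"))
```

To compare networks trained with and without dropout, you would compute `ell` for each on the same batch of data; a larger value indicates hidden features that covary more (or have non-unit variances), i.e. more co-adaptation in this second-moment sense.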
46,725
In practice, why do we convert categorical class labels to integers for classification
Scikit-learn only handles real numbers, I believe. So you need to do something like one-hot encoding, where n numerical dimensions are used to represent membership in the categories. If you just pass in strings they'll get cast to floats in unpredictable ways. There are mathematical reasons some methods (like SVM) need floats, i.e. they are only defined in the space of real numbers. Representing 3 categories as the values 1, 2, 3 in a single feature might work, but it may also yield suboptimal performance compared to one-hot encoding, since the split (1,3) vs (2) is difficult to pick up on unless the method can capture very non-linear behavior like that. Other methods, like random forests, can be made to work directly on categorical values, i.e. during decision-tree learning you can propose potential splits as different combinations of categories. For such methods it is often convenient to use ints to represent the categories, because an array of ints is much nicer to work with than an array of strings on a computational level. You can also do things like generate all possible combinations of n categories by looking at the bit values of an n-bit integer you are incrementing, which can be much faster and more memory-efficient than searching for splits over n floats.
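A minimal pure-Python sketch of one-hot encoding (the `one_hot` helper is hypothetical, not a scikit-learn API; in practice you would likely reach for something like scikit-learn's `OneHotEncoder`):

```python
def one_hot(labels):
    """Map categorical labels to one-hot vectors, one dimension per category."""
    categories = sorted(set(labels))
    index = {c: i for i, c in enumerate(categories)}
    return [[1 if index[lab] == j else 0 for j in range(len(categories))]
            for lab in labels], categories

vectors, cats = one_hot(["setosa", "virginica", "setosa", "versicolor"])
# cats    -> ['setosa', 'versicolor', 'virginica']
# vectors -> [[1, 0, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0]]
```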
46,726
In practice, why do we convert categorical class labels to integers for classification
For binary classification you usually use 0/1 or -1/1. Due to symmetry it does not matter which label corresponds to which class. For multiclass classification, e.g. 3-class classification, you cannot use 0, 1 and 2, because this way of labeling implies an order (I am not familiar with the Iris dataset though) and cannot be used for categorical data. One way to encode categorical labels is to use (1 0 0), (0 1 0) and (0 0 1). You can think of these labels as vertices of an equilateral triangle in 3-D; therefore, no order is implied. However, if you are using a binary classifier (such as SVM) instead of a truly multiclass classifier, you cannot use this labeling. Instead, multiple binary classifiers are trained and their results are somehow combined with each other. For example, if you have N categories you can train ${N \choose 2}$ classifiers, and for each pair you use labels 0/1 to indicate the two classes (out of N) you are training against each other. At test time a majority vote between all ${N \choose 2}$ classifiers can be used to make a prediction. If you are using an interface, perhaps it converts your 0/1/2 labels before interacting with the classifier(s), depending on what that classifier is.
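The one-vs-one majority vote described above can be sketched as follows; the `ovo_predict` helper and the pairwise decisions are hypothetical illustrations, not a library API:

```python
from itertools import combinations
from collections import Counter

classes = ["setosa", "versicolor", "virginica"]

def ovo_predict(pairwise_vote):
    """Majority vote over the N-choose-2 binary (one-vs-one) classifiers."""
    votes = Counter()
    for a, b in combinations(classes, 2):
        votes[pairwise_vote[(a, b)]] += 1     # each binary classifier casts one vote
    return votes.most_common(1)[0][0]

# Hypothetical decisions of the three pairwise classifiers for one test point:
example = {("setosa", "versicolor"): "versicolor",
           ("setosa", "virginica"): "virginica",
           ("versicolor", "virginica"): "versicolor"}
# ovo_predict(example) -> 'versicolor' (2 votes out of 3)
```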
46,727
In practice, why do we convert categorical class labels to integers for classification
It's just a matter of being practical. For binary classification the simplest way is using booleans, for multiclass it's integers. Most back-end libraries are written in statically typed languages (C/C++), and typically use the most basic type that gets the job done without losing information.
46,728
In practice, why do we convert categorical class labels to integers for classification
Some algorithms can handle only numerical inputs; this might be the main reason, although storage is another. Of course, some algorithms can do the conversion implicitly.
46,729
In practice, why do we convert categorical class labels to integers for classification
There are a few algorithms that by default take care of basic label encoding. But as a developer, you need to make sure that the data being passed to the model is a correct representation of the reality present in the data. For example, if your data has a column 'Engineer's Role' with the ordering Senior > Junior > Fresher, you need to label-encode the values as 3 > 2 > 1 respectively.
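A minimal sketch of that ordinal encoding; the helper name and the mapping layout are hypothetical:

```python
# Hypothetical ordinal mapping for the 'Engineer's Role' column:
role_order = {"Fresher": 1, "Junior": 2, "Senior": 3}

def encode_roles(roles):
    """Label-encode roles so the integers preserve Senior > Junior > Fresher."""
    return [role_order[r] for r in roles]

# encode_roles(["Senior", "Fresher", "Junior"]) -> [3, 1, 2]
```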
46,730
Kernel density estimation vs. machine learning for forecasting in large samples
(First off, I'd consider kernel density estimation a form of a machine learning model, so that's a strange dichotomy to make. But anyway.) If you really do have enough samples to do good density estimation, then the Bayes classifier formed via KDE, or its regression analogue the Nadaraya-Watson model, converges to the optimal model. Any drawbacks of this approach are then purely computational. (Naive KDE requires comparing each test point with every single training point, though you can get much better than that if you're clever.) The other problem is the enormous issue of bandwidth selection, but with a good enough training set this is again only a computational issue. In practice, however, you rarely actually have a good enough sample to perform highly accurate density estimation. Some issues:

- As the dimension increases, KDE rapidly needs many more samples; vanilla KDE is rarely useful beyond the order of 10 dimensions.
- Even in low dimensions, a density estimation-based model has essentially no ability to generalize; if your test set has any examples outside the support of your training distribution, you're likely screwed.

The reason for these drawbacks is that density estimation-type models assume only that the function being learned is fairly smooth (with respect to the kernel). Other models, by making stronger assumptions, can learn with many fewer training points when the assumptions are reasonably well-met. If you think it's likely that the function you're trying to learn is more or less a sparse linear function of its inputs, then LASSO will be much better at learning that model with a given number of samples than KDE. But if it turns out to be $f(x) = \begin{cases} 1 & \lVert x \rVert > 1\\0 & \text{otherwise}\end{cases}$, LASSO will do essentially nothing and KDE will learn more or less the right model pretty quickly.
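For concreteness, here is a minimal NumPy sketch of the Nadaraya-Watson estimator mentioned above; the Gaussian kernel, bandwidth, and toy data are arbitrary choices for illustration:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth):
    """Kernel-weighted local average of y: the regression analogue of KDE."""
    d = x_query[:, None] - x_train[None, :]      # pairwise query/train distances
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)         # weighted mean per query point

# Noiseless toy data: the estimator should track sin(x) in the interior.
x_train = np.linspace(0.0, 3.0, 300)
y_train = np.sin(x_train)
y_hat = nadaraya_watson(x_train, y_train, np.array([1.5]), bandwidth=0.15)
```

Note that every query point requires weights against all training points, which is exactly the computational cost flagged above.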
46,731
Kernel density estimation vs. machine learning for forecasting in large samples
From your statement, The ultimate goal is forecasting new realizations of $y$ given new realizations of $x$'s. , this already suggests that you want to do regression. I would go for (B). That is to estimate $\mathbb{E}[y | x]$. I am not so sure what you plan to use KDE on. I definitely will not use it to model the density of $x$ because that is not needed. Your goal is to predict $y$ given $x$. There is no need to care about the density of $x$. Perhaps you mean to use KDE to somehow estimate the conditional density $p(y|x)$. But then again, this is overkill because presumably $\mathbb{E}[y | x]$ should be enough for predicting $y$. Estimating $\mathbb{E}[y | x]$ is an easier problem compared to estimating $p(y|x)$. The methods you mention in (B) are for estimating $\mathbb{E}[y | x]$.
46,732
Why is the Confidence Interval Changing for this Time-Series
These are irregularly spaced data; there is even more than one observation at certain time points (e.g. 2 observations at times 8.8, 15.6, 15.8; 6 observations at time 14.6; 4 observations at time 15.4). These are the numbers of observations at each time:

table(mcycle[,1])
#  2.4  2.6  3.2  3.6    4  6.2  6.6  6.8  7.8  8.2  8.8  9.6   10 10.2 10.6   11
#    1    1    1    1    1    1    1    1    1    1    2    1    1    1    1    1
# 11.4 13.2 13.6 13.8 14.6 14.8 15.4 15.6 15.8   16 16.2 16.4 16.6 16.8 17.6 17.8
#    1    1    1    1    6    1    4    2    2    2    3    2    1    3    4    2
# 18.6 19.2 19.4 19.6 20.2 20.4 21.2 21.4 21.8   22 23.2 23.4   24 24.2 24.6   25
#    2    1    2    1    1    1    1    1    1    1    1    1    1    2    1    2
# 25.4 25.6   26 26.2 26.4   27 27.2 27.6 28.2 28.4 28.6 29.4 30.2   31 31.2   32
#    2    1    1    2    1    1    3    1    1    2    1    1    1    1    1    2
# 32.8 33.4 33.8 34.4 34.8 35.2 35.4 35.6 36.2   38 39.2 39.4   40 40.4 41.6 42.4
#    1    1    1    1    1    2    1    2    2    2    1    1    1    1    2    1
# 42.8   43   44 44.4   45 46.6 47.8 48.8 50.6   52 53.2   55 55.4 57.6
#    2    1    1    1    1    1    2    1    1    1    1    2    1    1

The asymmetry of the confidence interval may reflect different degrees of uncertainty depending on the amount of information around a time point. We may expect narrower confidence intervals at periods where more data are available, and vice versa. The following plot suggests that at periods with a higher concentration of observations (e.g. around times 14-15 and 26-27) the confidence interval is narrower.

plot(mcycle$times, pred[,3] - pred[,2])
mtext(side = 3, text = "width of confidence interval in the plot posted by the OP", adj = 0)

Also, be aware that even with evenly spaced data we would observe asymmetric confidence intervals at the beginning and the end of the sample. This is due to the initialization of the Kalman filter (forwards recursions), which typically involves a large variance attached to the initial state vector, and the initialization of the Kalman smoother, which is usually initialized with zeros at the end of the sample (backwards recursions). The width of the confidence interval converges to a fixed width.
For illustration, we can take this example where the local-level plus seasonal component model is fitted to a series recorded at regularly spaced times. As mentioned before, the confidence interval is symmetric except at the beginning and the end of the sample.

require("stsm")
data("llmseas")
m <- stsm.model(model = "llm+seas", y = llmseas)
res <- maxlik.fd.scoring(m = m, step = NULL, information = "expected",
  control = list(maxit = 100, tol = 0.001))
comps <- tsSmooth(res)
sse1 <- comps$states[,1] + 1.96 * comps$sse[,1]
sse2 <- comps$states[,1] - 1.96 * comps$sse[,1]
par(mfrow = c(2, 1), mar = c(3,3,3,3))
plot(ts.union(comps$states[,1], sse1, sse2), ylab = "", plot.type = "single", type = "n")
polygon(x = c(time(comps$states), rev(time(comps$states))), y = c(sse2, rev(sse1)),
  col = "lightgray", border = NA)
lines(comps$states[,1])
mtext(side = 3, text = "fitted level and 95% confidence interval", adj = 0)
plot(sse2 - sse1, plot.type = "single", ylab = "", type = "b", lty = 2, pch = 16)
mtext(side = 1, line = 2, text = "Time")
mtext(side = 3, text = "width of confidence interval in the top plot", adj = 0)
46,733
Maximum Likelihood Estimation of Dirichlet Mean
Suppose $\mathbf p_1, \ldots, \mathbf p_N$ are iid $\operatorname{Dirichlet}(s \mathbf m)$. If I'm understanding you correctly, your question is "why use an iterative scheme when $\hat {\mathbf m} = \frac 1 N \sum_{i = 1} ^ N \mathbf p_i$ works?" You are correct that this is a reasonable estimator. But it isn't the maximum likelihood estimator, which is what we care about! The Dirichlet likelihood is $$ L_i(\pmb \alpha) = \frac{\Gamma(\sum_k \alpha_k)}{\prod_k\Gamma(\alpha_k)} \prod_k p_{ik}^{\alpha_k - 1} $$ so our goal is to maximize $\prod_i L_i (\pmb \alpha)$ in $\pmb \alpha$; once we do this, we can get the maximum likelihood estimate of $\mathbf m$ by normalizing. But it is easy to see that the likelihood is a function of $\frac 1 N \sum_i \log \mathbf p_i$ rather than $\frac 1 N \sum_i \mathbf p_i$ (I'm using $\log$ elementwise here). In some sense, we might think of $\log \mathbf p_i$ as the "appropriate scale" of the data - at least, for the Dirichlet distribution - rather than the untransformed $\mathbf p_i$. So, we believe that the MLE is not $\frac 1 N \sum_i \mathbf p_i$ but rather is some complicated function of $\frac 1 N \sum_i \log \mathbf p_i$. The question now becomes "why use the MLE rather than the easy estimator?" Well, we have some theorems which say the MLE has certain optimality properties. So, we get a more efficient estimator with the MLE, although $\frac 1 N \sum_i \mathbf p_i$ may still be useful as a starting point for the iterative algorithm. Now, I'm not sure how good the MLE really is here, considering that the data must actually be Dirichlet distributed for it to work, whereas $\frac 1 N \sum \mathbf p_i$ is consistent no matter what. But that is another story.
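A sketch contrasting the two estimators, assuming SciPy is available; the sample size, the true parameters, and the Nelder-Mead search over $\log \pmb \alpha$ are arbitrary illustration choices:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import dirichlet

rng = np.random.default_rng(0)
alpha_true = np.array([6.0, 3.0, 1.0])    # s*m with s = 10, m = (0.6, 0.3, 0.1)
p = rng.dirichlet(alpha_true, size=2000)  # iid Dirichlet samples, shape (N, K)

m_naive = p.mean(axis=0)                  # the "easy" estimator (1/N) sum p_i

# MLE: the log-likelihood depends on the data only through mean(log p).
# Optimize over log(alpha) so that alpha stays positive.
def nll(log_alpha):
    return -dirichlet.logpdf(p.T, np.exp(log_alpha)).sum()

res = minimize(nll, x0=np.zeros(3), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
alpha_mle = np.exp(res.x)
m_mle = alpha_mle / alpha_mle.sum()       # normalize to estimate the mean m
```

Here both estimates land close to the true $\mathbf m$, but the MLE reaches it through $\frac 1 N \sum_i \log \mathbf p_i$ rather than $\frac 1 N \sum_i \mathbf p_i$, as described above.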
46,734
Extension of Median to big data integer distributions?
The trimmed mean is, from where you are starting, one generalisation of the median. If you trim (meaning, ignore rather than drop) 3 values in each tail of an ordered sample of 7 then you get the median; if you trim 0 values, then you get the mean. For small samples, thinking in terms of the number trimmed is natural. Here is a Stata-based calculation with your "data" using code published with Cox (2013), but the output should be fairly transparent to users of other software:

set obs 7
mat A = (1, 1, 1, 2, 2, 2, 3)
mat B = (1, 2, 2, 2, 3, 3, 3)
gen A = A[1, _n]
gen B = B[1, _n]

trimmean A, number(0/3)

  +---------------------------+
  | number   #   trimmed mean |
  |---------------------------|
  |      0   7       1.714286 |
  |      1   5       1.6      |
  |      2   3       1.666667 |
  |      3   1       2        |
  +---------------------------+

trimmean B, number(0/3)

  +---------------------------+
  | number   #   trimmed mean |
  |---------------------------|
  |      0   7       2.285714 |
  |      1   5       2.4      |
  |      2   3       2.333333 |
  |      3   1       2        |
  +---------------------------+

As is common, results are shown to more decimal places than will be needed. For larger samples, it is more natural, and certainly conventional, to think in terms of the fraction or percent trimmed. The 25% trimmed mean has been given various names, the most common being "midmean". (Those familiar with box plots can think of it as the mean of the values falling inside the box.)

The advantages of trimmed means include

- Ease of understanding and calculation. Trimmed means are used in judging sports as a way of discounting or discouraging bias in voting, so they may even be familiar to users of statistics from outside the field.
- Clear links to standard ideas, mean and median.
- Flexibility in choosing that mix of resistance to wild values and use of the information in the other values that is a good trade-off in a project.

The disadvantages include

- Flexibility is another name for arbitrariness.
- It's not easy to see what the best extensions to bivariate or multivariate cases would be.
Values are included or not, at least in the simplest flavour of trimmed means, which may not be subtle enough. Trimmed means other than the limiting cases of mean and median lose many of the attractive properties of either, including the equivariance of median and monotonic transformations emphasised by @whuber. Cox (2013) is a tutorial review emphasising the history of ideas and associated graphics. (It overlooks a brief mention by Jules Verne.) Cox, N. J. 2013. Speaking Stata: Trimming to taste. Stata Journal 13: 640-666. http://www.stata-journal.com/article.html?article=st0313
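For readers without Stata, the same table is easy to reproduce; this is a minimal pure-Python sketch (the trimmed_mean helper here is my own, not part of the Stata code above):

```python
def trimmed_mean(values, k):
    """Mean after ignoring the k smallest and k largest values."""
    s = sorted(values)
    kept = s[k:len(s) - k]
    return sum(kept) / len(kept)

A = [1, 1, 1, 2, 2, 2, 3]
B = [1, 2, 2, 2, 3, 3, 3]

for name, data in [("A", A), ("B", B)]:
    for k in range(4):  # trim 0..3 values per tail
        print(name, k, round(trimmed_mean(data, k), 6))
```

Trimming 0 reproduces the means (1.714286 and 2.285714) and trimming 3 reproduces the medians, matching the Stata output.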
46,735
Extension of Median to big data integer distributions?
I disagree with your characterization of the B median as "upper 2", because its mean is 16/7 = 2.29. You alluded to the fact that you didn't like the mean because the distribution is skewed, so characterizing the median as "upper 2" would be inconsistent with the sample mean. The mean of sample A is 1.71. Hence, the central tendency is probably high 1 and low 2 for samples A and B, respectively. I propose using a weighted average of mean and median: $m = w \cdot \text{mean} + (1-w) \cdot \text{median}$. In your case the median = 2, and the A and B means are 12/7 and 16/7. So, if you use $w = 1/3$, then $m = 1.9$ and $2.1$, which would be consistent with the high 1 and low 2 characterization proposed above. You can play with the weight $w$ to get a better metric for your study. High $w$ will make it look more like the mean, and low $w$ will make it more like the median.
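The proposed blend is easy to check numerically; here is a small Python sketch (the function name blended_center is mine, purely for illustration):

```python
from statistics import mean, median

def blended_center(data, w):
    """Weighted average of mean and median: m = w*mean + (1-w)*median."""
    return w * mean(data) + (1 - w) * median(data)

A = [1, 1, 1, 2, 2, 2, 3]
B = [1, 2, 2, 2, 3, 3, 3]

# With w = 1/3 the blend lands near "high 1" for A and "low 2" for B.
print(round(blended_center(A, 1 / 3), 2))  # → 1.9
print(round(blended_center(B, 1 / 3), 2))  # → 2.1
```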
46,736
When to use offset() in negative binomial/poisson GLMs in R
1) What is the best way of determining when to use neg. binom. vs. poisson?

A common way (not necessarily the best --- what's 'best' depends on your criteria for bestness) to decide this would be to see if there's overdispersion in a Poisson model (e.g. by looking at the residual deviance). For example, look at summary(glm(count ~ spray, InsectSprays, family = poisson)) - this has a residual deviance of 98.33 for 66 df. That's about 50% larger than we'd expect, so it's probably big enough that it could matter for your inference. [If you want a formal test, pchisq(98.33, 66, lower.tail = FALSE), but formal testing of assumptions is generally answering the wrong question.] So I'd be inclined to consider a negative binomial for that case.

More generally, if you're not reasonably confident that the Poisson makes sense, you could simply use the negative binomial as a default, since it encompasses the Poisson as a limiting case.

2) Is this an appropriate instance to include sampling time in an offset term?

Yes, that's appropriate, and it would be my first instinct to include sampling time as an offset (rather than a predictor), since the count would be expected to simply be proportional to the length of the sampling interval.
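The pchisq check translates outside R too; this sketch approximates the chi-squared upper tail with the Wilson-Hilferty cube-root transform (an approximation I'm substituting for R's exact routine):

```python
import math

def chisq_upper_tail(x, df):
    """Approximate P(Chi2_df > x) via the Wilson-Hilferty cube-root transform."""
    z = ((x / df) ** (1 / 3) - (1 - 2 / (9 * df))) / math.sqrt(2 / (9 * df))
    return 0.5 * math.erfc(z / math.sqrt(2))

# Residual deviance 98.33 on 66 df from the InsectSprays Poisson fit:
p = chisq_upper_tail(98.33, 66)
print(p)  # small (roughly 0.006): larger deviance than chance alone explains
```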
46,737
When to use offset() in negative binomial/poisson GLMs in R
What is the best way of determining when to use negative binomial vs. Poisson?

Answer: Poisson GLMs assume the mean and variance of the response variable are approximately equal. Overdispersion can occur when this assumption is not met; the variance in the data is naturally larger than the mean. This situation is termed "true overdispersion". True overdispersion is dealt with by fitting a model to the data such that the variance is greater than the mean in the response variable. The negative binomial GLM does not assume that the variance of the response variable is equal to its mean and, therefore, can be used to model overdispersed data, which is a common property of ecological data.

To check whether a model is overdispersed, we divide the residual deviance by the residual degrees of freedom. For example:

    model1 <- glm(weight ~ height + age, data = df1, family = poisson(link = "log"))
    ods <- model1$deviance / model1$df.residual
    ods

If the value of ods is around 1, then the model is not overdispersed. If ods is around 2 or above, the model is overdispersed and the predictions/inferences from the model output can be problematic. In such a situation, the negative binomial can be used because it does not assume that the variance of the response variable is equal to its mean.

Is this an appropriate instance to include sampling time in an offset term? In most cases, sampling occurs for 10 minutes, but it is sometimes 15 or 20 minutes.

Answer: Yes, because more sampling effort means more species are counted. To give each sampling effort an equal opportunity/weight in the model, we need to use this as an offset term. The same logic can be applied to locations with variable survey efforts when observing species abundance (counts).
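The reason the offset is natural can be seen in a toy simulation; this sketch assumes a made-up constant rate and checks that count/duration is stable across window lengths (pure Python, not the poster's data):

```python
import math
import random

def rpois(lam):
    """Knuth's Poisson sampler (fine for small rates)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def mean_rate(rate_per_min, minutes, n=2000):
    """Average observed count per minute over n simulated sampling windows."""
    counts = [rpois(rate_per_min * minutes) for _ in range(n)]
    return sum(counts) / n / minutes

random.seed(42)
# Counts grow with the window length, but count/minutes recovers the same
# underlying rate -- which is exactly what offset(log(minutes)) encodes.
for minutes in (10, 15, 20):
    print(minutes, round(mean_rate(0.8, minutes), 2))
```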
46,738
Clustering data into bins of variable sizes
A dynamic program to minimize the sum of group variances subject to these constraints is simple and reasonably fast, especially for such a narrow range of group sizes. It reproduces the posted solution.

The data are plotted as point symbols. The groups are color-coded and separated by vertical lines. Group means are plotted as horizontal lines.

Commented R code follows. It computes the solution recursively, achieving efficiency by caching the results as it goes along. The program cluster(x,i) finds (and records) the best solution starting at index i in the data array x by searching among all feasible windows of lengths n.min through n.max beginning at index i. It returns the best value found so far (and, within the global variable cache$Breaks, leaves behind an indicator of the indexes that start each group). It can process arrays of thousands of elements in seconds, depending on how large the range n.max-n.min is. For larger problems it would have to be improved to include some branch-and-bound heuristics to limit the amount of searching.

    #
    # Univariate minimum-variance clustering with constraints.
    # Requires a global data structure `cache`.
    #
    cluster <- function(x, i) {
      #
      # Cluster x[i:length(x)] recursively.
      # Begin with the terminal cases.
      #
      if (i > cache$Length) return(0)                     # Nothing to process
      cache$Breaks[i] <<- FALSE                           # Unmark this break
      if (i + cache$n.min - 1 > cache$Length) return(Inf) # Interval is too short
      if (!is.na(v <- cache$Cache[i])) return(v)          # Use the cached value
      n.min <- cache$n.min + i-1                          # Start of search
      n.max <- min(cache$n.max + i-1, cache$Length)       # End of search
      if (n.max < n.min) return(0)                        # Prevents `R` errors
      #
      # The recursion: accumulate the best total within-group variances.
      # To implement other objective functions, replace `var` by any measure of
      # within-group homogeneity.
      #
      values <- sapply(n.min:n.max, function(k) var(x[i:k]) + cluster(x, k+1))
      #
      # Find and store the best result.
      #
      j <- which.min(values)
      cache$Breaks[n.min + j] <<- TRUE                    # Mark this as a good break
      cache$Cache[i] <<- values[j]                        # Cache the result
      return(values[j])                                   # Pass it to the caller
    }
    #
    # The data.
    #
    x <- c(3,2,1,3,4,5,0,0,0,1,2,3,2,8,9,10,9,8,2,3,4,9,5,3)
    #
    # Initialize `cache` to specify the constraints; and run the clustering.
    #
    system.time({
      n <- length(x)
      cache <- list(n.min=4, n.max=10,      # The length constraints
                    Cache=rep(NA, n),       # Values already found
                    Breaks=rep(FALSE, n+1), # Group start indexes
                    Length=n)               # Cache size
      cluster(x, 1)                         # I.e., process x[1:n]
      cache$Breaks[1] <- TRUE               # Indicate the start of the first group
    })
    #
    # Display the results.
    #
    breaks <- (1:(n+1))[cache$Breaks]       # Group start indexes
    groups <- cumsum(cache$Breaks[-(n+1)])  # Group identifiers
    averages <- tapply(x, groups, mean)     # Group summaries
    colors <- terrain.colors(max(groups))   # Group plotting colors
    plot(x, pch=21, bg=colors[groups], ylab="Rating")
    abline(v = breaks-1/2, col="Gray")
    invisible(mapply(function(left, right, height, color) {
      lines(c(left, right)-1/2, c(height, height), col=color, lwd=2)
    }, breaks[-length(breaks)], breaks[-1], averages, colors))
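For comparison, the same recursion is compact in Python with functools.lru_cache; this is a sketch of the idea, not a line-for-line port (it returns the groups directly instead of using a global cache, and omits the plotting):

```python
from functools import lru_cache

def cluster(x, n_min=4, n_max=10):
    """Partition x into consecutive groups of size n_min..n_max,
    minimising the total within-group sample variance."""
    n = len(x)

    def var(vals):  # sample variance, matching R's var()
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)

    @lru_cache(maxsize=None)
    def best(i):
        if i == n:
            return 0.0, ()
        result = (float("inf"), None)
        for k in range(i + n_min, min(i + n_max, n) + 1):
            tail_cost, tail_groups = best(k)
            if tail_groups is None:
                continue  # no feasible split of the remainder
            cost = var(x[i:k]) + tail_cost
            if cost < result[0]:
                result = (cost, ((i, k),) + tail_groups)
        return result

    return best(0)

x = (3, 2, 1, 3, 4, 5, 0, 0, 0, 1, 2, 3, 2, 8, 9, 10, 9, 8, 2, 3, 4, 9, 5, 3)
cost, groups = cluster(x)
print(cost, groups)  # groups are (start, end) index pairs, end exclusive
```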
46,739
Confusion in regression function derivation
I'm not sure all that effort by Zhanxiong was necessary. Just expand $$E_{Y|X}([Y-c]^2) = E_{Y|X}Y^2 - 2cE_{Y|X}Y + c^2$$ and note that $c = E_{Y|X}Y$ minimizes the expression (take the derivative in $c$, or complete the square).
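A quick numerical illustration of this parabola-in-$c$ argument (the conditional sample below is made up for the sketch):

```python
# For fixed x, E[(Y - c)^2] = E[Y^2] - 2c E[Y] + c^2 is a parabola in c,
# minimised at c = E[Y].
ys = [0.0, 1.0, 1.0, 3.0]   # assumed conditional sample of Y given X = x
ey = sum(ys) / len(ys)      # E[Y | X = x] = 1.25

def mse(c):
    return sum((y - c) ** 2 for y in ys) / len(ys)

grid = [i / 100 for i in range(0, 301)]  # candidate values of c
best_c = min(grid, key=mse)
print(best_c)  # → 1.25, i.e. the conditional mean
```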
46,740
Confusion in regression function derivation
This can be shown by a classical method used everywhere related to least squares estimation and conditional expectation. Let $f(x) = E(Y|X = x)$, then write: $$E_{Y|X}[(Y - c)^2|X = x] = E_{Y|X}[(Y - f(x) + f(x) - c)^2|X = x]$$ Expand the complete square and show that the cross product term is 0 as follows: $$E_{Y|X}[(Y - f(x))(f(x) - c)|X = x] = (f(x) - c)E_{Y|X}[Y - f(x)|X = x] = (f(x) - c)(f(x) - f(x)) = 0$$ where the first equality follows from the fact that $f(x) - c$ is a function of $x$ (technically, $\sigma(X)$-measurable) and thus can be taken out of the conditional expectation. The second equality holds due to the linearity of expectation and our definition of $f(x)$. Therefore, $$E_{Y|X}[(Y - c)^2|X = x] = E_{Y|X}[(Y - f(x) + f(x) - c)^2|X = x] = E_{Y|X}[(Y - f(x))^2|X = x] + (f(x) - c)^2 \geq E_{Y|X}[(Y - f(x))^2|X = x]$$ And the equality can be attained by taking $c = f(x)$, which is the solution.
46,741
How do you evaluate a generative model?
Discriminative algorithms model P(Class|variables), whereas generative algorithms model P(Class,variables) = P(Class|variables) * P(variables). Hence, by modelling the joint distribution of the variable space, generative algorithms model the underlying process that 'created' your data. My point in starting with this first paragraph is to note that generative algorithms have discriminative properties. Therefore, the same method of evaluating the predictive performance - "compare the predictions with ground truth, using cross-validation" - applies to generative models as well as discriminative ones.

However, as you imply, we can additionally assess the ability of the generative algorithms in modelling the underlying process that generates data. A commonly used group of metrics for this is "information theoretic scores" that derive from the idea of likelihood (log-likelihood). Below are some well-known information theoretic scores:

1. log-likelihood (LL) score
2. minimum description length (MDL) score
3. minimum message length (MML) score
4. Akaike Information Criterion (AIC) score
5. Bayesian Information Criterion (BIC) score

Note that 2, 3, 4, and 5 use some complexity penalisation factor over the LL score. This is good practice to combat over-fitting.
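As a worked example of the likelihood-based scores, this sketch computes LL, AIC = 2k - 2LL, and BIC = k ln(n) - 2LL for a Gaussian fitted by maximum likelihood to a made-up sample:

```python
import math

data = [2.1, 1.9, 2.4, 2.0, 1.8, 2.2, 2.3, 1.7, 2.0, 2.1]
n = len(data)
mu = sum(data) / n
var = sum((x - mu) ** 2 for x in data) / n   # MLE variance (divide by n)

# Gaussian log-likelihood evaluated at the MLE
ll = sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
         for x in data)

k = 2  # fitted parameters: mu and var
aic = 2 * k - 2 * ll
bic = k * math.log(n) - 2 * ll
print(ll, aic, bic)
```

Both scores share the -2LL term; BIC's penalty k ln(n) exceeds AIC's 2k whenever n > e², so it punishes extra parameters harder on this sample.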
46,742
Binary classification in imbalanced data
A few general strategies:

First and foremost, in imbalanced classification problems you want to do stratified cross-validation. This allows you to train your models with the same class distribution in your samples.

Second, you should probably use Cohen's Kappa metric when tuning your models. It is better in imbalanced scenarios because it takes into account random chance as well. A more detailed description was provided in the answer to this question.

If you are adventurous, you can look into cost-sensitive machine learning. In these methods you essentially tell the algorithm that it is better to positively identify certain classes. For example, misclassifying a person who has cancer as healthy is much worse than the reverse mistake. There are many methods, including sampling (over, under, SMOTE, SMOTEBoost and EasyEnsemble, as referenced in this prior question regarding imbalanced datasets and CSL), weighting, thresholding, and ensemble methods. These are mostly algorithm-agnostic methods; there are also algorithms with CSL built in, but I think this is enough to get you started.
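To see why Kappa beats raw accuracy here, consider a degenerate classifier that always predicts the majority class; this sketch computes both by hand on an assumed 90/10 split:

```python
def accuracy_and_kappa(y_true, y_pred):
    """Observed agreement (accuracy) and Cohen's kappa from two label lists."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n  # observed agreement
    # Expected agreement under chance, from the marginal class frequencies.
    pe = sum((y_true.count(c) / n) * (y_pred.count(c) / n)
             for c in set(y_true) | set(y_pred))
    return po, (po - pe) / (1 - pe)

y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100            # always predict the majority class

acc, kappa = accuracy_and_kappa(y_true, y_pred)
print(acc, kappa)  # accuracy looks good (0.9) but kappa is 0.0
```

Kappa correctly scores the majority-class guesser as no better than chance, which is exactly the behaviour you want from a tuning metric on imbalanced data.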
46,743
Binary classification in imbalanced data
I have faced the same problem trying to predict a single emotion in the RAVDESS dataset. The thing that helped me is to provide the model with initial bias and weights; in this way, the model takes care of the class differences through the data.

You can set up a good initialization bias as follows
$$ b_0 = \log_e\left(\text{# negative labels}\right)\\ b_1 = \log_e\left(\text{# positive labels}\right) $$
and good initialization weights for the output layer as follows
$$ w_0 = \frac{1}{2} \cdot \frac{\text{# total samples}}{\text{# negative labels}}\\[15pt] w_1 = \frac{1}{2} \cdot \frac{\text{# total samples}}{\text{# positive labels}} $$
where $w_0$ is the weight for the negative class and $w_1$ for the positive one.

The idea is that a better bias initialization helps the initial convergence, while a good weight initialization helps because you don't have very many of those positive (negative) samples to work with, so you would want the classifier to heavily weight the few available examples.

You can plug the output_bias values inside the model as follows:

    b0 = np.log(neg)
    b1 = np.log(pos)
    output_bias = tf.keras.initializers.Constant([b0, b1])
    ...
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(2, activation="softmax", bias_initializer=output_bias))

and the initial class_weights in this way:

    ...
    weight_for_negative = (1 / neg) * (total / 2.0)
    weight_for_positive = (1 / pos) * (total / 2.0)
    class_weight = {0: weight_for_negative, 1: weight_for_positive}
    ...
    model_history = model.fit(x_traincnn, y_train,
                              batch_size = 128,
                              epochs = 800,
                              validation_data = (x_validcnn, y_valid),
                              #callbacks = [mcp_save, lr_reduce, early_stopping, backup])
                              callbacks = [mcp_save, lr_reduce, early_stopping, tensorboard],
                              class_weight = class_weight)

Hope this will help.

Tips: I recommend also checking the F-score, precision, and recall metrics to better interpret the model.

Bibliography: TensorFlow Blog - Imbalanced data classification
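A quick sanity check of those formulas in plain Python (with made-up counts neg = 900, pos = 100): the softmax of the biases reproduces the class frequencies, and the weights make each class contribute equally in total:

```python
import math

neg, pos = 900, 100
total = neg + pos

# Bias initialisation: softmax([log(neg), log(pos)]) = class frequencies,
# so the untrained model already predicts the base rates.
b0, b1 = math.log(neg), math.log(pos)
z = math.exp(b0) + math.exp(b1)
p_neg, p_pos = math.exp(b0) / z, math.exp(b1) / z
print(round(p_neg, 6), round(p_pos, 6))  # → 0.9 0.1

# Weight initialisation: w0 * neg == w1 * pos == total / 2, i.e. each class
# contributes the same total weight to the loss.
w0 = (1 / neg) * (total / 2.0)
w1 = (1 / pos) * (total / 2.0)
print(w0 * neg, w1 * pos)
```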
Binary classification in imbalanced data
I have faced the same problem trying to predict a single emotion in the RAVDESS dataset. The thing that helped me is: to provide the model with the initial bias and weights; in this way, the model tak
Binary classification in imbalanced data
I have faced the same problem trying to predict a single emotion in the RAVDESS dataset. What helped me was providing the model with an initial output bias and class weights; this way, the model accounts for the class imbalance through the data. You can set up a good initial bias as follows $$ b_0 = \log_e\left(\text{# negative labels}\right)\\ b_1 = \log_e\left(\text{# positive labels}\right) $$ and good class weights as follows $$ w_0 = \frac{1}{2} \cdot \frac{\text{# total samples}}{\text{# negative labels}}\\[15pt] w_1 = \frac{1}{2} \cdot \frac{\text{# total samples}}{\text{# positive labels}} $$ where $w_0$ is the weight for the negative class and $w_1$ for the positive one. The idea is that a better bias initialization helps the initial convergence, while good class weights help because you don't have many positive (negative) samples to work with, so you want the classifier to weight the few available examples heavily. You can plug the bias values into the model as follows:
b0 = np.log(neg)   # neg = number of negative samples
b1 = np.log(pos)   # pos = number of positive samples
output_bias = tf.keras.initializers.Constant([b0, b1])
...
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(2, activation="softmax", bias_initializer=output_bias))
and the class weights in this way:
weight_for_negative = (1 / neg) * (total / 2.0)
weight_for_positive = (1 / pos) * (total / 2.0)
class_weight = {0: weight_for_negative, 1: weight_for_positive}
...
model_history = model.fit(x_traincnn, y_train,
                          batch_size = 128,
                          epochs = 800,
                          validation_data = (x_validcnn, y_valid),
                          callbacks = [mcp_save, lr_reduce, early_stopping, tensorboard],
                          class_weight = class_weight)
Hope this helps. Tip: I also recommend checking the F-score, precision, and recall metrics to better interpret the model.
Bibliography: TensorFlow Blog - Imbalanced data classification
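As a quick numeric sketch of the two formulas above, here they are in plain Python with hypothetical counts (neg = 800, pos = 200):

```python
import math

# Hypothetical counts for an imbalanced binary problem
neg, pos = 800, 200
total = neg + pos

# Initial output biases: log of the class counts, so the initial softmax
# output reproduces the class proportions instead of 50:50
b0, b1 = math.log(neg), math.log(pos)
p0 = math.exp(b0) / (math.exp(b0) + math.exp(b1))  # ~0.8, the negative share

# Class weights: inverse frequency, scaled so the two weights average to 1
w0 = (1 / neg) * (total / 2.0)  # weight for the majority (negative) class
w1 = (1 / pos) * (total / 2.0)  # weight for the minority (positive) class
```

With these numbers the minority class gets weight 2.5 and the majority class 0.625, so each minority example counts four times as much in the loss.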
46,744
Binary classification in imbalanced data
If you use an SVM, you can set the parameter class_weight='balanced', so the class weights are taken into account by the classifier during training (see the sklearn docs). If you want a threshold different from 0.5, you can move the threshold: if you think false positives are worse than false negatives, you can take this into account either in the costs in the code or at the decision-making stage. (N.B. use predict_proba for a newly predicted sample and compare it with the threshold you decided on, e.g. estimator.predict_proba() < 0.3 or < 0.7 instead of estimator.predict().) P.S. Also take a look at Are unbalanced datasets problematic and Is threshold moving unnecessary in balanced classification problems.
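A minimal sketch of the threshold-moving idea, using hypothetical predicted probabilities (any predict_proba-style output for the positive class would do):

```python
# Hypothetical predicted probabilities for the positive class
probs = [0.15, 0.35, 0.55, 0.75, 0.95]

# Default decision rule: cut at 0.5
default_preds = [int(p >= 0.5) for p in probs]

# If false negatives are costlier, lower the threshold to catch more positives
lenient_preds = [int(p >= 0.3) for p in probs]

print(default_preds)  # [0, 0, 1, 1, 1]
print(lenient_preds)  # [0, 1, 1, 1, 1]
```

The model itself is unchanged; only the cut-off applied to its probabilities moves, which is why this can be done at the decision-making stage.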
46,745
Binary classification in imbalanced data
I'd like to answer the actual question asked: does it make sense to apply specific strategies? 80:20 can be interpreted as not-so-imbalanced data. It depends on the accuracy of your model. If your final model has an accuracy of 99.9%, then please do not do anything to balance it; your model already finds everything. If you have an accuracy of 80%, be careful: maybe your model is classifying everything into the big group. I would say it really depends on the output. I always try a really basic model first, then look at the results: what is good, what is not. If it's enough for you, keep it this way; if it's not, these results become the benchmark you want to beat. Once you have this benchmark, you can try different algorithms which can help with the balance.
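To see why 80% accuracy deserves caution on 80:20 data: a "model" that puts everything in the big group already scores 80%. A toy check with hypothetical labels:

```python
# 80:20 labels; a degenerate classifier that always predicts the majority class
labels = [0] * 80 + [1] * 20
preds = [0] * len(labels)

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(accuracy)  # 0.8 -- the benchmark any real model has to beat
```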
46,746
What to do when rejecting a proposed point in MCMC?
The validation of the Metropolis-Hastings algorithm relies on repeating the current value in the Markov chain if the proposed value is rejected. You should not consider the list of accepted points as your sample but instead the Markov chain with transition \begin{align*} X_{t+1} &= Y_{t+1} \quad&\text{if } U_{t+1}\le \pi(Y_{t+1})/\pi(X_t)\\ &= X_t \quad&\text{otherwise} \end{align*} (assuming a symmetric proposal distribution). The repetition of the current value in the event of a rejection is what makes the algorithm valid, i.e., why $\pi$ is the stationary distribution. It is always possible to study the distribution of the accepted and of the rejected values, with some recycling possible by Rao-Blackwellisation, but this study is more advanced and far from necessary to understand the algorithm.
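As a sketch of the transition above, here is a minimal random-walk Metropolis sampler in pure Python (standard normal target, symmetric Gaussian proposal; all names are illustrative). Note that on rejection the current value is appended again, so repeats are part of the sample:

```python
import math
import random

random.seed(1)

def metropolis(log_pi, x0, n_iter=10000, step=1.0):
    """Random-walk Metropolis; rejected proposals repeat the current value."""
    x = x0
    chain = []
    for _ in range(n_iter):
        y = x + random.gauss(0.0, step)  # symmetric proposal Y ~ N(x, step^2)
        # Accept with probability min(1, pi(y)/pi(x)), done on the log scale
        if math.log(random.random()) <= log_pi(y) - log_pi(x):
            x = y
        # On rejection, x is unchanged and gets duplicated in the chain
        chain.append(x)
    return chain

# Target: standard normal, via its unnormalised log-density
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0)

# The repeats are exactly what makes pi the stationary distribution:
# the sample mean of the full chain (repeats included) is near 0
mean = sum(chain) / len(chain)
```

Dropping the repeated values and keeping only the accepted proposals would bias the sample away from $\pi$.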
46,747
Standardized coefficients for linear models with numeric and factor variables in multiple linear regression using scale() function in R
I'm not sure that standardized coefficients make much sense when you have dummy variables. The idea of a standardized coefficient is that it puts the units of the predictor variable into a form that we understand. I have a sense of what a standard deviation is, whereas if I don't know what time is (or whether it's measured in seconds, minutes, hours, days, or weeks), I can't interpret the units. If you've got a factor, your measures are dummy coded - you have diet2, or you don't; I know what that scale means. Andrew Gelman suggests that instead of dividing by the SD, we divide by 2 SDs, and this makes the effect of a continuous variable comparable to the effect of a dummy-coded variable. Paper here: http://www.stat.columbia.edu/~gelman/research/published/standardizing7.pdf , blog entry here: http://andrewgelman.com/2006/06/21/standardizing_r/ Anyway, what you do is not quite the right way, because you don't want standardized coefficients for dummy (factor) variables. But as long as you describe them appropriately, it's OK. If you really want to, you can standardize the variables before you do the analysis and get truly standardized coefficients. These will be kind of meaningless though:
ChickWeight$d2 <- scale(ChickWeight$Diet == 2)
ChickWeight$d3 <- scale(ChickWeight$Diet == 3)
ChickWeight$d4 <- scale(ChickWeight$Diet == 4)
bb <- lm(scale(weight) ~ scale(Time) + d2 + d3 + d4, data=ChickWeight)
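A small numeric illustration of the Gelman rescaling (pure Python, hypothetical data): dividing a centred predictor by two standard deviations leaves it with SD 0.5, the same SD as a balanced 0/1 dummy, which is what makes the coefficients comparable.

```python
import statistics as st

# Hypothetical continuous predictor
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

# Gelman's rescaling: centre, then divide by 2 standard deviations
x_scaled = [(xi - st.mean(x)) / (2 * st.stdev(x)) for xi in x]

# The rescaled predictor has SD 0.5 -- the same as a 50/50 binary variable,
# so its coefficient is on a footing comparable to a dummy's
print(st.stdev(x_scaled))  # ~0.5
```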
46,748
Empirical rejection rate of a wrong null hypothesis -is it the "power of the test"?
The usual way to investigate the power properties is via a power curve (or sometimes a power surface, if we want to investigate the response to varying two things at once). On these curves, the y-variable is the rejection rate and the x-value takes the particular value of the thing we're varying. The most common type of power curve is one we produce as we vary the parameter that is the subject of the test (e.g. in a test of means, as the true mean changes from the hypothesized value). Here's an example of a power curve for a two-sample t-test under a particular set of conditions:
[figure: power curve for a two-sample t-test]
(that one was not generated empirically, but by calling a function) Here's a power comparison of a paired t-test (curve) and the signed rank test (points) for 4 pairs of normal observations (it's actually two-sided, but the left half isn't shown as it's a mirror image of the right half). The t-test was carried out at the exact significance level of the signed rank test (since the latter can only attain a few significance levels). Here's a pair of (one-sided) power curves for a power comparison in a test of normality, where the alternatives are gamma distributed (which include the normal as a limiting case, under appropriate standardization): (this one was generated empirically, in essentially the manner you describe) As you suggest, at some specified value of the alternative you can compute the power, and then as you vary that value you obtain a function of the parameter you change, giving a power curve (or, more strictly, a curve of rejection rates, since at the null that's not power but significance level). I've generated quite a few of these curves in various of my answers. See here for another example that compares power from an "algebraic" calculation (/function call) and an empirical calculation (i.e. simulation).
Some general advice relating to empirical power:
1) Since these are empirical rejection rates (i.e. binomial proportions), we can compute standard errors, confidence intervals and so on, so you know how accurate they are. If I can spare the time I usually simulate enough samples that the standard error is on the order of a pixel in my image (or even somewhat less), at least if it's not a large image (if you're doing vector graphics, think maybe half a percent or so of the height dimension of the plot instead).
2) Power curves are typically going to be smooth. Very smooth. As such, we can, with a bit of cleverness, avoid calculating power at a huge number of values (and indeed we can use this fact to reduce the number of simulations required at each point). One thing I do is take a transformation of the power that will "straighten" the power curve, at least when we're away from 0 (the inverse normal cdf is often a good choice), do cubic spline smoothing, and then transform back (beware doing that anywhere your rejection rates are exactly 0 or 1; you may want to leave those alone). If you do that well, you should be able to get away with 10-20 points or so. If you already have so many simulations that your points are accurate to the pixel, then after transforming to approximate local linearity, linear interpolation will usually be sufficient and produces smooth, highly accurate curves after you transform back. If in doubt, produce a few more points and see whether the curves are generally within a couple of standard errors of those simulated values (since if those standard errors are only on the order of a pixel, you can't actually see the difference... so the minuscule bias this might introduce really doesn't matter). You can also exploit obvious symmetries and so on. (In the t-test vs signed rank test power curve above, we exploited a relationship between the within-pair correlation ($\rho$) and the standard error of the difference to give different x-axes (above and below the plot) that then have the same power curves.) Sometimes a little fiddling is required to get it just so, but you should get very smooth, more accurate estimates of the power with such smoothing. (On the other hand, sometimes it's faster just to do more points - but in any case I would rarely do more than about 30 points, because the eye happily fills in the rest.)
3) Since we're doing Monte Carlo simulation, we can exploit various variance-reduction techniques (though keep in mind the impact on the calculated standard errors; at worst, if you can't calculate them any more, the unreduced variance will be an upper bound). For example, we can use control variates - one thing I did when comparing the power of a nonparametric test with a t-test was to compute the empirical rate for both tests and then use the error in the power for the t-test to help reduce the error in the other test (again, smoothing the result a little)... but it works better if you do it on the right scale. A number of other variance-reduction techniques can also be used. (If I remember rightly, I might have used a control variate on a transformed scale for the one-sample-t vs signed-rank test comparison above.) But often simple brute force will suffice and requires little brain effort. If all it takes is going for a cup of coffee while the full simulation runs, you might as well let it go. (No point spending half an hour working out some clever computation to save 15 minutes of a half-hour run time.)
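A minimal empirical rejection-rate curve of the kind described, for a two-sided z-test of H0: mu = 0 with known sigma = 1 (pure Python, stdlib only; the test and parameter values are illustrative):

```python
import math
import random
from statistics import NormalDist

random.seed(0)

def empirical_power(mu, n=20, alpha=0.05, n_sims=2000):
    """Empirical rejection rate of a two-sided z-test of H0: mu = 0,
    known sigma = 1, when the true mean is `mu`."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sims):
        xbar = sum(random.gauss(mu, 1.0) for _ in range(n)) / n
        if abs(xbar) * math.sqrt(n) > z_crit:
            rejections += 1
    return rejections / n_sims

# Rejection-rate "curve": at mu = 0 this estimates the significance level,
# away from 0 it estimates power; each point is a binomial proportion with
# standard error sqrt(p(1-p)/n_sims), per point 1) above
curve = {mu: empirical_power(mu) for mu in (0.0, 0.25, 0.5, 0.75)}
```

Plotting `curve` against mu (with more x-values, or the spline smoothing described above) gives the rejection-rate curve; the value at mu = 0 should sit near alpha.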
46,749
Empirical rejection rate of a wrong null hypothesis -is it the "power of the test"?
First, a critical part of the power calculation is missing from your description: the "non-zero parameter" from which you simulated your data. The power depends on this parameter as well. There are 4 parts to a power/sample-size calculation: the sample size $n$, the power $\beta$, the significance level $\alpha$, and the true underlying parameter $\mu_a$. Typically, you fix 3 of the 4 and calculate the other one. In a power calculation, you calculate $\beta$ from a given set of $n,\alpha,\mu_a$. In a sample size calculation, you calculate $n$ from a given set of $\alpha,\mu_a,\beta$. In your statistical analysis plan, you will need to generate a table reporting either $\beta$ or $n$ for a range of the other parameters. Normally, $\alpha$ is fixed at 0.05 or 0.1 depending on the literature. $\mu_a$ is of course unknown and needs to be specified by the non-statistician. For example, suppose you want to test whether a drug can effectively reduce the chance of cancer. The medical investigator should be able to tell you roughly how effective the drug is, say reducing cancer by somewhere between 5% and 10%, and by no more than 15%. That is how you pick $\mu_a$ and how the alternative hypothesis kicks in. Finally, it is valid to report the empirical rejection rate under an alternative hypothesis as the power. This is done when there is no closed-form formula for the estimate and confidence interval. For example, in a non-randomized study you may want to estimate a drug effect using regression but have to adjust for 5 covariates. In this case it is easier to do a simulation study than to work out the math. Peter
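For the simplest case (a two-sided one-sample z-test with known sigma), "fix three, solve for the fourth" has a closed form; a sketch with hypothetical inputs (the helper name and defaults are illustrative, and `power` here is the conventional probability of rejecting under the alternative):

```python
import math
from statistics import NormalDist

def sample_size(mu_a, sigma=1.0, alpha=0.05, power=0.80):
    """n for a two-sided one-sample z-test to detect a true mean mu_a
    (normal approximation; illustrative helper, not a general recipe)."""
    z = NormalDist().inv_cdf
    return math.ceil(((z(1 - alpha / 2) + z(power)) * sigma / mu_a) ** 2)

# Fix alpha, power and mu_a; solve for n.
# Halving the detectable effect roughly quadruples the required n:
n_large_effect = sample_size(mu_a=0.5)   # 32
n_small_effect = sample_size(mu_a=0.25)  # 126
```

Fixing n and solving the same relation for power instead gives the power calculation the answer describes.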
46,750
Paired samples t-test using a structural equation modeling approach
Many statistical tests can be thought of as structural equation models, and one of those is the paired samples t-test. As you say, the advantage of the SEM approach is that you can use FIML estimation - which is asymptotically equivalent to multiple imputation, but can be easier. You estimate a parameter which represents the difference between the means and test it against zero, in one of three ways: use the standard error and a z-test; use the likelihood ratio (chi-square) difference test; or use a Wald test. Here's a demonstration of each, using lavaan.
# First, generate some data
library(MASS)
library(lavaan)
d <- as.data.frame(mvrnorm(Sigma=matrix(c(1, 0.8, 0.8, 1), 2),
                           mu=c(0, 0.1), n=200, empirical=TRUE))
names(d) <- c("y1", "y2")
Here's approach 1. Fit a model where the variables are correlated and the means are estimated and named, then find the difference between the means. You get an estimate, a standard error, a z-value and a p-value.
model.1 <- "y1 ~~ y2
            y1 ~~ y1
            y2 ~~ y2
            y1 ~ m1 * 1
            y2 ~ m2 * 1
            diff := m2 - m1"
fit.1 <- lavaan(model.1, data=d)
summary(fit.1)
Here's the bit of the output we want:
                   Estimate  Std.err  Z-value  P(>|z|)
Defined parameters:
    diff              0.100    0.045    2.242    0.025
In Mplus, you'd write (note - untested code, written from memory):
Model:
  y1 with y2;
  [y1] (m1);
  [y2] (m2);
model constraint:
  new diff;
  diff = m1 - m2;
The second approach is the likelihood ratio (chi-square) difference test. This is straightforward, because our current model has zero df, so the chi-square of the constrained model is the difference between the two models, and the chi-square test gives you the p-value.
Estimator                                         ML
Minimum Function Test Statistic                4.963
Degrees of freedom                                 1
P-value (Chi-square)                           0.026
Note that this is not exactly the same p-value; the chi-square test assumes a large sample size. But it's very close (and will be down to about n = 30). This approach is less useful, because you don't get things like a standard error of the difference. However, if you want a multivariate test p-value, this approach works. In Mplus, you'd write:
model constraint:
  m1 = m2;
The third approach is the Wald test. This is the least useful. It assumes that all other parameters are unchanged by a restriction in the model (it's kind of the opposite of a modification index, or a Lagrange multiplier, in that way - it's an estimate of what the chi-square difference test would be). We use the lavTestWald() function in lavaan:
lavTestWald(fit.1, "m1 == m2")
$stat
[1] 5.025126
$df
[1] 1
$p.value
[1] 0.02498211
In Mplus, you'd replace the model constraint section with a model test section:
model test:
  m1 = m2;
Notice that the Wald test chi-square is a touch higher, and hence the p-value a touch lower, than in the likelihood ratio test (just like in logistic regression). The advantage of the Wald test is that you don't have to re-estimate the model, so it's faster, but that's rarely an issue these days.
OK, that's cool, but we wanted to use FIML. First, let's introduce some missing data to get some bias. If a person has a score on y1 which is greater than 1, they have a 33% chance of being missing on y2; if they have a score on y2 which is less than -1, they have a 33% chance of being missing on y1.
set.seed(12345)
d$y1a <- ifelse(d$y2 < -1 & runif(nrow(d)) > 0.66, NA, d$y1)
d$y2a <- ifelse(d$y1 >  1 & runif(nrow(d)) > 0.66, NA, d$y2)
A t-test estimates the difference at 0.14 - approximately a 40% overestimate. When I run lavaan (where model.1a is model.1 with y1 and y2 replaced by y1a and y2a) using:
fit.1a <- lavaan(model.1a, data=d, missing="ML")
summary(fit.1a)
Defined parameters:
    diff              0.108    0.046    2.343    0.019
The overestimate is only 8%. It's also possible to fit this as a multilevel model, which gives a full-information estimate:
library(lme4)
d.long <- as.data.frame(c(d$y1a, d$y2a))
names(d.long) <- "y"
d.long$id <- rep(1:200, 2)
d.long$x <- c(rep(0, 200), rep(1, 200))
summary(lmer(y ~ x + (1|id), data=d.long))
Fixed effects:
            Estimate Std. Error t value
(Intercept) -0.01206    0.07174  -0.168
x            0.10695    0.04589   2.330
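For comparison with the SEM parameterisation above: on complete data, the classical paired t-test is just a one-sample t-test on the within-pair differences, i.e. it tests the same m2 - m1. A pure-Python sketch with hypothetical data:

```python
import math
import statistics as st

# Hypothetical paired observations (complete data, 6 pairs)
y1 = [4.1, 5.0, 6.2, 5.5, 4.8, 5.9]
y2 = [4.6, 5.3, 6.8, 5.4, 5.2, 6.5]

# The paired t statistic is a one-sample t on the differences y2 - y1
d = [b - a for a, b in zip(y1, y2)]
n = len(d)
t = st.mean(d) / (st.stdev(d) / math.sqrt(n))  # compare to t with n - 1 df
print(round(t, 2))  # 3.56
```

The SEM route gives the same kind of mean-difference test, but with FIML it keeps partially observed pairs that the classical paired t-test would have to drop.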
Paired samples t-test using a structural equation modeling approach
Many statistical tests can be thought of as structural equation models, and one of those is the paired samples t-test. As you say, the advantage of the SEM approach is that you can use FIML estimatio
Paired samples t-test using a structural equation modeling approach Many statistical tests can be thought of as structural equation models, and one of those is the paired samples t-test. As you say, the advantage of the SEM approach is that you can use FIML estimation - which is asymptotically equivalent to multiple imputation, but can be easier. You estimate a parameter which represents the difference between the means, and test it against zero. Use the standard error and a t-test. Use the likelihood ratio (chi-square) difference test. Use a Wald test. Demonstration of each of these, using Lavaan. #First, generate some data library(MASS) library(lavaan) d <- as.data.frame(mvrnorm(S=matrix(c(1, 0.8, 0.8, 1), 2), mu=c(0, 0.1) , n=200, empirical=TRUE)) names(d) <- c("y1", "y2") Here's approach 1. Fit a model where the variables are correlated, and the means are estimated and named. Then find the difference between the means. You get an estimate, a standard error, a t-value and a p-value. model.1 <- "y1 ~~ y2 y1 ~~ y1 y2 ~~ y2 y1 ~ m1 * 1 y2 ~ m2 * 1 diff := m2 - m1 " fit.1 <- lavaan(model.1, data=d) summary(fit.1) Here's the bit of the output we want: Estimate Std.err Z-value P(>|z|) Defined parameters: diff 0.100 0.045 2.242 0.025 In Mplus, you'd write (note - untested code, written from memory): Model: y1 with y2; [y1] (m1); [y2] (m2); model constraint: new diff; diff = m1 - m2; The second approach is the likelihood ratio (chi-square) difference test. This is straightforward, because our current model has zero df, and the chi-square is the difference between the two models. Now the chi-square test gives you the p-value. Estimator ML Minimum Function Test Statistic 4.963 Degrees of freedom 1 P-value (Chi-square) 0.026 Note that this is not exactly the same p-value, the chi-square test assumes a large sample size. But it's very close (and will be down to about 30). This approach is less useful, because you don't get things like a standard error of the difference. 
However, if you want to get a multivariate test p-value, this approach works. In Mplus, you'd write: model constraint: m1 = m2; The third approach is the Wald test. This is the least useful. It assumes that all other parameters are unchanged by a restriction in the model (it's kind of the opposite of a modification index, or a Lagrange multiplier in that way - it's the estimate of what the chi-square difference test would be). We use the lavTestWald(fit.1, "m1 == m2") function in Lavaan. lavTestWald(fit.1, "m1 == m2") $stat [1] 5.025126 $df [1] 1 $p.value [1] 0.02498211 In Mplus, you'd replace the model constraint section with a model test section: model test: m1 = m2; Notice that the Wald test chi-square is a touch higher, and hence the p-value a touch lower, than in the likelihood ratio test (just like logistic regression). The advantage of the Wald test is that you don't have to reestimate the model, so it's faster, but that's rarely an issue these days. OK, that's cool, but we wanted to use FIML. First, let's introduce some missing data to get some bias. If a person has a score on y1 which is greater than 1, they have a 33% chance of being missing on y2; if they have a score on y2 which is less than -1, they have a 33% chance of being missing on y1. set.seed(12345) d$y1a <- ifelse(d$y2 < -1 & runif(nrow(d)) > 0.66, NA, d$y1) d$y2a <- ifelse(d$y1 > 1 & runif(nrow(d)) > 0.66, NA, d$y2) A t-test estimates the difference at 0.14 - approximately a 40% overestimate. When I run lavaan using (model.1a is model.1 with y1a and y2a substituted for y1 and y2): fit.1a <- lavaan(model.1a, data=d, missing="ML") summary(fit.1a) Defined parameters: diff 0.108 0.046 2.343 0.019 The overestimate is only 8%. It's also possible to fit this as a multilevel model, which gives a full information estimate: library(lme4) d.long <- as.data.frame(c(d$y1a, d$y2a)) names(d.long) <- "y" d.long$id <- rep(1:200, 2) d.long$x <- c(rep(0, 200), rep(1, 200)) summary(lmer(y ~ x + (1|id), data=d.long)) Fixed effects: Estimate Std. Error t value (Intercept) -0.01206 0.07174 -0.168 x 0.10695 0.04589 2.330
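The bias-from-missingness story above can be checked without lavaan. Here is a stdlib-Python sketch (my own illustration, not the answer's R code): it simulates the same setup - correlation 0.8, true mean difference 0.1, the same missingness rules - and compares the complete-case ("listwise deletion") estimate of the difference with the full-data estimate. The variable names are mine.

```python
import random

random.seed(1)
n = 200_000
full_diffs, cc_diffs = [], []
for _ in range(n):
    y1 = random.gauss(0, 1)
    # y2 has mean 0.1, variance 1, and correlation 0.8 with y1
    y2 = 0.1 + 0.8 * y1 + random.gauss(0, 0.6)
    full_diffs.append(y2 - y1)
    # missingness as in the answer: y1 missing (33% chance) if y2 < -1,
    # y2 missing (33% chance) if y1 > 1; keep complete cases only
    y1_missing = y2 < -1 and random.random() < 1 / 3
    y2_missing = y1 > 1 and random.random() < 1 / 3
    if not (y1_missing or y2_missing):
        cc_diffs.append(y2 - y1)

full_est = sum(full_diffs) / len(full_diffs)  # unbiased, near 0.10
cc_est = sum(cc_diffs) / len(cc_diffs)        # biased upwards
```

Dropping incomplete pairs removes mostly pairs with small differences, so the complete-case estimate lands around 0.13-0.14 - the kind of overestimate the complete-case t-test shows, most of which FIML recovers.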
46,751
log-rank test in R
The examples provided in ?survdiff are pretty clear. Using some example data included in survival, this survdiff(Surv(futime, fustat) ~ rx, data=ovarian) is testing for a difference in survival between individuals with rx = 1 and rx = 2. For your data, this will compare survival for males versus females survdiff(Surv(time, Status) ~ sex, data=myeloma) And this will compare survival for <= 65 versus > 65 (assuming age is coded as those two groups rather than as a continuous variable - survdiff treats each distinct value on the right-hand side as a group). survdiff(Surv(time, Status) ~ age, data=myeloma)
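For intuition, the statistic survdiff computes can be sketched from scratch. A hedged stdlib-Python version of the two-group log-rank test (function name and toy data are my own; the p-value uses the chi-square(1) survival function, which equals erfc(sqrt(x/2))):

```python
import math

def logrank(times1, events1, times2, events2):
    """Two-group log-rank test. events: 1 = event observed, 0 = censored.
    Returns (chi_square_statistic, p_value) on 1 degree of freedom."""
    data = ([(t, e, 0) for t, e in zip(times1, events1)] +
            [(t, e, 1) for t, e in zip(times2, events2)])
    obs1 = exp1 = var = 0.0
    for t in sorted({t for t, e, _ in data if e == 1}):  # distinct event times
        n1 = sum(1 for tt, _, g in data if tt >= t and g == 0)  # at risk, group 1
        n2 = sum(1 for tt, _, g in data if tt >= t and g == 1)  # at risk, group 2
        d1 = sum(1 for tt, e, g in data if tt == t and e == 1 and g == 0)
        d2 = sum(1 for tt, e, g in data if tt == t and e == 1 and g == 1)
        n, d = n1 + n2, d1 + d2
        obs1 += d1          # observed events in group 1
        exp1 += d * n1 / n  # expected under equal hazards
        if n > 1:
            var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    chi = (obs1 - exp1) ** 2 / var
    return chi, math.erfc(math.sqrt(chi / 2))

chi, p = logrank([1, 2], [1, 1], [3, 4], [1, 1])  # toy data, all events observed
```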
46,752
log-rank test in R
It doesn't look right. If you want to limit the analysis to just males or females, the sex == 1 or sex == 2 condition is a separate input, the subset argument. The new commands would be Males <- survdiff(Surv(time, Status) ~ Patients, data = myeloma, subset = sex == 1) Females <- survdiff(Surv(time, Status) ~ Patients, data = myeloma, subset = sex == 2) You need to specify something as the grouping variable on the right-hand side of the formula, and the only remaining variable is Patients. If you actually want to measure the effects of both sex and age together on survival, you need to be doing a stratified log-rank test. I've used the function surv_test (SurvTest in the documentation) from the coin package. As far as I could tell, it only takes one stratifying variable, but I came up with a workaround by appending several variables into a new variable and using that as the stratifying variable. There are a couple of other packages that can do the same thing, but I can't think of them right now.
46,753
Test that two normal distributions have same standard deviation
1) The approach you suggest won't have the null distribution of an actual Kolmogorov-Smirnov-statistic. You could actually do that procedure to compute a test statistic, but to find the p-value you'd have to find the distribution of the resulting test statistic (probably by simulation), or perhaps perform a permutation test. 2) If you know they're normal, the F-test for equality of variances (as mark999 suggested in comments) is probably the way to go, since it will be the likelihood ratio test. If the distributions might be non-normal, that F-test can be badly affected by that assumption failure, and you might instead look at Levene or Brown-Forsythe (also mentioned at the link), or some of the other relatively less sensitive-to-assumptions tests for equality of variance. (If the sample sizes are equal the level of the F-test is not as badly affected, though the power might still be relatively poor.) An alternative might be to keep the F-statistic but use it as the basis of a resampling based approach such as a bootstrap or permutation test. But if you're contemplating such a route, it might be wise (because of potential effects of outliers on power) to base it off one of the more robust measures of scale - and if you have near-normal data, you'd probably want to choose one with reasonably good efficiency at the normal (three mentioned at the link are $Q_n$, $S_n$ and the biweight midvariance) . You might like to search CrossValidated on some of the terms in your question and the responses here - you'll be able to find some additional discussion and advice. For example one such search turns up this, this, and this, the last of which includes this answer by Alan Forsythe. 
Allingham and Rayner[1] suggest a test based off a Wald test for the differences (rather than the ratio) which has much better level-robustness than the F-test on the heavier-tailed-than-normal distributions considered (often almost as good as the Levene on level, but erring on the conservative side while Levene tends to exceed the level) with good power at the normal (slightly beating the Levene at larger sample sizes, not generally as good at very small sample sizes). If both sample sizes are above 25 or so, it's worth considering this test. Additional searches here will turn up one or two other possibilities besides the ones I mentioned. [1]: Allingham, D. & J.C.W. Rayner (2012), "Testing Equality of Variances for Multiple Univariate Normal Populations", Journal of Statistical Theory and Practice, 6:3, 524-535 (there's a 2011 conference paper version here)
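The permutation route mentioned above can be sketched in a few lines. This stdlib-Python example (my own illustration) uses the two-sided variance ratio of median-centred samples as the statistic - centring is a Levene-flavoured choice so that a pure location difference does not masquerade as a spread difference:

```python
import random
import statistics

def perm_var_test(x, y, n_perm=2000, seed=0):
    """Permutation test for equal spread: permute group labels of the
    median-centred values and compare variance ratios."""
    rng = random.Random(seed)
    xc = [v - statistics.median(x) for v in x]
    yc = [v - statistics.median(y) for v in y]

    def stat(a, b):
        r = statistics.pvariance(a) / statistics.pvariance(b)
        return max(r, 1 / r)  # two-sided: extreme in either direction

    observed = stat(xc, yc)
    pooled = xc + yc
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat(pooled[:len(xc)], pooled[len(xc):]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one so p is never exactly 0

rng = random.Random(42)
a = [rng.gauss(0, 1) for _ in range(30)]
b = [rng.gauss(0, 5) for _ in range(30)]
p = perm_var_test(a, b)  # spreads differ by a factor of 5, so p is tiny
```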
46,754
Distribution function of maximum of n iid standard uniform random variables where n is poisson distributed
The calculations in the question look correct, but care is needed because the distribution of $V_\mu$ is not continuous. (I will use $\mu$ instead of $m$ throughout.) From first definitions we may find the distribution function (CDF) of $V_\mu$ is $$F_\mu(x) = \Pr(V_\mu \le x) = \sum_{n=0}^\infty x^n \Pr(N_\mu = n) = e^{\mu(x-1)}$$ provided $0 \le x \le 1$. For $x \gt 1$, $F_\mu(x) = 1$ of course. But for $x \lt 0$, necessarily $F_\mu(x) = 0$. Its graph when $\mu=1$ shows a jump of height $e^{-\mu}$ at $x=0$. The moment generating function, $\phi_\mu(t) = \mathbb{E}(\exp(t V_\mu))$, must be computed with similar care near zero. It can be obtained as a Lebesgue-Stieltjes integral, $$\phi_\mu(t) = \int_\mathbb{R} e^{t x} dF_{\mu}(x)$$ via integration by parts as $$\phi_\mu(t) = e^{t x} F_\mu(x) \vert_{-\infty}^1 - \int_0^1 t e^{t x} e^{\mu(x-1)}dx = e^t - t\frac{e^t - e^{-\mu}}{t+\mu}.$$ As a check, its Maclaurin series begins $$\phi_\mu(t) = 1 + \left(\frac{\mu-1+e^{-\mu}}{\mu}\right) t + \left(\frac{\mu^2 - 2\mu + 2 - 2e^{-\mu}}{\mu^2}\right)t^2/2 + \cdots$$ The constant term of $1$ shows the total probability mass is $1$. The next two terms will be useful in addressing the rest of the questions.
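Both the closed form $F_\mu(x) = e^{\mu(x-1)}$ and the atom of size $e^{-\mu}$ at zero are easy to confirm by simulation. A stdlib-Python check (Knuth's multiplicative method supplies the Poisson draws, since the standard library has no Poisson generator):

```python
import math
import random

random.seed(7)

def poisson(mu):
    """Knuth's multiplicative method for a Poisson(mu) draw."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p < limit:
            return k
        k += 1

mu, trials = 1.0, 200_000
# V is the max of N uniforms, defined as 0 when N = 0
draws = [max((random.random() for _ in range(poisson(mu))), default=0.0)
         for _ in range(trials)]

emp_cdf_half = sum(v <= 0.5 for v in draws) / trials  # estimates F(0.5)
atom_at_zero = sum(v == 0.0 for v in draws) / trials  # estimates exp(-mu)
```

With $\mu = 1$, $F(0.5) = e^{-0.5} \approx 0.607$ and the jump is $e^{-1} \approx 0.368$; the empirical frequencies land within Monte Carlo error of both.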
46,755
Distribution function of maximum of n iid standard uniform random variables where n is poisson distributed
Notice that you want to find the CDF here. Going off what you had, for $0 \le x \le 1$ we have $$P(V_{m}\le x)=\sum_{n=0}^{\infty}P(V_{m}\le x\mid N_{m}=n)\,f_{N}(n)=\sum_{n=0}^{\infty}e^{-m}\frac{(xm)^{n}}{n!}=e^{-m}e^{xm}=e^{m(x-1)}$$
46,756
Clustering binary categorical data
A simple approach is to fit a mixture of "Naive Bayes" models using EM. The structure of the mixture model is $P(x_{i1},\ldots,x_{in}) = \sum_k P(y_i=k) \prod_j P(x_{ij}|y_i=k)$. Here, $i$ indexes the data points, each of which is a vector of $n$ binary features. $y_i$ is the index of the cluster to which data point $i$ belongs. $P(y_i=k)$ is the (learned) probability of a point being generated by cluster $k$. $P(x_{ij}|y_i=k)$ is the (learned) probability of generating the value of feature $j$ for points belonging to class $k$. This model treats the binary values in each cluster as independent conditioned on their membership in the cluster. This is the discrete analogue of fitting a (diagonal) Gaussian mixture model. My former student Tony Fountain and I applied this kind of model to cluster patterns of die failure on silicon wafers. This model is an instance of what is known variously as Latent Class Analysis or Latent Trait Analysis. A good overview of these techniques can be found at this website by John Uebersax in which he discusses a variety of Latent Class models including Probit Discrete Latent Trait Models. The Probit model uses a latent multivariate Gaussian distribution to model each cluster, which can capture pairwise correlations among the binary responses within the cluster. I believe Uebersax provides a software package, but I have not tried it.
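A minimal EM fit of this Bernoulli-mixture ("latent class") model can be sketched in stdlib Python. This is my own illustration, not Uebersax's software: two clusters, a crude data-point initialisation, and parameters clamped away from 0 and 1 for numerical safety.

```python
import math

def em_bernoulli_mixture(data, k=2, iters=50):
    """data: list of equal-length 0/1 vectors. Returns mixing weights,
    per-cluster Bernoulli parameters, and responsibilities P(cluster | x)."""
    n, m = len(data), len(data[0])
    # initialise each cluster's parameters from one data point, softened
    probs = [[0.25 + 0.5 * data[c * (n // k)][j] for j in range(m)]
             for c in range(k)]
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibilities via log-probabilities
        resp = []
        for x in data:
            logp = [math.log(weights[c]) +
                    sum(math.log(probs[c][j] if x[j] else 1 - probs[c][j])
                        for j in range(m))
                    for c in range(k)]
            mx = max(logp)
            w = [math.exp(v - mx) for v in logp]
            s = sum(w)
            resp.append([v / s for v in w])
        # M-step: re-estimate weights and Bernoulli parameters
        for c in range(k):
            rc = sum(r[c] for r in resp)
            weights[c] = rc / n
            for j in range(m):
                p = sum(r[c] * x[j] for r, x in zip(resp, data)) / rc
                probs[c][j] = min(max(p, 0.01), 0.99)  # clamp away from 0/1
    return weights, probs, resp

# two obvious clusters of binary profiles
data = [[1, 1, 1, 0, 0]] * 5 + [[0, 0, 0, 1, 1]] * 5
weights, probs, resp = em_bernoulli_mixture(data)
labels = [max(range(2), key=lambda c: r[c]) for r in resp]
```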
46,757
Clustering binary categorical data
What is your similarity? First try to figure out what a meaningful similarity function is for your use case. This is very much use-case dependent, so there is no one-size-fits-all solution. Once you have a working notion of similarity, try hierarchical clustering or DBSCAN with this similarity. Note that having a working similarity is a requirement for these algorithms to yield good results. Think outside the box of vectors, and think in your data world. Mathematically, you have a vector space. But that isn't what your data means. PCA will maximize variance in this vector space, but what does this mean? Instead, choose approaches that mean something for your data. For example, frequent itemsets and association rules could mean much more on your data. Your data probably isn't random numbers, but there is some reality, some semantics attached to it. You need to get this tie to reality into your analysis.
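For presence/absence data, one candidate similarity worth considering is Jaccard, which deliberately ignores joint absences (two records are not similar merely because both lack a feature). A hypothetical stdlib-Python sketch:

```python
def jaccard(a, b):
    """Jaccard similarity of two equal-length 0/1 vectors:
    shared 'on' positions over positions 'on' in either (1.0 if both empty)."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return both / either if either else 1.0

s = jaccard([1, 1, 0, 0], [1, 0, 1, 0])  # one shared of three used -> 1/3
```

1 - jaccard(a, b) is a metric, so it plugs straight into hierarchical clustering or DBSCAN via a precomputed distance matrix.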
46,758
Clustering binary categorical data
Affinity propagation clustering could be an interesting method for you to try. But it is more important to pick a binary metric that matches your requirements. If you have an appropriate similarity metric, it would also be helpful to visualize the data with MDS methods (or non-linear dimensionality reduction) in 2D or 3D space.
46,759
Clustering binary categorical data
I used hierarchical clustering with cosine distance for a similar problem and it worked well. If they have no services in common the distance will be 1. If they have exactly the same services the distance will be 0.
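A quick sketch of that distance in stdlib Python (my own illustration; for 0/1 vectors the squared norm is just the count of ones, and the endpoints behave exactly as described):

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity for two 0/1 vectors (assumes neither is all zeros)."""
    shared = sum(x * y for x, y in zip(a, b))
    return 1 - shared / math.sqrt(sum(a) * sum(b))  # sum == squared norm for 0/1

d_same = cosine_distance([1, 0, 1, 1], [1, 0, 1, 1])  # identical services
d_none = cosine_distance([1, 1, 0, 0], [0, 0, 1, 1])  # no services in common
```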
46,760
Can First Differencing Cause Negative Serial Correlation
Short answer is yes, differencing will introduce a negative autocorrelation into the differenced series in most situations. Assuming a mean centered variable to make the notation a bit simpler, the covariance between the differenced series can be represented as: $$Cov(\Delta X_t,\Delta X_{t-1}) = E[\Delta X_t \cdot \Delta X_{t-1}]$$ Where $\Delta X_t = X_t - X_{t-1}$ $\Delta X_{t-1} = X_{t-1} - X_{t-2}$ Breaking this down into the original variables, we then have: \begin{align} E[\Delta X_t \cdot \Delta X_{t-1}] &= E[(X_t - X_{t-1}) \cdot (X_{t-1} - X_{t-2}) ] \\ &= E[X_tX_{t-1} - X_tX_{t-2} - X_{t-1}X_{t-1} + X_{t-1}X_{t-2}] \end{align} The multiplications are then just variances and covariances of the levels: $$Cov(X_t,X_{t-1}) - Cov(X_t,X_{t-2}) - Var(X_{t-1}) + Cov(X_{t-1},X_{t-2})$$ So here we can see that many different situations will result in negative autocorrelations of the differenced series - basically only in the case that the auto-correlations of the levels are really large (e.g. an integrated series) will the differences have only a small negative auto-correlation. With random data the autocorrelation of the differences will be approximately -0.5: with random data the covariance terms among the levels are all 0, so the numerator is just $-Var(X_{t-1})$, while the denominator, $Var(\Delta X_t)$, is $Var(X_t) + Var(X_{t-1})$. This is typically called over-differencing. The solution is to not over-difference the data to begin with.
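The -0.5 figure for differenced white noise is quick to verify by simulation (stdlib Python, my own sketch):

```python
import random

random.seed(0)
x = [random.gauss(0, 1) for _ in range(100_000)]  # i.i.d. levels: no autocorrelation
dx = [b - a for a, b in zip(x, x[1:])]            # first differences

def lag1_autocorr(s):
    """Sample lag-1 autocorrelation."""
    mean = sum(s) / len(s)
    num = sum((s[t] - mean) * (s[t - 1] - mean) for t in range(1, len(s)))
    den = sum((v - mean) ** 2 for v in s)
    return num / den

r = lag1_autocorr(dx)  # close to -0.5: the hallmark of over-differencing
```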
46,761
Latent Dirichlet Allocation in PyMC
When defining w, the p parameter must be a list of doubles, not a list of lists of doubles. This means you have to define a w variable for each word in each document. Also it helps to 'complete' the Dirichlet variables using the CompletedDirichlet function. Here is the working code: import numpy as np import pymc as pm K = 2 # number of topics V = 4 # number of words D = 3 # number of documents data = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0]]) alpha = np.ones(K) beta = np.ones(V) theta = pm.Container([pm.CompletedDirichlet("theta_%s" % i, pm.Dirichlet("ptheta_%s" % i, theta=alpha)) for i in range(D)]) phi = pm.Container([pm.CompletedDirichlet("phi_%s" % k, pm.Dirichlet("pphi_%s" % k, theta=beta)) for k in range(K)]) Wd = [len(doc) for doc in data] z = pm.Container([pm.Categorical('z_%i' % d, p = theta[d], size=Wd[d], value=np.random.randint(K, size=Wd[d])) for d in range(D)]) # cannot use p=phi[z[d][i]] here since phi is an ordinary list while z[d][i] is stochastic w = pm.Container([pm.Categorical("w_%i_%i" % (d,i), p = pm.Lambda('phi_z_%i_%i' % (d,i), lambda z=z[d][i], phi=phi: phi[z]), value=data[d][i], observed=True) for d in range(D) for i in range(Wd[d])]) model = pm.Model([theta, phi, z, w]) mcmc = pm.MCMC(model) mcmc.sample(100)
46,762
Best feature selection method for naive Bayes classification
There are two different routes you can take. The key word is 'relevance', and how you interpret it. 1) You can use a Chi-Squared test or Mutual information for feature relevance extraction as explained in detail on this link. In a nutshell, Mutual information measures how much information the presence or absence of a particular term contributes to making the correct classification decision. On the other hand, you can use the Chi-Squared test to check whether the occurrence of a specific variable and the occurrence of a specific class are independent. Implementing these in R should be straightforward. 2) Alternatively, you can adopt a wrapper feature selection strategy, where the primary goal is constructing and selecting subsets of features that are useful to build an accurate classifier. This contrasts with 1, where the goal is finding or ranking all potentially relevant variables. Note that selecting the most relevant variables is usually suboptimal for boosting the accuracy of your classifier, particularly if the variables are redundant. Conversely, a subset of useful variables may exclude many redundant, but relevant, variables.
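For route 1, both scores fall straight out of the 2x2 contingency table of a binary feature against the class. A stdlib-Python sketch (my own function names; mutual information is in nats):

```python
import math

def chi2_and_mi(feature, label):
    """Chi-squared statistic and mutual information for a binary
    feature vs. a binary class, from their 2x2 contingency table."""
    n = len(feature)
    counts = {(f, c): 0 for f in (0, 1) for c in (0, 1)}
    for f, c in zip(feature, label):
        counts[(f, c)] += 1
    chi2 = mi = 0.0
    for f in (0, 1):
        for c in (0, 1):
            pf = sum(counts[(f, cc)] for cc in (0, 1)) / n  # P(feature = f)
            pc = sum(counts[(ff, c)] for ff in (0, 1)) / n  # P(class = c)
            obs = counts[(f, c)]
            expected = n * pf * pc
            if expected:
                chi2 += (obs - expected) ** 2 / expected
            if obs:
                mi += (obs / n) * math.log((obs / n) / (pf * pc))
    return chi2, mi

# a feature that perfectly predicts the class: chi2 = n, MI = log 2
chi2, mi = chi2_and_mi([1, 1, 1, 0, 0, 0], [1, 1, 1, 0, 0, 0])
```

Rank features by either score and keep the top ones - keeping in mind the caveat above that individually relevant features can still be mutually redundant.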
46,763
Best feature selection method for naive Bayes classification
The R package caret (**C**lassification **A**nd **R**Egression **T**raining) has built-in feature selection tools and supports naive Bayes. I figured I'd post this as an answer instead of a comment because I'm more confident about this one, having used it myself in the past.
46,764
Limiting expression for Power law tail index from a quantile function?
From the Chain Rule, $$1 = \frac{d}{dq}\left(q\right) = \frac{d}{dq}\left(F(Q(q))\right) = F^\prime(Q(q)) Q^\prime(q).$$ Letting $0 \lt q \lt 1$, substituting $z = Q(q)$ into the limiting expression for $\alpha$, and solving the preceding equation for $F^\prime(Q(q))$ in terms of $Q^\prime(q)$ gives $$\alpha = \lim_{z\to\infty}\left(\frac{F'(z)}{1 - F(z)}\right) = \lim_{Q(q)\to\infty}\left(\frac{F'(Q(q))}{1 - F(Q(q))}\right) =\lim_{q\to 1^{-}}\left(\frac{1/Q^{\prime}(q)}{1 - q}\right).$$ Both the numerator and denominator approach zero. Applying L'Hopital's Rule gives the simple useful formula $$\alpha =\lim_{q\to 1^{-}}\left(\frac{\frac{d}{dq}\left(1/Q^{\prime}(q)\right)}{\frac{d}{dq}(1 - q)}\right) = \lim_{q\to 1^{-}}\frac{Q^{\prime\prime}(q)}{\left(Q^\prime(q)\right)^2}.$$ For a Pareto distribution written on the log scale (so that $F(z) = 1 - \exp(-\alpha z)$, exponential in $z$), we have $Q(q) = -\frac{1}{\alpha}\log(1-q)$ and differentiating yields $Q^\prime(q) = \frac{1}{\alpha}\frac{1}{1-q}$, $Q^{\prime\prime}(q) = \frac{1}{\alpha}\frac{1}{(1-q)^2}$. We obtain $$\alpha = \lim_{q\to 1^{-}}\frac{Q^{\prime\prime}(q)}{\left(Q^\prime(q)\right)^2} = \lim_{q\to 1^{-}}\frac{\frac{1}{\alpha}\frac{1}{(1-q)^2}}{\left(\frac{1}{\alpha}\frac{1}{1-q}\right)^2} = \lim_{q\to 1^{-}} \alpha,$$ which checks out beautifully.
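A quick numeric check of the formula (not part of the original derivation): using the quantile function $Q(q) = -\log(1-q)/\alpha$ from the worked example and approximating the derivatives by finite differences, the ratio $Q''(q)/Q'(q)^2$ should sit at $\alpha$ for $q$ near $1$.

```python
# Numeric sanity check of alpha = lim_{q->1} Q''(q) / Q'(q)^2 for the
# distribution F(z) = 1 - exp(-alpha z), whose quantile function is
# Q(q) = -log(1 - q) / alpha.
import math

alpha = 2.5
Q = lambda q: -math.log(1.0 - q) / alpha

h = 1e-6
for q in (0.9, 0.99, 0.999):
    Qp  = (Q(q + h) - Q(q - h)) / (2 * h)          # central difference for Q'
    Qpp = (Q(q + h) - 2 * Q(q) + Q(q - h)) / h**2  # central difference for Q''
    print(q, Qpp / Qp**2)   # close to alpha = 2.5 for every q
```

For this particular distribution the ratio equals $\alpha$ exactly at every $q$, so the limit is trivial, matching the algebra above.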
46,765
Heteroscedasticity in residuals vs. fitted plot
Your response variable isn't really continuous. It is presumably discrete (you can't buy .5 ounces, and moreover, beers only come in certain ounce sizes). In addition, no one can buy less than 0 ounces (you can clearly see the floor effect in your top, untransformed, residual plot). As a result, using an OLS regression (which assumes normal residuals) is likely to be inappropriate. You should probably try to use Poisson regression. In fact, a zero-inflated Poisson, a negative binomial, or a zero-inflated negative binomial model is more likely what you will end up needing.
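A minimal sketch of the Poisson-regression suggestion, here using scikit-learn's PoissonRegressor on simulated count data (the data-generating numbers are invented; the zero-inflated and negative-binomial variants mentioned above would need a package such as statsmodels instead):

```python
# Poisson regression (log link) on simulated non-negative counts.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=500)
mu = np.exp(0.5 + 1.2 * x)            # true log-linear mean
y = rng.poisson(mu)                   # integer "ounces bought", never below 0

model = PoissonRegressor(alpha=0, max_iter=300).fit(x.reshape(-1, 1), y)
print(model.intercept_, model.coef_)  # should land near (0.5, 1.2)
```

Unlike OLS on the raw outcome, the Poisson model respects both the discreteness and the floor at zero.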
46,766
Heteroscedasticity in residuals vs. fitted plot
Not only is your variable apparently discrete, it clearly shows lack of fit at the left and right ends.

[Figure: discreteness (red arrows) and lack of fit (green ellipses) apparent in the residual plot.]

You can't properly assess heteroskedasticity with a test statistic that assumes that the model for the mean is correct ... when it plainly isn't. Further, the fact that the t-value is large is unsurprising, since the sample size is huge. [A large t-statistic isn't saying "the heteroskedasticity is dramatic", it's saying "the sample size is big, so the standard error is tiny". The impact on your inference is more related to something like an effect size.] There may be hetero in that plot, but it's not terribly severe; there are more important issues to deal with first. I'd suggest considering a gamma GLM rather than fitting the logs with a linear model (presuming there are no exact zeros). Taking logs tends to make the discreteness in the low end perhaps "loom larger" than it would with a model on the original scale. You should then work on the lack-of-fit problem, and then assess the degree of the hetero issue, but don't rely on a test statistic to assess the size/importance of it.
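A sketch of the gamma-GLM suggestion, using scikit-learn's GammaRegressor (log link) on simulated positive data rather than the questioner's; every number here is illustrative:

```python
# Gamma GLM with a log link: models the mean on the original (ounce) scale
# without taking logs of the response, on simulated positive data.
import numpy as np
from sklearn.linear_model import GammaRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=1000)
mu = np.exp(1.0 + 0.7 * x)                 # mean on the original scale
shape = 5.0
y = rng.gamma(shape, scale=mu / shape)     # gamma noise: mean mu, constant CV

glm = GammaRegressor(alpha=0, max_iter=300).fit(x.reshape(-1, 1), y)
print(glm.intercept_, glm.coef_)           # near (1.0, 0.7)
```

The gamma family with a log link keeps the multiplicative error structure a log-linear model assumes, but fits the mean on the original scale, avoiding the problems taking logs causes at the discrete low end.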
46,767
ordinal regression or Spearman correlation?
Yes, CBS sounds ordinal. If you're only interested in comparing bivariate relationships, comparison of Spearman's $\rho$s seems fair enough to me. However, an ordinal regression model would allow you to estimate the independent relationships of BMI and stature while controlling the effects of each other predictor. You could also test whether these factors moderate each other by including an interaction term. For more on that, see "How to test whether a regression coefficient is moderated by a grouping variable?" and "What is the correct way to test for significant differences between coefficients?" The method described in the latter question is correct for your purposes in multiple regression (though it wasn't in that OP's case). If you want to calculate the probability that your two predictors' slope coefficients would differ by at least as much as they do in your sample if (1) you were to collect another equivalent sample from the exact same population, and if (2) your predictors are actually equally related to CBS, then you can do this with a z-test: $$Z = \frac{b_1 - b_2}{\sqrt{SE_{b_1}^2 + SE_{b_2}^2}}$$ You can convert the resultant z statistic to a probability using pnorm in R or with other methods described here: "How to deal with Z-score greater than 3?" By the way, if you have continuous height data, you probably ought to reconsider dichotomizing stature into short and non-short. This wastes information that could improve your regression model and correlation estimates; you would probably get smaller standard errors by entering both BMI and height data as originally measured (if you have it) as predictors in multiple ordinal regression of CBS.
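The z-test above takes one line to compute; a sketch with made-up slope estimates and standard errors (scipy supplies the normal CDF in place of pnorm):

```python
# The z-test for a difference between two slope coefficients.
# The estimates and standard errors below are invented for illustration.
import math
from scipy.stats import norm

b1, se1 = 0.80, 0.15   # e.g. slope for BMI
b2, se2 = 0.35, 0.12   # e.g. slope for stature

z = (b1 - b2) / math.sqrt(se1**2 + se2**2)
p = 2 * (1 - norm.cdf(abs(z)))   # two-sided p-value
print(z, p)
```

The two-sided p-value answers exactly the question posed: how often a gap at least this large would arise under conditions (1) and (2).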
46,768
Can ICC values be negative?
You might want to read the original paper to get a sense of what your ICC statistic is doing, how it is constructed and what it means. Apparently, the ICC can go negative, since the numerator involves a difference between two quantities. It probably means that you should use a different measure. I would estimate the between judge and within judge variation with a mixed effects model and look to see if there is a meaningful difference between the judges. In my experience, when the math gives you something stupid (like a negative estimate for something that should be positive), it is because one is trying to estimate something that does not exist, or that makes no sense, or that the data do not support.
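A tiny numeric illustration of how a one-way ICC goes negative: when judges disagree strongly within targets while the target means coincide, the between-target mean square falls below the within-target one and the numerator MSB minus MSW turns negative. The ratings below are invented.

```python
# One-way ICC, ICC(1) = (MSB - MSW) / (MSB + (k-1) * MSW), on made-up ratings
# where the two targets have identical means but the judges disagree wildly.
import numpy as np

# rows = targets, columns = judges' ratings
ratings = np.array([
    [1.0, 5.0],
    [5.0, 1.0],
])
n, k = ratings.shape
grand = ratings.mean()
row_means = ratings.mean(axis=1)

msb = k * np.sum((row_means - grand) ** 2) / (n - 1)               # between targets
msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))  # within targets

icc1 = (msb - msw) / (msb + (k - 1) * msw)
print(icc1)   # negative: here MSB = 0 while MSW is large
```

In this extreme case ICC(1) hits its lower bound of $-1/(k-1) = -1$, underscoring the answer's point: a negative estimate is a sign the quantity you are trying to estimate may not make sense for these data.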
46,769
How to learn the points inside a square with its boundary?
Your specification of the square as having sides parallel to the boundary makes the problem relatively straightforward - as long as you don't require it to be written as a single SVM, but can settle for something like it. One very simple estimate of the boundary of the square is, for those points which have $+1$ labels, to simply take the min and max of $x$ and the min and max of $y$. (It will be too small, of course, but if there are lots of points, not by much.) If you then exclude the points with "$0$" labels above the top and below the bottom of that square, you can use SVM or something similar to SVM on the left and the right half of the $x$'s (it's only a 1-D problem for each side! Easy.) -- so on the right side, for example, you just need the smallest $x$ with a "$0$"-label above the largest "$1$"-label $x$. (For your next approximation of the square, you could split the difference between the biggest $1$-x and the next biggest $0$-x. You can then repeat the procedure for the left, top and bottom boundaries. This gives a much better estimate of the boundary than the initial one.) Of course this won't be exactly square (indeed to this point the algorithm is really for a rectangle with sides parallel to the axes), but the 4 sets of green and pink lines (not all shown) give you boundaries inside which you want to fit the square. So from there it's a matter of expanding the sides of the blue on the "narrow" direction of the almost-square rectangle and shrinking the "wide" direction, until it's square.
Note that you don't shrink/expand the sides evenly; if you want to be SVM-ish, you'd do it so as to even up the amount of remaining "wiggle room" (distance between blue and green or pink, whichever is closer) they have (that is, you'd move the side with the biggest gap between green and pink first, until you reach the size of the next-smallest gap between blue and either green or pink, then change both of those simultaneously until you hit the next smallest gap, and so on). (With a bit of thinking most of this step can be done very simply.) So this does some initial processing ($\cal{O}(n)$) to find the inner and outer boxes and the blue rectangle - essentially four trivial "SVM"s, followed by a simple set of expansion/shrinkage calculations to find an actual square. If there really is a square that perfectly separates the "$1$" and "$0$" cases, that should work quite well and give a nicely SVM-like solution. (If there's not perfect separation, you may need to actually adapt this further in order to minimize misclassification.)
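The first two steps above (the min/max box around the positive points, then the split-the-difference refinement of one side) can be sketched in a few lines; the data and variable names here are illustrative, not from the answer:

```python
# Axis-aligned box learner, first steps: inner box from positive points,
# then refine the right edge halfway to the nearest negative point beyond it.
# True square and sample are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(400, 2))
inside = (2 <= X[:, 0]) & (X[:, 0] <= 6) & (3 <= X[:, 1]) & (X[:, 1] <= 7)
y = inside.astype(int)          # true square: [2, 6] x [3, 7]

pos = X[y == 1]
x_min, y_min = pos.min(axis=0)  # inner box from the "+1" points (slightly small)
x_max, y_max = pos.max(axis=0)

# refine the right boundary: halfway between the largest positive x and the
# smallest "0"-labelled x to its right, within the square's y-band
band = (y_min <= X[:, 1]) & (X[:, 1] <= y_max)
neg_right = X[(y == 0) & band & (X[:, 0] > x_max), 0]
right = (x_max + neg_right.min()) / 2

print(x_min, x_max, right)      # refined edge straddles the true edge at 6
```

The same refinement applies to the other three sides; the final squeeze-to-a-square step is then a matter of balancing the remaining "wiggle room" as described above.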
46,770
How to learn the points inside a square with its boundary?
SVM is a linear classifier, which means that it can only learn to decide which side of a straight line the points should go. To make a square you obviously need four straight lines, so the short answer is no, a linear SVM cannot learn a square. However, you can apply a kernel function to your data to map it into a higher dimension. If you choose the right kernel, there could be a high-dimensional line that corresponds roughly to a square in the lower-dimensional space. Think of it like this: your points lie on a piece of paper and the SVM is a pair of scissors that gets to make one straight cut. You want to capture just the points in the square with that one cut. If the page is flat you can't do it. But if you pinch the paper in the middle of the square so that part is raised, you can cut below the pinch and, with a single cut, select those points.
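A short scikit-learn illustration of the kernel point (the square and sample sizes are made up): a linear SVM stalls near the majority-class rate, while an RBF kernel carves out the square.

```python
# Linear vs RBF-kernel SVM on points labelled by a centered square.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(1000, 2))
y = ((np.abs(X[:, 0]) <= 1) & (np.abs(X[:, 1]) <= 1)).astype(int)  # unit square

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf", C=10, gamma=1.0).fit(X, y)

print("linear accuracy:", linear.score(X, y))   # stuck near the base rate (~0.75)
print("rbf accuracy:   ", rbf.score(X, y))      # close to 1
```

The RBF kernel is the "pinch in the paper": in the induced feature space a single hyperplane separates the square's interior from everything outside it.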
46,771
Alternatives to bag-of-words based classifiers for text classification?
I suggest two alternatives that have been extensively used in text classification: 1) Using Latent Semantic Indexing, which consists of applying Singular Value Decomposition to the document-term matrix in order to identify relevant (concept) components; in other words, it aims to group words into classes that represent concepts or semantic fields. 2) Using a lexical database like WordNet or BabelNet to index the documents by concepts, allowing semantic-level comparison of documents. This approach is not statistical, and it faces a problem with Word Sense Disambiguation. Both methods can be applied before training. Neither of them aims at capturing word order.
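A minimal LSI sketch with scikit-learn, a TF-IDF document-term matrix followed by a truncated SVD; the four-document corpus is invented:

```python
# Latent Semantic Indexing: TF-IDF weighting, then truncated SVD projects the
# documents into a low-dimensional "concept" space usable by any classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "the cat sat on the mat",
    "a cat and a kitten purred",
    "stocks fell as markets closed",
    "the market rallied on earnings",
]
X = TfidfVectorizer().fit_transform(docs)   # document x term matrix
lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)                    # documents in 2-D concept space
print(Z.shape)                              # (4, 2)
```

The reduced matrix Z replaces the raw bag-of-words features when training the classifier, so words that co-occur across documents are collapsed into shared components.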
46,772
Alternatives to bag-of-words based classifiers for text classification?
Continuous word representations learned with neural networks are widely used to represent words. Surprisingly, they have the ability to model the semantic context of words, i.e. to detect similar words and place them near each other in feature space. You can use the word2vec tool to process a large text corpus and create word vectors. It is worth noting that for a specific domain you need to use a domain-specific corpus when constructing the word vectors.
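word2vec itself is run through the gensim/word2vec tool mentioned above; the pure-NumPy toy below only illustrates the underlying idea, that words appearing in similar contexts end up with nearby vectors, via sentence-level co-occurrence counts and cosine similarity (corpus invented):

```python
# Toy distributional word vectors: rows of a co-occurrence matrix.
# Words sharing contexts ("beer"/"juice") get high cosine similarity.
import numpy as np

corpus = [
    "i drink cold beer", "i drink cold juice",
    "i read a book", "i read a paper",
]
sentences = [s.split() for s in corpus]
vocab = sorted({w for s in sentences for w in s})
ix = {w: i for i, w in enumerate(vocab)}

C = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for a in s:
        for b in s:
            if a != b:
                C[ix[a], ix[b]] += 1    # sentence-level co-occurrence counts

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# "beer" and "juice" share all their contexts; "beer" and "book" share few
print(cos(C[ix["beer"]], C[ix["juice"]]), cos(C[ix["beer"]], C[ix["book"]]))
```

Real word2vec learns dense vectors with a shallow neural network over a large corpus, but the end product is used the same way: nearest neighbors in the vector space stand in for semantic similarity.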
46,773
Alternatives to bag-of-words based classifiers for text classification?
You should take a look at log-linear models; it's definitely a valid choice in your situation.
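In NLP, "log-linear model" usually means a maximum-entropy / multinomial logistic classifier over text features; a minimal scikit-learn sketch on an invented toy corpus:

```python
# A log-linear (maximum-entropy) text classifier: count features fed into
# logistic regression. Texts and labels are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["win cash now", "cheap pills online",
         "meeting at noon", "quarterly report attached"]
labels = ["spam", "spam", "ham", "ham"]

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["cash pills"]))
```

Unlike naive Bayes, a log-linear model does not assume conditional independence of the features, which is part of why it is a common alternative.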
46,774
Alternatives to bag-of-words based classifiers for text classification?
API models exist which can achieve this: https://askmacgyver.com/explore/program/universal-topic-classifier/5S2Q5x8K

It takes an array of categories or "bag of words" and a text string to analyze. It then returns a sorted percentage of relevance for the provided keywords.

Input Data

{
  "text": "this bank provides an excellent service to its clients when opening a new account and with other operations",
  "classes": [
    "bank account",
    "online banking",
    "technical support",
    "mortgage",
    "retirement savings",
    "mutual funds",
    "student loan",
    "credit card",
    "financial news"
  ],
  "minCutOff": "0.001"
}

API Response

{
  "bank account": 0.6448822158372491,
  "technical support": 0.40099627067600924,
  "financial news": 0.28635987039897565,
  "mortgage": 0.2676284175575462,
  "student loan": 0.257628495744561,
  "online banking": 0.32395217514082025,
  "credit card": 0.2144582134037077,
  "mutual funds": 0.09250890827081894,
  "retirement savings": 0.13690496892541437
}
46,775
Feasible Generalized Least Square in R
Estimating Regression Models with Multiplicative Heteroscedasticity

The model that you have described is discussed in Harvey (1976). Let me rewrite the model
$$ \begin{align} \mathbb{E}(Y_i \mid \mathbf{X}_i, \mathbf{Z}_i) &= \mathbf{X}_i'\boldsymbol{\beta} \\ \mathbb{V}(Y_i \mid \mathbf{X}_i, \mathbf{Z}_i) &\equiv \sigma^2_i \\ &= \exp(\mathbf{Z}_i'\boldsymbol{\alpha}) \end{align} $$
Note that it is possible that $\mathbf{X}_i$ and $\mathbf{Z}_i$ have common elements. The conditional mean equation can be rewritten equivalently as
$$ \begin{align} Y_i &= \mathbf{X}_i'\boldsymbol{\beta} + \varepsilon_i \\ \mathbb{E}(\varepsilon_i \mid \mathbf{X}_i, \mathbf{Z}_i) &= 0 \end{align} $$

Two-step estimation

The main aim of Harvey (1976) is to provide (efficient) estimates of $\boldsymbol{\alpha}$, rather than to use that estimate to compute a WLS estimator. However, once the parameters $\boldsymbol{\alpha}$ are computed, they can be so used. The two-step estimator that you have described follows the procedure:
1. Compute the OLS estimator $\hat{\boldsymbol{\beta}}$ and the squared OLS residuals $\hat{\varepsilon}_i^2$,
2. Compute the estimates $\hat{\boldsymbol{\alpha}}$ from the regression
$$ \log \hat{\varepsilon}_i^2 = \mathbf{Z}_i'\boldsymbol{\alpha} + \nu_i $$

Maximum likelihood estimation

The MLE is claimed to be up to 60% more efficient than the two-step estimator above. There are some other advantages that are apparent to me, including that the MLE would be the pseudo-maximum likelihood estimator under distributional misspecification, and that there is no need to recompute the WLS after computing $\hat{\boldsymbol{\alpha}}$. However, this does not seem to be borne out by my calculations below.
Under conditional normality of the response, the likelihood is very simple to write down and optimize
$$ \begin{align} \log L_i(\boldsymbol{\beta}, \boldsymbol{\alpha}) &= -\frac{1}{2}(\log 2\pi + \mathbf{Z}_i'\boldsymbol{\alpha})\\ &\qquad -\frac{1}{2}\left(\dfrac{(Y_i - \mathbf{X}_i'\boldsymbol{\beta})^2}{\exp(\mathbf{Z}_i'\boldsymbol{\alpha})}\right) \end{align} $$

Two-step estimation: R

To implement this, first let us simulate some heteroskedastic data using the model given, and estimate it using OLS.

#==========================================================
# simulate the heteroskedastic data
#==========================================================
iN  = 1000
iK1 = 7
iK2 = 4
mX = cbind(1, matrix(rnorm(iN*iK1), nrow = iN, ncol = iK1))
mZ = cbind(1, matrix(rnorm(iN*iK2), nrow = iN, ncol = iK2))
vBeta  = rnorm(1 + iK1)
vAlpha = rnorm(1 + iK2)
vY = rnorm(iN, mean = mX %*% vBeta, sd = sqrt(exp(mZ %*% vAlpha)))

#==========================================================
# fit the data using OLS
#==========================================================
vBetaOLS = coef(lmHetMean <- lm.fit(y = vY, x = mX))

Next, we can get the results using the two-step procedure:

#==========================================================
# two-step estimation
#==========================================================
residHet = resid(lmHetMean)
# estimated standard deviations: weight by 1/sd (not 1/variance) for WLS
vSdEst  = sqrt(exp(fitted(lmHetVar <- lm.fit(y = log(residHet^2), x = mZ))))
vBetaTS = coef(lm.fit(y = vY/vSdEst, x = apply(mX, 2, function(x) x/vSdEst)))

Maximum likelihood estimation: R

#==========================================================
# likelihood function
#==========================================================
fnLogLik = function(vParam, vY, mX, mZ) {
  vBeta  = vParam[1:ncol(mX)]
  vAlpha = vParam[(ncol(mX)+1):(ncol(mX)+ncol(mZ))]
  negLogLik = -sum(0.5*(log(2*pi) - mZ %*% vAlpha -
                        (vY - mX %*% vBeta)^2/(exp(mZ %*% vAlpha))))
  return(negLogLik)
}

# test the function
# debugonce(fnLogLik)
fnLogLik(c(vBeta, vAlpha), vY, mX, mZ)
#==========================================================
# MLE
#==========================================================
vParam0 = rnorm(13)
optimHet = optim(vParam0, fnLogLik, vY = vY, mX = mX, mZ = mZ)
vBetaML = optimHet$par

#==========================================================
# collect all the results
#==========================================================
cbind(vBeta, vBetaOLS, vBetaTS, vBetaML = vBetaML[1:8])

I am not entirely sure why the ML results are a bit farther off than even the OLS, and I am not ruling out a coding mistake. But as you can see, the 2-step estimator seems to do better than the OLS estimator.
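For readers who prefer to check the two-step logic outside R, here is a minimal NumPy sketch of the same procedure (a translation of the idea, not of the exact code above; the simulated dimensions and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n, k1, k2 = 4000, 3, 2

# Simulate y = X beta + e with Var(e_i) = exp(Z_i' alpha)
X = np.column_stack([np.ones(n), rng.normal(size=(n, k1))])
Z = np.column_stack([np.ones(n), rng.normal(size=(n, k2))])
beta = np.array([1.0, -0.5, 0.25, 2.0])
alpha = np.array([0.2, 0.5, -0.3])
y = X @ beta + rng.normal(size=n) * np.sqrt(np.exp(Z @ alpha))

# Step 1: OLS for beta and its residuals
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b_ols

# Step 2: regress log squared residuals on Z to estimate alpha,
# then reweight each observation by 1/sd_i for the WLS step
a_hat, *_ = np.linalg.lstsq(Z, np.log(resid**2), rcond=None)
sd_hat = np.sqrt(np.exp(Z @ a_hat))
b_fgls, *_ = np.linalg.lstsq(X / sd_hat[:, None], y / sd_hat, rcond=None)
```

Note that the intercept of `a_hat` absorbs the bias of $\log\hat\varepsilon_i^2$ as an estimate of $\log\sigma_i^2$; since rescaling all WLS weights by a constant leaves `b_fgls` unchanged, this does not affect the slope estimates.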
46,776
Feasible Generalized Least Square in R
There should also be a minus sign before $\log(2\pi)$ in negLogLik since this term appears in the denominator of the likelihood function, see https://en.wikipedia.org/wiki/Maximum_likelihood_estimation#Non-independent_variables. However, this change is a neutral operation in a minimization problem since $\log(2\pi)$ is a constant term. When you replace the optim function by the nlminb function (arguments do not have to be altered), the ML estimator of beta will be much closer to the original beta vector. Probably, optim just does not work as well as nlminb in this situation. For nlminb see https://stat.ethz.ch/R-manual/R-devel/library/stats/html/nlminb.html.
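The optimizer-sensitivity point carries over to other environments. Below is a Python sketch (my own construction, not a translation of the R code in the other answer) fitting the same multiplicative-heteroskedasticity likelihood with a quasi-Newton method, scipy's BFGS, which plays a role similar to nlminb's gradient-based iterations; the `np.clip` is purely a numerical guard against overflow during line searches:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, k1, k2 = 2000, 3, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k1))])
Z = np.column_stack([np.ones(n), rng.normal(size=(n, k2))])
beta = np.array([1.0, -0.5, 0.25, 2.0])
alpha = np.array([0.2, 0.5, -0.3])
y = X @ beta + rng.normal(size=n) * np.sqrt(np.exp(Z @ alpha))

def negloglik(p):
    # Gaussian negative log-likelihood with Var(y_i) = exp(Z_i' alpha)
    b, a = p[:X.shape[1]], p[X.shape[1]:]
    za = np.clip(Z @ a, -30, 30)  # numerical guard far from the optimum
    return 0.5 * np.sum(np.log(2 * np.pi) + za + (y - X @ b) ** 2 / np.exp(za))

# Start beta at OLS and alpha at zero (homoskedasticity)
b0, *_ = np.linalg.lstsq(X, y, rcond=None)
p0 = np.concatenate([b0, np.zeros(Z.shape[1])])

fit = minimize(negloglik, p0, method="BFGS")
b_ml = fit.x[:X.shape[1]]
```

A derivative-free simplex search (like R's default optim method) on the same objective typically needs far more function evaluations and may stop short of the optimum, which is consistent with the optim-versus-nlminb observation above.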
46,777
Final Model Prediction using K-Fold Cross-Validation and Machine Learning Methods
"To produce the estimates on the test set do I simply average the weights and biases from each of the 10 different calibrated models and use this parametrization to produce outputs to compare with my test set for the target function?"

No. Cross-validation is a procedure for estimating the test performance of a method of producing a model, rather than of the model itself. So the best thing to do is to perform k-fold cross-validation to determine the best hyper-parameter settings, e.g. number of hidden units, values of regularisation parameters etc. Then train a single network on the whole calibration set (or train several and pick the one with the best value of the regularised training criterion, to guard against local minima). Evaluate the performance of that model using the test set.

In the case of neural networks, averaging the weights and biases of individual models won't work, as different models will choose different internal representations, so the corresponding hidden units of different networks will represent different (distributed) concepts. If you average their weights, the mean of these concepts will be meaningless.
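This workflow — CV to choose a hyper-parameter, then one final fit on the whole calibration set — can be sketched in a few lines of NumPy. Ridge regression stands in for the neural network here (the data and the lambda grid are illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 5
X = rng.normal(size=(n, k))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ w_true + rng.normal(scale=0.5, size=n)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: (X'X + lam*I)^-1 X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_mse(lam, folds=10):
    # k-fold CV estimates the performance of the *procedure*
    # "ridge with this lambda", not of any single fitted model.
    idx = np.arange(len(y))
    errs = []
    for part in np.array_split(idx, folds):
        train = np.setdiff1d(idx, part)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((y[part] - X[part] @ w) ** 2))
    return np.mean(errs)

lams = [0.01, 0.1, 1.0, 10.0]
best_lam = min(lams, key=cv_mse)
# Final model: ONE fit on the whole calibration set with the chosen lambda;
# the 10 fold-wise models are discarded, never averaged.
w_final = ridge_fit(X, y, best_lam)
```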
46,778
Final Model Prediction using K-Fold Cross-Validation and Machine Learning Methods
I think the correct answer to this question is provided by a document of sklearn here: http://scikit-learn.org/stable/modules/cross_validation.html

Basically, by doing cross-validation (CV), compared with hold-out validation, we can reduce the amount of data taken by the validation set and thus increase the amount of data used by the training set. This solves the problem where the amount of training data is not enough while we still want to have training, validation and test sets. As written in the document:

"The performance measure reported by k-fold cross-validation is then the average of the values computed in the loop. This approach can be computationally expensive, but does not waste too much data (as it is the case when fixing an arbitrary test set), which is a major advantage in problems such as inverse inference where the number of samples is very small."
46,779
F-test formula under robust standard error
In the linear regression model $$\mathbf y = \mathbf X\beta + \mathbf u$$ with $K$ regressors and a sample of size $N$, if we have $q$ linear restrictions on the parameters that we want to test, $$\mathbf R\beta = \mathbf r$$ where $\mathbf R$ is $q \times K$, then we have the Wald statistic $$W= (\mathbf R\hat \beta - \mathbf r)'(\mathbf R \mathbf {\hat V} \mathbf R')^{-1}(\mathbf R\hat \beta - \mathbf r) \sim_{asymp.} \chi^2_q$$ where $\mathbf {\hat V}$ is the consistently estimated heteroskedasticity-robust asymptotic variance-covariance matrix of the estimator, $$\mathbf {\hat V}=(\mathbf X'\mathbf X)^{-1}\left(\sum_{i=1}^n \hat u_i^2\mathbf x_i'\mathbf x_i\right)(\mathbf X'\mathbf X)^{-1}$$ or $$\mathbf {\hat V}=\frac {N}{N-K}(\mathbf X'\mathbf X)^{-1}\left(\sum_{i=1}^n \hat u_i^2\mathbf x_i'\mathbf x_i\right)(\mathbf X'\mathbf X)^{-1}$$ (there is some evidence that this degrees-of-freedom correction improves finite-sample performance). If we divide the statistic by $q$ we obtain an approximate $F$-statistic $$W/q \sim_{approx} F_{q, N-K}$$ but why add one more layer of approximation?

ADDENDUM 2-8-2014
The reason why we obtain an approximate $F$-statistic if we divide a chi-square by its degrees of freedom is because $$\lim_{N-K \rightarrow \infty} qF_{q, N-K} = \chi^2_q$$ see this post.
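The statistic above is mechanical to compute once $\mathbf{\hat V}$ is assembled. Here is a NumPy sketch using the first (uncorrected, HC0) variance estimator and $q=2$ restrictions on simulated heteroskedastic data (all data and restriction choices are illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 500, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 0.0, 0.0, 2.0])            # true beta_2 = beta_3 = 0
u = rng.normal(size=n) * (1 + np.abs(X[:, 3]))   # heteroskedastic errors
y = X @ beta + u

b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b

# HC0 sandwich: (X'X)^-1 (sum_i u_i^2 x_i x_i') (X'X)^-1
XtX_inv = np.linalg.inv(X.T @ X)
meat = (X * resid[:, None] ** 2).T @ X
V = XtX_inv @ meat @ XtX_inv

# H0: beta_2 = beta_3 = 0, i.e. R beta = r with q = 2 restrictions
R = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
r = np.zeros(2)
diff = R @ b - r
W = diff @ np.linalg.solve(R @ V @ R.T, diff)    # asymptotically chi2_2
F_stat = W / 2                                   # approximate F_{2, n-k}
```

Multiplying `V` by `n / (n - k)` gives the degrees-of-freedom-corrected version from the answer.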
46,780
Probability enemy territory captured in X turns
The basic rules of engagement provide the probability distribution for transitions from $a$ attacking armies and $d$ defending armies to $a^\prime$ attackers and $d^\prime$ defenders, where $0 \le a^\prime \le a$ and $0 \le d^\prime \le d$. Beginning with $A$ attacking armies and $D$ defending armies, there are therefore $(A+1)(D+1)$ possible states indexed by $a=0, 1, \ldots, A$ and $d=0, 1, \ldots, D$. This is a Markov Chain on these states. All the information about outcomes is therefore contained in its transition matrix $\mathbb{P}_{A,D}$. In particular, after $k$ transitions ("turns" in the battle) from an initial state vector $s$, the distribution of resulting states $s^\prime$ is given by $\mathbb{P}_{A,D}^k\cdot s$. Because $\mathbb{P}_{A,D}$ can be large but $k$ is likely small, it may be most expedient simply to calculate this distribution iteratively via $$s(k) = \mathbb{P}_{A,D}\cdot s(k-1)$$ beginning with the initial state $s(0) = s$. These ideas were used to produce graphs of the winning chances for the attacker (blue) and defender (red) as a function of the number of turns played. (The gray lines are the chances that the battle remains unresolved after each turn.) The total time needed to do these computations using the brute-force method described below was five seconds. This is more than competitive with a Monte-Carlo simulation. For much larger numbers of armies, simulation will be superior in its use of time and RAM, but even so it is helpful to have a way to compute exact results, if only to test a simulator. It is interesting that the attacker needs relatively few additional armies in order to secure a huge advantage. In particular, one might naively expect to need $0.7/0.6$ times as many attacking armies. The randomness of the outcomes magnifies a small initial superiority, however. Also, attackers have one slight advantage: they still win if they are wiped out, provided the defenders are also wiped out. 
(The actual rules might differ on this account; if so, the code would need slight modification to accommodate any additional restrictions concerning minimum numbers of attackers.)

An Algorithm

A solution with brute-force computation takes only a few seconds even when $A=100$, $D=60$, and $k=5$ as in the question. The only difficulties concern how to index the states. Because R appears to be preferred and it organizes arrays by columns, I have written the transition vectors as columns and multiply state vectors by the transition matrix from the left (which is the opposite of the usual convention). I index the states $1, 2, \ldots, (A+1)(D+1)$ by letting the first coordinate vary fastest from largest to smallest. For instance, with $(A,D)=(2,1)$ the states are $(2,1), (1,1), (0,1), (2,0), (1,0), (0,0)$. The transition matrix $\mathbb{P}_{2,1}$ is computed in R as

     From
To      2:1  1:1 0:1 2:0 1:0 0:0
  2:1 0.048 0.00   0   0   0   0
  1:1 0.112 0.12   0   0   0   0
  0:1 0.000 0.28   1   0   0   0
  2:0 0.252 0.00   0   1   0   0
  1:0 0.588 0.18   0   0   1   0
  0:0 0.000 0.42   0   0   0   1

using the command mc(2, 1)$transition (whose source is given below). For instance, the entry in row 1:0 and column 2:1 asserts there is a chance of 0.588 that two armies attacking one defender (the state $(2,1)$) will, in one turn, reduce to one attacking army and no defending armies (the state $(1,0)$). To check the validity of this calculation, we need to know the rules. The transition from $(a,d)$ yields a pair of random variables $(a^\prime, d^\prime)$. Let $X$ be a Binomial$(a, p)$ variable, where by default $p = 0.6$, and $Y$ be an independent Binomial$(d, q)$ variable with $q = 0.7$ by default. Then $a^\prime = \max(0,a-Y)$ and $d^\prime = \max(0,d-X)$. We say that the attacker "wins" provided $a\gt 0$ and $d^\prime=0$ and that the attacker "loses" provided $a\gt 0,$ $a^\prime=0$, and $d^\prime\gt 0$.
Returning to the example, beginning in the state $(2,1)$ we find that $X$ has a Binomial$(2,0.6)$ distribution, whence $$\Pr(X=0)=0.4^2=0.16,\Pr(X=1)=2(0.4)(0.6)=0.48, \Pr(X=2)=0.6^2=0.36.$$ Consequently When $X=0, d^\prime=\max(0,1-0) = 1$, so ${\Pr}_{2,1}(d^\prime=1) = 0.16.$ When $X=1, d^\prime=\max(0,0)=0$; and When $X=2, d^\prime=\max(0,-1)=0$, so ${\Pr}_{2,1}(d^\prime=0) = 0.48+0.36=0.84.$ Likewise $Y$ has a Binomial$(1,0.7)$ distribution, giving it a $0.3$ chance of being $0$ and a $0.7$ chance of being $1$. Accordingly ${\Pr}_{2,1}(a^\prime=2)=\Pr(Y=0)=0.3$ and ${\Pr}_{2,1}(a^\prime=1)=\Pr(Y=1)=0.7$. Because $X$ and $Y$ are independent, the chances multiply. For example, the chance of arriving at $(a^\prime, d^\prime) = (1,0)$ is $${\Pr}_{2,1}\left((a^\prime, d^\prime)\right) = {\Pr}_{2,1}(a^\prime=1){\Pr}_{2,1}(d^\prime=0) = 0.7\times 0.84 = 0.588.$$ The other entries in the transition matrix are similarly verified. Notice that the terminal states $(0,1), (2,0), (1,0),$ and $(0,0)$ just make transitions to themselves. The first is a loss for the attacker while the rest are counted as wins. Working code To implement this I have used auxiliary functions fight to compute the probabilities, transition to produce the distribution of transitions from a single state, and mc to assemble those into a transition matrix. The function battle repeatedly applies this transition matrix to the initial distribution where state $(A,D)$ has probability $1$ and all other states have probability $0$. It summarizes the attacker's chances of winning and losing. Finally, by iterating battle over plausible choices of $A$, it becomes a tool to help with game strategy, which amounts to selecting an appropriate number of armies from among those available in order to attack a defender. 
The only applicable rule is that the number of armies must be strictly less than the number available (because any attacking armies remaining after the engagement will be moved into the defender's territory but at least one army must be left behind).

(This modular implementation makes it relatively easy to check the calculations by hand, at least for small cases. Doing so increases the confidence that the calculations are correct. Moreover, by limiting each module (R function) to less than a half dozen executable lines, each is sufficiently simple to undergo careful testing and review. I admit my testing has been cursory, but it did include a wide range of values of $A$, $D$, $p$, and $q$ as well as spot-checks like the example given above.)

The output of this code consists of an array of graphs showing the winning and losing chances of the attacker as a function of the number of turns. It also reports the total amount of time needed to do the calculations for each graph. On this workstation it takes under 2.5 seconds to perform the calculations for $A=100$ attackers, $D=60$ defenders, and $k=5$ turns (using the command battle(100, 60, 5)). Easy modifications of battle will enable deeper analysis, such as evaluating the numbers of attacking armies that are likely to survive any engagement: such information, more than the mere chance of winning after $k$ turns, is important to developing an optimal game strategy.

fight <- function(n.attack, n.defend, n.defend.max, attack.p) {
  #
  # Returns a probability vector for the number of surviving defenders
  # indexed by n.defend.max down to 0.
  # NB: This is the only part that needs modification if a "luck factor"
  # is introduced.
  #
  p.casualty <- dbinom(0:n.attack, n.attack, attack.p)
  n.survive <- n.defend - (0:n.attack)
  if (n.attack > n.defend) {
    # Collapse the negative values into a single state with no survivors.
    p.casualty <- c(p.casualty[n.survive > 0], sum(p.casualty[n.survive <= 0]))
    n.survive <- n.survive[n.survive >= 0]
  }
  # Pad the return vector with zeros, fore and aft, as needed.
  c(rep(0, n.defend.max-n.defend), p.casualty, rep(0, max(0, n.defend-n.attack)))
}

transition <- function(attack, defend, attack.max, defend.max, attack.p, defend.p) {
  #
  # Returns the transition probabilities from the state (attack, defend),
  # in descending order with `attack` changing fastest.
  #
  a <- fight(defend, attack, attack.max, defend.p)
  d <- fight(attack, defend, defend.max, attack.p)
  return(as.vector(outer(a, d)))
}

mc <- function(attack.max, defend.max, attack.p=0.6, defend.p=0.7) {
  #
  # Returns the transition matrix for a round in which attack.max armies
  # (not including any reserved ones) engage defend.max armies.
  # Transitions are in *columns* (not rows, which is the convention).
  # The matrix is square with dimensions (attack.max+1)(defend.max+1).
  #
  # Also returns indicator vectors $wins and $loses showing which
  # states correspond to wins and losses for the attacker, respectively.
  #
  i <- expand.grid(A=attack.max:0, D=defend.max:0) # All states
  x <- apply(i, 1, function(ad, ...) transition(ad[1], ad[2], ...),
             attack.max=attack.max, defend.max=defend.max,
             attack.p=attack.p, defend.p=defend.p)
  # Name the indexes in `x` to assist human reading of the results.
  s <- paste(i$A, i$D, sep=":")
  dimnames(x) <- list(To=s, From=s)
  return(list(transition=x, n=dim(i)[1], wins=(i$D==0), loses=(i$A==0 & i$D > 0)))
}

battle <- function(attack, defend, k=5, ...) {
  #
  # Conduct a battle of `attack` attacking armies against `defend` defending
  # armies for `k` turns.
  # Return an array whose rows (indexed by `k`) give chances of the attacker
  # winning and losing at that turn.
  #
  # (The code is readily modified to report the distributions of numbers of
  # remaining armies.)
  #
  if (attack <= 0 || defend <= 0) stop("Both army counts must be positive.")
  #
  # Find the probability distributions after 1, 2, ..., k turns.
  #
  x <- mc(attack, defend, ...) # Transition matrix structure
  y <- c(1, rep(0, x$n-1))     # Beginning distribution: all is in state 1.
  p <- matrix(NA, k, 3,
              dimnames=list(Turns=1:k, Outcome=c("Wins", "Loses", "Undecided")))
  for (i in 1:k) {
    y <- x$transition %*% y
    # Summarize this turn.
    p[i, "Wins"] <- sum(y[x$wins])
    p[i, "Loses"] <- sum(y[x$loses])
    p[i, "Undecided"] <- 1 - (p[i, "Wins"] + p[i, "Loses"])
  }
  return(p)
}

#
# Study near-equal battles involving a given number of defenders.
#
d <- 60
k <- 5
p.a <- 0.6; p.d <- 0.7
times <- numeric(0)
par(mfrow=c(2,2))
for (a in round(d * exp(seq(log(63/60), log(75/60), length.out=4)))) {
  u <- system.time(p <- zapsmall(battle(a, d, k, attack.p=p.a, defend.p=p.d)))
  times <- c(times, u[3])
  plot(p[, "Wins"], type="l", ylim=c(0,1), col="Blue", xlab="Turns",
       ylab="Probability", main=paste(a, "Attackers vs.", d, "Defenders"),
       sub=paste("p(Attack) =", p.a, "p(Defense) =", p.d))
  lines(p[, "Undecided"], col="Gray")
  lines(p[, "Loses"], col="Red")
}
times
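The hand calculation for the $(2,1)\to(1,0)$ entry can also be checked mechanically. A short Python verification of the 0.588 figure, using only the binomial pmf from the rules (stdlib only; not part of the R solution above):

```python
from math import comb

def binom_pmf(j, n, p):
    # P(Binomial(n, p) = j)
    return comb(n, j) * p**j * (1 - p)**(n - j)

# From state (a, d) = (2, 1): attacker hits X ~ Binomial(2, 0.6),
# defender hits Y ~ Binomial(1, 0.7).
p_d0 = binom_pmf(1, 2, 0.6) + binom_pmf(2, 2, 0.6)  # d' = 0 needs X >= 1
p_a1 = binom_pmf(1, 1, 0.7)                          # a' = 1 needs Y = 1
p_transition = p_a1 * p_d0                           # P((2,1) -> (1,0)) = 0.588
```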
Probability enemy territory captured in X turns
The basic rules of engagement provide the probability distribution for transitions from $a$ attacking armies and $d$ defending armies to $a^\prime$ attackers and $d^\prime$ defenders, where $0 \le a^\
Probability enemy territory captured in X turns The basic rules of engagement provide the probability distribution for transitions from $a$ attacking armies and $d$ defending armies to $a^\prime$ attackers and $d^\prime$ defenders, where $0 \le a^\prime \le a$ and $0 \le d^\prime \le d$. Beginning with $A$ attacking armies and $D$ defending armies, there are therefore $(A+1)(D+1)$ possible states indexed by $a=0, 1, \ldots, A$ and $d=0, 1, \ldots, D$. This is a Markov Chain on these states. All the information about outcomes is therefore contained in its transition matrix $\mathbb{P}_{A,D}$. In particular, after $k$ transitions ("turns" in the battle) from an initial state vector $s$, the distribution of resulting states $s^\prime$ is given by $\mathbb{P}_{A,D}^k\cdot s$. Because $\mathbb{P}_{A,D}$ can be large but $k$ is likely small, it may be most expedient simply to calculate this distribution iteratively via $$s(k) = \mathbb{P}_{A,D}\cdot s(k-1)$$ beginning with the initial state $s(0) = s$. These ideas were used to produce graphs of the winning chances for the attacker (blue) and defender (red) as a function of the number of turns played. (The gray lines are the chances that the battle remains unresolved after each turn.) The total time needed to do these computations using the brute-force method described below was five seconds. This is more than competitive with a Monte-Carlo simulation. For much larger numbers of armies, simulation will be superior in its use of time and RAM, but even so it is helpful to have a way to compute exact results, if only to test a simulator. It is interesting that the attacker needs relatively few additional armies in order to secure a huge advantage. In particular, one might naively expect to need $0.7/0.6$ times as many attacking armies. The randomness of the outcomes magnifies a small initial superiority, however. 
Also, attackers have one slight advantage: they still win if they are wiped out, provided the defenders are also wiped out. (The actual rules might differ on this account; if so, the code would need slight modification to accommodate any additional restrictions concerning minimum numbers of attackers.)

An Algorithm

A solution with brute-force computation takes only a few seconds even when $A=100$, $D=60$, and $k=5$ as in the question. The only difficulties concern how to index the states. Because R appears to be preferred and it organizes arrays by columns, I have written the transition vectors as columns and multiply state vectors by the transition matrix from the left (which is the opposite of the usual convention). I index the states $1, 2, \ldots, (A+1)(D+1)$ by letting the first coordinate vary fastest from largest to smallest. For instance, with $(A,D)=(2,1)$ the states are $(2,1), (1,1), (0,1), (2,0), (1,0), (0,0)$. The transition matrix $\mathbb{P}_{2,1}$ is computed in R as

     From
To      2:1  1:1 0:1 2:0 1:0 0:0
  2:1 0.048 0.00   0   0   0   0
  1:1 0.112 0.12   0   0   0   0
  0:1 0.000 0.28   1   0   0   0
  2:0 0.252 0.00   0   1   0   0
  1:0 0.588 0.18   0   0   1   0
  0:0 0.000 0.42   0   0   0   1

using the command mc(2, 1)$transition (whose source is given below). For instance, the entry in row 1:0 and column 2:1 asserts there is a chance of 0.588 that two armies attacking one defender (the state $(2,1)$) will, in one turn, reduce to one attacking army and no defending armies (the state $(1,0)$). To check the validity of this calculation, we need to know the rules. The transition from $(a,d)$ yields a pair of random variables $(a^\prime, d^\prime)$. Let $X$ be a Binomial$(a, p)$ variable, where by default $p = 0.6$, and $Y$ be an independent Binomial$(d, q)$ variable with $q = 0.7$ by default. Then $a^\prime = \max(0,a-Y)$ and $d^\prime = \max(0,d-X)$. We say that the attacker "wins" provided $a\gt 0$ and $d^\prime=0$ and that the attacker "loses" provided $a\gt 0,$ $a^\prime=0$, and $d^\prime\gt 0$.
In the case of a win or loss the engagement ends. Otherwise it may continue (at the attacker's option). Returning to the example, beginning in the state $(2,1)$ we find that $X$ has a Binomial$(2,0.6)$ distribution, whence $$\Pr(X=0)=0.4^2=0.16,\ \Pr(X=1)=2(0.4)(0.6)=0.48,\ \Pr(X=2)=0.6^2=0.36.$$ Consequently:

When $X=0$, $d^\prime=\max(0,1-0) = 1$, so ${\Pr}_{2,1}(d^\prime=1) = 0.16$.
When $X=1$, $d^\prime=\max(0,0)=0$; and
When $X=2$, $d^\prime=\max(0,-1)=0$, so ${\Pr}_{2,1}(d^\prime=0) = 0.48+0.36=0.84$.

Likewise $Y$ has a Binomial$(1,0.7)$ distribution, giving it a $0.3$ chance of being $0$ and a $0.7$ chance of being $1$. Accordingly ${\Pr}_{2,1}(a^\prime=2)=\Pr(Y=0)=0.3$ and ${\Pr}_{2,1}(a^\prime=1)=\Pr(Y=1)=0.7$. Because $X$ and $Y$ are independent, the chances multiply. For example, the chance of arriving at $(a^\prime, d^\prime) = (1,0)$ is $${\Pr}_{2,1}\left((a^\prime, d^\prime)\right) = {\Pr}_{2,1}(a^\prime=1){\Pr}_{2,1}(d^\prime=0) = 0.7\times 0.84 = 0.588.$$ The other entries in the transition matrix are similarly verified. Notice that the terminal states $(0,1), (2,0), (1,0),$ and $(0,0)$ just make transitions to themselves. The first is a loss for the attacker while the rest are counted as wins.

Working code

To implement this I have used auxiliary functions fight to compute the probabilities, transition to produce the distribution of transitions from a single state, and mc to assemble those into a transition matrix. The function battle repeatedly applies this transition matrix to the initial distribution where state $(A,D)$ has probability $1$ and all other states have probability $0$. It summarizes the attacker's chances of winning and losing. Finally, by iterating battle over plausible choices of $A$, it becomes a tool to help with game strategy, which amounts to selecting an appropriate number of armies from among those available in order to attack a defender.
The only applicable rule is that the number of attacking armies must be strictly less than the number available (because any attacking armies available after the engagement will be moved into the defender's territory but at least one army must be left behind).

(This modular implementation makes it relatively easy to check the calculations by hand, at least for small cases. Doing so increases the confidence that the calculations are correct. Moreover, by limiting each module (R function) to less than a half dozen executable lines, each is sufficiently simple to undergo careful testing and review. I admit my testing has been cursory, but it did include a wide range of values of $A$, $D$, $p$, and $q$ as well as spot-checks like the example given above.)

The output of this code consists of an array of graphs showing the winning and losing chances of the attacker as a function of the number of turns. It also reports the total amount of time needed to do the calculations for each graph. On this workstation it takes under 2.5 seconds to perform the calculations for $A=100$ attackers, $D=60$ defenders, and $k=5$ turns (using the command battle(100, 60, 5)). Easy modifications of battle will enable deeper analysis, such as evaluating the numbers of attacking armies that are likely to survive any engagement: such information, more than the mere chance of winning after $k$ turns, is important to developing an optimal game strategy.

fight <- function(n.attack, n.defend, n.defend.max, attack.p) {
  #
  # Returns a probability vector for the number of surviving defenders
  # indexed by n.defend.max down to 0.
  # NB: This is the only part that needs modification if a "luck factor"
  # is introduced.
  #
  p.casualty <- dbinom(0:n.attack, n.attack, attack.p)
  n.survive <- n.defend - (0:n.attack)
  if (n.attack > n.defend) {
    # Collapse the negative values into a single state with no survivors.
    p.casualty <- c(p.casualty[n.survive > 0], sum(p.casualty[n.survive <= 0]))
    n.survive <- n.survive[n.survive >= 0]
  }
  # Pad the return vector with zeros, fore and aft, as needed.
  c(rep(0, n.defend.max-n.defend), p.casualty, rep(0, max(0, n.defend-n.attack)))
}

transition <- function(attack, defend, attack.max, defend.max, attack.p, defend.p) {
  #
  # Returns the transition probabilities from the state (attack, defend),
  # in descending order with `attack` changing fastest.
  #
  a <- fight(defend, attack, attack.max, defend.p)
  d <- fight(attack, defend, defend.max, attack.p)
  return(as.vector(outer(a, d)))
}

mc <- function(attack.max, defend.max, attack.p=0.6, defend.p=0.7) {
  #
  # Returns the transition matrix for a round in which attack.max armies
  # (not including any reserved ones) engage defend.max armies.
  # Transitions are in *columns* (not rows, which is the convention).
  # The matrix is square with dimensions (attack.max+1)(defend.max+1).
  #
  # Also returns indicator vectors $wins and $loses showing which
  # states correspond to wins and losses for the attacker, respectively.
  #
  i <- expand.grid(A=attack.max:0, D=defend.max:0)  # All states
  x <- apply(i, 1, function(ad, ...) transition(ad[1], ad[2], ...),
             attack.max=attack.max, defend.max=defend.max,
             attack.p=attack.p, defend.p=defend.p)
  # Name the indexes in `x` to assist human reading of the results.
  s <- paste(i$A, i$D, sep=":")
  dimnames(x) <- list(To=s, From=s)
  return(list(transition=x, n=dim(i)[1], wins=(i$D==0), loses=(i$A==0 & i$D > 0)))
}

battle <- function(attack, defend, k=5, ...) {
  #
  # Conduct a battle of `attack` attacking armies against `defend` defending
  # armies for `k` turns.
  # Return an array whose rows (indexed by `k`) give chances of the attacker
  # winning and losing at that turn.
  #
  # (The code is readily modified to report the distributions of numbers of
  # remaining armies.)
  #
  if (attack <= 0 || defend <= 0) stop("Both army counts must be positive.")
  #
  # Find the probability distributions after 1, 2, ..., k turns.
  #
  x <- mc(attack, defend, ...)  # Transition matrix structure
  y <- c(1, rep(0, x$n-1))      # Beginning distribution: all is in state 1.
  p <- matrix(NA, k, 3, dimnames=list(Turns=1:k,
              Outcome=c("Wins", "Loses", "Undecided")))
  for (i in 1:k) {
    y <- x$transition %*% y
    # Summarize this turn.
    p[i, "Wins"] <- sum(y[x$wins])
    p[i, "Loses"] <- sum(y[x$loses])
    p[i, "Undecided"] <- 1 - (p[i, "Wins"] + p[i, "Loses"])
  }
  return(p)
}
#
# Study near-equal battles involving a given number of defenders.
#
d <- 60
k <- 5
p.a <- 0.6; p.d <- 0.7
times <- numeric(0)
par(mfrow=c(2,2))
for (a in round(d * exp(seq(log(63/60), log(75/60), length.out=4)))) {
  u <- system.time(p <- zapsmall(battle(a, d, k, attack.p=p.a, defend.p=p.d)))
  times <- c(times, u[3])
  plot(p[, "Wins"], type="l", ylim=c(0,1), col="Blue",
       xlab="Turns", ylab="Probability",
       main=paste(a, "Attackers vs.", d, "Defenders"),
       sub=paste("p(Attack) =", p.a, "p(Defense) =", p.d))
  lines(p[, "Undecided"], col="Gray")
  lines(p[, "Loses"], col="Red")
}
times
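The hand check of the $0.588$ entry of $\mathbb{P}_{2,1}$ can be reproduced independently of the R code. The answer's code is in R, but here is a short Python sketch of the same binomial transition calculation; the function names `binom_pmf` and `transition_prob` are made up for illustration, while the rules ($X \sim$ Binomial$(a, p)$, $Y \sim$ Binomial$(d, q)$, with $p=0.6$ and $q=0.7$ by default) are exactly those stated above.

```python
from math import comb

def binom_pmf(k, n, p):
    # P(X = k) for X ~ Binomial(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def transition_prob(a, d, a_new, d_new, p=0.6, q=0.7):
    # Chance of moving from state (a, d) to (a_new, d_new) in one turn:
    # a_new = max(0, a - Y) with Y ~ Binomial(d, q), and
    # d_new = max(0, d - X) with X ~ Binomial(a, p), independently.
    pa = sum(binom_pmf(y, d, q) for y in range(d + 1) if max(0, a - y) == a_new)
    pd = sum(binom_pmf(x, a, p) for x in range(a + 1) if max(0, d - x) == d_new)
    return pa * pd

print(round(transition_prob(2, 1, 1, 0), 3))  # 0.588, matching the worked example
```

The same function reproduces every entry of the small transition matrix displayed above, and the probabilities out of any state sum to one, as they must.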
Probability enemy territory captured in X turns
I use this to simulate; perhaps it might give more insight and perhaps someone might come up with an analytical solution.

When the defender has as many troops as the attacker, the defender "wins".
When the attacker hasn't destroyed the defender by turn X, the attacker "loses".
When the defender reaches 0 troops, the attacker "wins".

Here, N1 is the defender's troops and N2 is the attacker's troops.

simulateBattle <- function(N1, N2, nsim=10000, max_turn=100, verbose=F) {
  result <- 1:nsim
  turn <- 1:nsim
  for (i in 1:nsim) {
    t <- 1
    n1 <- N1; n2 <- N2
    if (verbose) { print(paste("turn", t, "n1", n1, "n2", n2)) }
    while (n1 < n2 && n1 > 0 && t < max_turn) {
      temp_n1 <- n1
      n1 <- n1 - rbinom(1, n2 - 1, p=0.6)   # attack with n2 - 1
      n2 <- n2 - rbinom(1, temp_n1, p=0.7)  # defend with all
      t <- t + 1
      if (verbose) { print(paste("turn", t, "n1", n1, "n2", n2)) }
    }
    turn[i] <- t
    result[i] <- n1 <= 0
  }
  cat(paste("P(attacker_wins): ", mean(result)))
}

Results:

# Only first turn:
simulateBattle(25, 40, max_turn=2)
[1] 0.36301
# Up until 2 turns:
simulateBattle(25, 40, max_turn=3)
[1] 0.9983
# Up until 3 turns:
simulateBattle(25, 40, max_turn=4)
[1] 0.99999
Condition number of covariance matrix
Yes, the scales of your variables affect the condition number. This is a real phenomenon with practical consequences; for example, I am using linear least-squares to solve a fitting problem, and if I just drop in the appropriate columns my condition number is of order 10^18 (presumably worse, as this is the limit of my numerical precision). If on the other hand I rescale my variables so each column of the fit matrix has the same sum-of-squares amplitude, the condition number of the fit matrix drops to less than a hundred. If I use the ill-conditioned matrix to compute fit values, they and the residuals are terrible; if I use the rescaled matrix and then rescale the variables, I get good stable fits. What this means in terms of correlation and covariance matrices is that if you want to work with differently-scaled variables, you should keep the individual variable scales separate from the correlation matrix. If you do this, then a bad condition number of the correlation matrix corresponds to real, strong correlations between your variables. If you construct a covariance matrix by multiplying the scales in, then indeed, you can get a bad condition number just because your variables have different scales. You don't say exactly what you want to do with your generated covariance matrices. If you're trying to evaluate the performance of an algorithm, then you have revealed a shortcoming in that algorithm: it works better if you rescale all your variables first. If you're doing something else, well, the fact is that if your variables have different scales, the covariance matrices really will have horrible condition numbers.
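The rescaling effect described here is easy to reproduce numerically. Below is a minimal sketch (in Python/numpy rather than the answer's own setting) with synthetic data: two nearly independent variables, the second arbitrarily put on a scale a million times larger.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Two nearly independent standard-normal variables...
X = rng.standard_normal((n, 2))
# ...but the second is measured on a wildly different scale.
X[:, 1] *= 1e6

cov = np.cov(X, rowvar=False)
print(np.linalg.cond(cov))          # enormous, driven purely by the scales

# Rescale each column to unit standard deviation and recompute:
Xs = X / X.std(axis=0, ddof=1)
cov_s = np.cov(Xs, rowvar=False)    # this is just the correlation matrix
print(np.linalg.cond(cov_s))        # close to 1, since the variables are nearly uncorrelated
```

The first condition number is astronomical even though the variables carry essentially no correlation; after rescaling, the condition number reflects the (near-absent) correlation alone, which is exactly the separation of scales from correlation that the answer recommends.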
Condition number of covariance matrix
In general, it is really really unlikely the covariance matrix is ill-conditioned. There are results by Tao and Vu (http://arxiv.org/pdf/math/0703307v1.pdf theorem P2). General rule I keep in mind is Marcenko-Pastur: If you have each column of a matrix X of dimension N*P being sampled independently then so long as (N/P) or (P/N) is not close to 1 you will not get ill-conditioning. (i.e. as a rule of thumb, you are generally safe if you multiply 2 matrices as $EE^{T}$ where the dimensions are not close to one another. This is the case I frequently encounter) Besides, if you know the spectrum of the correlation matrix, the answer is known analytically. Write the Cholesky-decomposition of the correlation matrix $C = GG^{T}$ The Covariance matrix will be $S = \Sigma GG^{T} \Sigma$ where $\Sigma$ is a diagonal matrix having standard deviations. Therefore, the condition number of $S$ is the square of the condition number of $\Sigma G$ which you can find exactly if you so desire
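The closing claim follows from $S = \Sigma G G^T \Sigma = (\Sigma G)(\Sigma G)^T$, so the singular values of $S$ are the squares of those of $\Sigma G$. A numpy sketch (the particular standard deviations chosen here are arbitrary) verifies it numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

# Build a random correlation matrix C and its Cholesky factor G (C = G G^T).
A = rng.standard_normal((d, d))
cov = A @ A.T                        # a random positive-definite matrix
s = np.sqrt(np.diag(cov))
C = cov / np.outer(s, s)             # normalize to unit diagonal
G = np.linalg.cholesky(C)

Sigma = np.diag([0.1, 1.0, 5.0, 20.0])   # arbitrary standard deviations
S = Sigma @ G @ G.T @ Sigma              # the covariance matrix

# The 2-norm condition number of S equals the square of that of Sigma @ G.
print(np.linalg.cond(S), np.linalg.cond(Sigma @ G) ** 2)
```

(`np.linalg.cond` uses the 2-norm by default, which is the norm for which the squaring relation holds exactly.)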
Condition number of covariance matrix
Why don't you draw your covariance matrix from an inverse Wishart distribution? Gamma distribution is usually used as a prior for a single dimensional variance, Wishart is the multivariate case of the Gamma distribution. It is used as the conjugate prior for the covariance of a multi-variate normal. Sampling the values on the diagonal and the off-diagonal values separately actually does not make much sense, since these are dependent, right? There are built-in functions (for Matlab, Python etc...) to draw from the inverse Wishart and you supply it with a positive definite matrix as the scale parameter, so condition number should not be a problem for the drawn samples.
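There are indeed built-in samplers (e.g. `scipy.stats.invwishart`), but the construction is simple enough to sketch with numpy alone, at least for an identity scale matrix: draw a Wishart matrix as a sum of outer products and invert it. The dimension and degrees of freedom below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, df = 5, 10                        # dimension and degrees of freedom (df > d - 1)

# W ~ Wishart(df, I): sum of df outer products of standard-normal vectors.
Z = rng.standard_normal((df, d))
W = Z.T @ Z

# S = W^{-1} is then a draw from the inverse Wishart with identity scale.
S = np.linalg.inv(W)

# The draw is a valid covariance matrix: symmetric with strictly positive eigenvalues.
print(np.allclose(S, S.T), np.linalg.eigvalsh(S).min() > 0)
```

For a general positive-definite scale matrix one would pre-multiply the Gaussian draws by a Cholesky factor; the library samplers handle that bookkeeping for you.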
Condition number of covariance matrix
Easiest to interpret is to generate a spectrum and the orthogonal group (rotation matrix): $V^T D V$. You can put whatever prior you want on the eigenvalues. Probably there are some good ones depending on context.
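A numpy sketch of this construction (the spectrum below is an arbitrary choice, and the orthogonal factor comes from the QR decomposition of a Gaussian matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4

# Random orthogonal matrix via QR of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

# Choose the spectrum directly; here a geometric decay of eigenvalues.
eigvals = np.array([8.0, 4.0, 2.0, 1.0])
S = Q @ np.diag(eigvals) @ Q.T

# The constructed matrix has exactly the prescribed eigenvalues, so its
# condition number is max/min of the chosen spectrum.
print(np.sort(np.linalg.eigvalsh(S)))   # [1. 2. 4. 8.] up to rounding
print(np.linalg.cond(S))                # 8.0 (= 8/1) up to rounding
```

The appeal of this route is exactly what the answer says: the condition number is under direct control, since it is simply the ratio of the largest to the smallest eigenvalue you chose.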
How to select a number of components to retain in kernel PCA?
The reason you get 124 components even though you only had 10 original features is (probably) because you have 124 samples. In kernel PCA, the data are mapped to a space which is very high dimensional (has many more than 10 dimensions), and so the number of PCs is only limited by the number of samples. Now, your eigenvalues are actually not so uniform as you seem to think. Here is the plot of your data: One could argue that there is some sort of an "elbow" around 15 components, and that after around 20 components the spectrum becomes very monotonic. So 15-20 components seems like a reasonable number on the basis of eigenvalues only. However, if you want to use kPCA as a first step of some classification or decoding algorithms, then it is always a better idea to select the number of components to retain via cross-validation.
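The point about the number of components can be seen directly from the kernel matrix: kernel PCA on $n$ samples eigendecomposes an $n \times n$ centered kernel matrix, so there are $n$ eigenvalues regardless of the original dimension. Here is a numpy sketch with made-up data of the same shape as in the question (124 samples, 10 features); the RBF bandwidth is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 124, 10                        # 124 samples, 10 original features
X = rng.standard_normal((n, p))

# RBF kernel matrix (bandwidth chosen arbitrarily as sqrt(p)).
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * p))

# Center the kernel matrix in feature space.
one = np.ones((n, n)) / n
Kc = K - one @ K - K @ one + one @ K @ one

eigvals = np.linalg.eigvalsh(Kc)[::-1]   # in descending order
print(len(eigvals))                      # 124: one eigenvalue per sample
print((eigvals > 1e-10).sum())           # at most n - 1 are meaningfully positive
```

Centering removes one dimension (the centered matrix annihilates the constant vector), so at most $n-1$ eigenvalues are nonzero; none of this depends on the 10 original features.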
How to select a number of components to retain in kernel PCA?
Here's the explained variance plot $\frac{\sum_{i=1}^k\lambda_i}{\sum_{i=1}^{124}\lambda_i}$. You need 90 PCs to explain 90% of the variance. In my opinion, your kernel is not so good. Maybe you should try other kernels and see if this plot become more like in the picture below, which is from this paper: Williams, Christopher KI. "On a connection between kernel PCA and metric multidimensional scaling." Machine Learning 46.1-3 (2002): 11-19. It's good when the explained variance is very steep on the left.
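The quantity plotted is just the normalized cumulative sum of the eigenvalues. As a sketch (with a made-up, slowly decaying spectrum, since the question's actual eigenvalues are not reproduced here), the smallest $k$ reaching 90% can be found like this:

```python
import numpy as np

# Hypothetical eigenvalue spectrum with slow decay, purely for illustration.
eigvals = 1.0 / np.arange(1, 125) ** 0.5        # 124 eigenvalues

ratio = np.cumsum(eigvals) / eigvals.sum()      # cumulative explained variance
k90 = int(np.searchsorted(ratio, 0.90)) + 1     # smallest k with ratio >= 90%
print(k90)
```

With a slowly decaying spectrum like this one, `k90` lands near the top of the range, which is the "bad kernel" situation described above; a steep spectrum would give a small `k90`.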
When is Likelihood Function Positive Semidefinite
The Fisher Information is defined as $${\left(\mathcal{I} \left(\theta \right) \right)}_{i, j} = \operatorname{E} \left[\left. \left(\frac{\partial}{\partial\theta_i} \log f(X;\theta)\right) \left(\frac{\partial}{\partial\theta_j} \log f(X;\theta)\right) \right|\theta\right]$$ (the question in the post you linked to states mistakenly otherwise, and the answer politely corrects it).

Under the following regularity conditions:

1) The support of the random variable involved does not depend on the unknown parameter vector
2) The derivatives of the loglikelihood w.r.t. the parameters exist up to 3rd order
3) The expected value of the squared 1st derivative is finite

and under the assumption that the specification is correct (i.e. the specified distribution family includes the actual distribution that the random variable follows), the Fisher Information equals the negative of the expected Hessian of the loglikelihood for one observation. This equality is called the "Information Matrix Equality" for obvious reasons.

While the three regularity conditions are relatively "mild" (or at least can be checked), the assumption of correct specification is at the heart of the issues of statistical inference, especially with observational data. It simply is too strong a condition to be accepted easily. And this is the reason why it is a major issue to prove that the log-likelihood is concave in the parameters (which leads in many cases to consistency and asymptotic normality irrespective of whether the specification is correct, the quasi-MLE case), and not just assume it by assuming that the Information Matrix Equality holds.

So you were absolutely right in thinking "too good to be true". As an aside, you neglected the presence of the minus sign, so the Hessian of the log-likelihood (for one observation) would be negative semidefinite, as it should be, since we seek to maximize it, not minimize it.
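The Information Matrix Equality can be checked by hand in the simplest case, a correctly specified Bernoulli$(p)$ model, where both sides come out to $1/(p(1-p))$. A small Python sketch of that calculation:

```python
# Bernoulli(p): log f(x; p) = x*log(p) + (1-x)*log(1-p)
# Score:        d/dp  log f  =  x/p - (1-x)/(1-p)
# Hessian:      d2/dp2 log f = -x/p**2 - (1-x)/(1-p)**2
p = 0.3

# E[score^2]: expectation over x in {0, 1} with weights (1-p, p).
e_score_sq = (1 - p) * (0/p - 1/(1 - p))**2 + p * (1/p - 0/(1 - p))**2

# -E[Hessian], over the same two outcomes.
neg_e_hess = -((1 - p) * (-0/p**2 - 1/(1 - p)**2) + p * (-1/p**2 - 0/(1 - p)**2))

print(e_score_sq, neg_e_hess)   # both equal 1/(p*(1-p)), about 4.7619 for p = 0.3
```

The two sides agree exactly here because the expectation is taken under the true Bernoulli law, i.e. the model is correctly specified; under misspecification the two matrices generally differ, which is precisely the answer's point.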
Multivariate logistic distribution
Using copulas you can create a multivariate distribution with marginals generalized from any univariate distribution, so yes, it is possible to find a multivariate distribution whose marginal distributions are all logistic. However, it will probably not be a simple function of a covariance matrix; that relationship is pretty unique to the normal distribution.
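As a sketch of the copula construction the answer describes: a Gaussian copula with standard logistic marginals, done with numpy only (the normal CDF via `math.erf`, the logistic quantile as $\log(u/(1-u))$); the correlation value 0.8 is an arbitrary choice.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)
n = 100_000
rho = 0.8

# Step 1: correlated standard bivariate normal (the Gaussian copula step).
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
Z = rng.standard_normal((n, 2)) @ L.T

# Step 2: push through the normal CDF to get dependent uniforms on (0, 1).
Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))
U = Phi(Z)

# Step 3: logistic quantile function; each marginal is now exactly standard logistic.
X = np.log(U / (1.0 - U))

print(np.corrcoef(X.T)[0, 1])          # strong dependence survives the transform
print(X[:, 0].mean(), X[:, 0].std())   # approx 0 and pi/sqrt(3) ~ 1.81
```

Note how the Pearson correlation of the result is close to, but not equal to, the 0.8 fed into the copula: the dependence structure and the marginals are specified separately, which is exactly why no single covariance matrix parameterizes the joint distribution.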
Multivariate logistic distribution
Yes. In fact, the multivariate normal and logistic distributions are members of the more general family of elliptically-contoured distributions, which can be derived from their univariate counterparts. Both the univariate and multivariate normal distributions share the same probability density generator, which is proportional to $$ g(u) = \exp(-u/2). $$ That is, if $u =\big(\dfrac{x-\mu}{\sigma}\big)^2$ we have the normal distribution and if $u = (x-\mu)'\Sigma^{-1}(x-\mu)$ we have the multivariate normal distribution. The same story goes for the univariate and multivariate logistic distributions, which share the same probability density generator, which is proportional to $$ g(u) = \dfrac{\exp(-u)}{(1+\exp(-u))^2}. $$ If $u =\big(\dfrac{x-\mu}{\sigma}\big)^2$ we have the logistic distribution and if $u = (x-\mu)'\Sigma^{-1}(x-\mu)$ we have the multivariate logistic distribution. Furthermore, it is well known that the parent and marginal distributions of any elliptically contoured distribution share the same type of distribution. See the wikipedia page on elliptical distributions for more details: https://en.wikipedia.org/wiki/Elliptical_distribution
46,791
Multivariate logistic distribution
I don't think any such distribution is known to the literature. Books on continuous multivariate distributions (such as Kotz '00) and books on the logistic distribution (such as N. Balakrishnan '92) don't mention any such generalization. Most multivariate distributions discussed there contain at most two parameters which in some cases govern the covariance between the variables (besides the parameters $\mu_i$ and $\sigma_i$ for the mean and standard deviation in each variable $i$). No single distribution is given which uses (as many parameters as) the covariance matrix $\Sigma$. However, that does not guarantee no such distribution is possible.
46,792
Using lme with a fixed beta (slope), and estimating the intercept only
Yes. If you know that the slope is, for example, 1.5, then you just subtract 1.5 * p_w from the outcome and refit the model with just the intercept term. So, in your example: x$ComboRate.adj <- x$ComboRate - 1.5 * x$p_w lme.2.combo <- lme(ComboRate.adj ~ 1, random = ~1 | Rat,data=x) This is the same as using an offset in your model. To check that this works, first try using the actual slope estimate you got from the lme.1.combo model to make sure you get the same intercept estimate from the offset model.
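The arithmetic behind the offset trick can be checked outside of lme: once the slope is pinned at 1.5, the least-squares intercept of the remaining intercept-only model is just the mean of the adjusted outcome. A quick sketch in Python with simulated data (the variable names mirror the R example but the data are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
p_w = rng.uniform(0, 10, n)                          # predictor
combo_rate = 2.0 + 1.5 * p_w + rng.normal(0, 1, n)   # true intercept 2, slope 1.5

# Fix the slope at 1.5 by subtracting 1.5 * p_w from the outcome; the
# intercept-only least-squares fit to the adjusted outcome is its mean.
adj = combo_rate - 1.5 * p_w
intercept_fixed_slope = adj.mean()
```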
46,793
Dropping term for correlation between random effects in lme and interpreting summary output
While in principle your approach works, this is not quite the 'standard' way of making the random intercepts and slopes uncorrelated. With lme you can use pdClasses (see help(pdClasses)) to give a particular structure to the variance-covariance matrix of the random effects. Here, you want to make that matrix diagonal. You can do that with:
m3 <- lme(distance ~ age + Sex, data = Orthodont,
          random = list(Subject = pdDiag(~ age)))
m3
Linear mixed-effects model fit by REML
  Data: Orthodont
  Log-restricted-likelihood: -218.3227
  Fixed: distance ~ age + Sex
(Intercept)         age   SexFemale
 17.5806928   0.6601852  -2.0117005

Random effects:
 Formula: ~age | Subject
 Structure: Diagonal
        (Intercept)        age Residual
StdDev:    1.474092 0.09998003 1.402591

Number of Observations: 108
Number of Groups: 27
You will find that the parameter estimates are actually identical to model m2, but the presentation of the results is more "logical".
46,794
Approximation of Cauchy distribution
The ratio of two arbitrary normal random variables is not in general Cauchy. Even the ratio of two jointly normal random variables is not in general Cauchy. Let's assume you're dealing with a ratio that does have a Cauchy distribution. Then all manner of quantities converge - including quantiles and many functions of quantiles, trimmed and winsorized moments, and cdfs.
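A quick simulation sketch of that convergence (an illustration; the sample size and seed are arbitrary): quantile-based summaries of a standard Cauchy sample settle down as $n$ grows, even though the sample mean never does.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_cauchy(100_000)

# Quantile-based summaries converge to their population values ...
med = np.median(x)                                  # -> 0
iqr = np.quantile(x, 0.75) - np.quantile(x, 0.25)   # -> 2 for standard Cauchy

# ... while the sample mean of a Cauchy sample is itself Cauchy
# distributed, no matter how large n is.
unstable_mean = x.mean()
```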
46,795
Approximation of Cauchy distribution
First, the ratio is Cauchy only if the denominator distribution is centered at 0. In any case, the statistics of the ratio of $y$ and $x$ can be approximated by the delta method as: $$\hat{\mu}_{y:x} = \frac{\mu_y}{\mu_x} - \frac{\operatorname{cov}(x,y)}{\mu_x^2} + \frac{\sigma^2_x\,\mu_y}{\mu_x^3}$$ $$\hat{\sigma}^2_{y:x} = \frac{\sigma^2_y}{\mu_x^2} - \frac{2\,\mu_y \operatorname{cov}(x,y)}{\mu_x^3} + \frac{\mu_y^2\,\sigma^2_x}{\mu_x^4}$$ if one supposes that the variances are negligible with respect to the means (see the post How to parameterize the ratio of two normally distributed variables, or the inverse of one?). However, I think that, as suggested in other answers, using the quantiles would be more appropriate.
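A standard delta-method approximation of this kind can be checked against simulation; the sketch below uses independent $x$ and $y$ with $\mu_x$ well away from 0 (illustrative parameter values only):

```python
import numpy as np

# Standard delta-method approximation for the mean and variance of y/x,
# valid when x is well away from 0; here cov(x, y) = 0 (independence).
mu_x, s_x = 10.0, 1.0
mu_y, s_y = 5.0, 1.0
cov = 0.0

approx_mean = mu_y / mu_x - cov / mu_x**2 + s_x**2 * mu_y / mu_x**3
approx_var = (s_y**2 / mu_x**2
              - 2 * mu_y * cov / mu_x**3
              + mu_y**2 * s_x**2 / mu_x**4)

# Monte Carlo check of the approximation
rng = np.random.default_rng(3)
x = rng.normal(mu_x, s_x, 200_000)
y = rng.normal(mu_y, s_y, 200_000)
r = y / x
mc_mean = r.mean()
mc_var = r.var()
```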
46,796
Approximation of Cauchy distribution
Your question assumes that the distribution of the denominator is centered at 0. If this is so, the median and the mad will converge (to 0 and 1, respectively).
nn <- exp(seq(log(10), log(100000), l = 20))
aa <- rep(NA, length(nn))
bb <- rep(NA, length(nn))
for (i in 1:length(nn)) {
  x1 <- rt(nn[i], df = 1)
  aa[i] <- median(x1)
  bb[i] <- mad(x1, constant = 1)
}
par(mfrow = c(2, 1))
plot(bb, type = "l", ylab = "mad", xlab = "log sample size")
plot(aa, type = "l", ylab = "med", xlab = "log sample size")
btw, you have to change the consistency factor in the computation of the mad from $1.4826=1/\Phi^{-1}(0.75)$ to $1=1/t^{-1}_{1}(0.75)$ (the quantile function of the Cauchy distribution, i.e. the $t$ distribution with 1 degree of freedom, evaluated at $q=0.75$).
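The consistency-factor remark can be verified directly with scipy (the Cauchy distribution is the $t$ distribution with 1 d.f., so its 0.75 quantile is $\tan(\pi/4)=1$):

```python
from scipy.stats import cauchy, norm

# Default mad() factor: 1 / Phi^{-1}(0.75), which makes the MAD consistent
# for the standard deviation under normal data.
normal_factor = 1 / norm.ppf(0.75)    # ~ 1.4826

# Under standard Cauchy data the 0.75 quantile is tan(pi/4) = 1, so the
# matching factor is 1 / 1 = 1 -- hence mad(x1, constant = 1) above.
q75 = cauchy.ppf(0.75)
cauchy_factor = 1 / q75
```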
46,797
Why does a fixed-effect OLS need unique time elements?
Your understanding of fixed effects regression seems perfectly fine. When you do the within transformation to obtain the fixed effects estimator $$y_{it} - \overline{y}_{i} = (X_{it} - \overline{X}_i)\beta + \epsilon_{it} - \overline{\epsilon}_i$$ the time-sorting order does not matter because $\overline{y}_{i} = \frac{1}{T}\sum^{T}_{t=1}y_{it}$, $\overline{X}_{i} = \frac{1}{T}\sum^{T}_{t=1}X_{it}$, and $\overline{\epsilon}_{i} = \frac{1}{T}\sum^{T}_{t=1}\epsilon_{it}$ average out the time component no matter the sorting order within each individual (or firm/country/whatever your $i$ subscript may be). I'm not an R guy but in Stata you would run into the same problem for duplicate time values in the time variable. Again this wouldn't matter for the fixed effects estimation and in fact you do not even need to specify a time variable. For example,
webuse nlswork
xtset idcode
xtreg ln_wage age hours, fe
will give you the same estimates as
xtset idcode year
xtreg ln_wage age hours, fe
The sorting order of the time values can sometimes be important for inference though. If you were to use the xtserial command after the above fixed effects regression, Stata will tell you
xtserial age
time variable not set, use -tsset varname ...
if you haven't used xtset idcode year before. For this purpose it can be problematic if you have 2 observations for an individual in a given year but you do not know if one observation is dated before/after the other (for instance, if a month or quarter variable is missing). I'm sure this is not the case for you but sometimes people specify the time variable to be annual when in fact they have monthly data. If they wanted to run such a regression they would need to aggregate the data first to the annual level. Otherwise, to solve the duplicate time values problem, one would generate a new time variable for year-month combinations. The within estimator itself does not need a specified time component.
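The order-invariance of the within transformation is easy to demonstrate numerically (a sketch, not from the original answer; simulated panel with a true slope of 2):

```python
import numpy as np

rng = np.random.default_rng(4)
n_id, t = 50, 8
ids = np.repeat(np.arange(n_id), t)
x = rng.normal(size=n_id * t) + ids * 0.1     # predictor with id-specific level
alpha = rng.normal(size=n_id)[ids]            # individual fixed effects
y = alpha + 2.0 * x + rng.normal(size=n_id * t)

def within_beta(ids, x, y):
    """Fixed-effects (within) slope: demean x and y inside each id, then OLS."""
    xd = x - np.bincount(ids, x)[ids] / np.bincount(ids)[ids]
    yd = y - np.bincount(ids, y)[ids] / np.bincount(ids)[ids]
    return (xd @ yd) / (xd @ xd)

beta = within_beta(ids, x, y)

# Shuffle the rows (destroying any time ordering) -- the estimate is unchanged.
perm = rng.permutation(n_id * t)
beta_shuffled = within_beta(ids[perm], x[perm], y[perm])
```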
46,798
Why does a fixed-effect OLS need unique time elements?
Indeed, plm will not allow you to run a FE model when there is a lower-level unit (i.e. you want household instead of individual, country instead of state, etc.). And indeed, there's nothing wrong about doing what you want. The trick in this case is just to make the time variable unique, crossing it with the sub-level unit: make a time-individual variable if you work at the household level, or a state-year variable if you work at the region level. See this similar post: https://stackoverflow.com/questions/43510067/fixed-effects-plm-package-r-multiple-observations-per-year-id/43573731
library(plm)
#> Loading required package: Formula
data("Produc", package = "plm")
Produc$year_state <- paste(Produc$year, Produc$state, sep="_")

## will throw warning
Produc_plm <- pdata.frame(Produc, index = c("region", "year"))
#> Warning in pdata.frame(Produc, index = c("region", "year")): duplicate couples (id-time) in resulting pdata.frame
#> to find out which, use e.g. table(index(your_pdataframe), useNA = "ifany")

## will throw error:
reg_plm_1 <- plm(gsp ~ pcap, data = Produc_plm)
#> Warning: non-unique values when setting 'row.names': '1-1970', '1-1971',
#> '1-1972', '1-1973', '1-1974', '1-1975', '1-1976', '1-1977', '1-1978',
#> '1-1979', '1-1980', '1-1981', '1-1982', '1-1983', '1-1984', '1-1985',
#> Error in `.rowNamesDF<-`(x, value = value): duplicate 'row.names' are not allowed
Use the trick instead:
Produc_plm2 <- pdata.frame(Produc, index = c("region", "year_state"))
reg_plm_2 <- plm(gsp ~ pcap, data = Produc_plm2)
Let's check with package lfe if we got it right:
library(lfe)
#> Loading required package: Matrix
#>
#> Attaching package: 'lfe'
#> The following object is masked from 'package:plm':
#>
#>     sargan
library(broom)
reg_lfe_1 <- felm(gsp ~ pcap | region, data = Produc)
all.equal(as.data.frame(tidy(reg_plm_2)), as.data.frame(tidy(reg_lfe_1)))
#> [1] TRUE
46,799
Finding the best path through the matrix in DTW
You have presented a matrix showing the pointwise distances computed using the squared Euclidean distance. Each element of this matrix will be referred to as cost[i,j]. Your target is the accumulated distance matrix. Each element of this matrix will be referred to as DTW[i,j], computed using this formula:
DTW[i, j] := cost[i,j] + minimum(DTW[i-1, j], DTW[i, j-1], DTW[i-1, j-1])
Two more boundary conditions are to be defined (together with DTW[0, 0] = 0):
DTW[i, 0] = infinite
DTW[0, j] = infinite
Then you can compute the first column and row such as:
4  33
3  24
5  20
1   4
3   4   4   5   9  18  34
    1   3   4   5   6   7
Then, step by step, you iterate through the columns from left to right and you reach the target: the accumulated cost matrix.
4  33   9   8   9  13  22
3  24   8   9  13  18  26
5  20   8   9   9  10  14
1   4   8  13  21  34  54
3   4   4   5   9  18  34
    1   3   4   5   6   7
The DTW distance is defined as the element DTW[n,m], thus the top-right element of the accumulated distance matrix, and it is the sum of the cost along the best possible warping path. Now you can use backtracking to identify the best possible warping path by iteratively choosing the minimum neighbour starting from the top right.
4           8   9  13  22
3       8
5       8
1   4
3   4
    1   3   4   5   6   7
Finally, it is important to mention that in most applications the warping path is subject to further constraints (e.g. windowing or slope constraints), which can prevent pathological warping.
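The recurrence and the backtracking step can be sketched in a few lines of Python (an illustrative implementation assuming squared Euclidean pointwise cost; it reproduces the example's distance of 22 and its warping path):

```python
import math

def dtw(a, b):
    """Accumulated-cost DTW with squared Euclidean pointwise cost.
    Returns (distance, warping path) as 0-based (i, j) index pairs."""
    n, m = len(a), len(b)
    # D has a padding row/column of infinity; D[0][0] = 0 starts the recursion.
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # Backtracking: from the corner, repeatedly step to the cheapest neighbour.
    i, j, path = n, m, [(n - 1, m - 1)]
    while (i, j) != (1, 1):
        steps = [(D[i - 1][j - 1], i - 1, j - 1),   # diagonal
                 (D[i - 1][j],     i - 1, j),       # down
                 (D[i][j - 1],     i,     j - 1)]   # left
        _, i, j = min(steps)
        path.append((i - 1, j - 1))
    return D[n][m], path[::-1]

# The two series from the example grids above
dist, path = dtw([1, 3, 4, 5, 6, 7], [3, 1, 5, 3, 4])
```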
46,800
Use of Wilcoxon test for non-normal data akin to Two One Sided T-test
The short answer is yes, you can do it, since the TOST methodology is not restricted to t-tests. The p-value is the larger of the two p-values. A quick Google search led me to a methodological article (Meier U. Nonparametric equivalence testing with respect to the median difference. Pharm Stat. 2010 Apr-Jun;9(2):142-50) describing this procedure in detail.
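As a sketch of the general idea (not Meier's exact procedure; the scipy-based implementation and the equivalence margin delta are illustrative choices), the TOST p-value is the larger of two one-sided Wilcoxon rank-sum p-values computed against shifted margins:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def tost_wilcoxon(x, y, delta):
    """Equivalence test for a location difference within (-delta, +delta),
    built from two one-sided Wilcoxon rank-sum (Mann-Whitney) tests."""
    # H01: difference <= -delta  vs  H11: difference > -delta
    p_lower = mannwhitneyu(x + delta, y, alternative="greater").pvalue
    # H02: difference >= +delta  vs  H12: difference < +delta
    p_upper = mannwhitneyu(x - delta, y, alternative="less").pvalue
    return max(p_lower, p_upper)   # TOST p-value: the larger of the two

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 500)
y = rng.normal(0.0, 1.0, 500)
p = tost_wilcoxon(x, y, delta=0.3)   # small p -> conclude equivalence
```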