Minimum dimension of sufficient statistics
Since sufficiency of fixed dimension only occurs in exponential families (the Darmois-Pitman-Koopman lemma), apart from distributions with varying support like the Uniform, let us consider an exponential family with parameter $\theta$ and density [against a fixed dominating measure] $$f_\theta(x)=\exp\left\{\sum_{i=1}^k a_i(\theta) T_i(x) -\psi(\theta)\right\}$$ Assume the functions $a_i$ are linearly independent on the maximal support of $\theta$ (namely, the range of $\theta$'s for which the density is integrable). The model associated with this density can then be reparameterised in $\alpha=(\alpha_1,\ldots,\alpha_k)$, which varies in (at least) the parameter space $$A_0=\left\{\alpha=(\alpha_1,\ldots,\alpha_k);\,\exists\theta\in\Theta,\ \alpha_i=a_i(\theta) \right\};$$ "at least", because the parameter space can be naturally expanded to its natural limit $$A=\left\{\alpha=(\alpha_1,\ldots,\alpha_k);\,\displaystyle{\int \exp\left\{\sum_{i=1}^k \alpha_i T_i(x)\right\}\text{d}\lambda(x)}<\infty \right\}\,.$$ Assuming further that the functions $T_i$ are linearly independent over the support $\cal X$ of $X$, the statistic $$T=(T_1(X),\ldots,T_k(X))$$ is sufficient, with density $$g_\alpha(t)=\exp\left\{\sum_{i=1}^k \alpha_i t_i-\mu(\alpha)\right\}$$ against the appropriate dominating measure. Therefore, on the natural parameter space of the exponential family (which may be larger than the original parameter space), the sufficient statistic is of the same dimension as the parameter. Even though the domain of variation of $T(x)$ can be constrained by non-linear relations, there is a sample size beyond which the dimensional constraint vanishes. A completely different approach, avoiding exponential families, is provided by Edward W. Barankin and Melvin Katz, Jr., "Sufficient Statistics of Minimal Dimension", Sankhyā: The Indian Journal of Statistics, Vol. 21, No. 3/4 (Aug., 1959), pp. 217-246. They show the following result, where $r$ is the dimension of the sufficient statistic $T$ and $\rho(x^0)$ is the (local) rank of the second derivative of the log-likelihood in $\theta$ and $x$ [the definition is a bit too intricate to be reproduced here].
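As a concrete illustration of the exponential-family setup above (an example added here, not part of the original answer), the Normal family $\mathcal N(\mu,\sigma^2)$ has $k=2$:

```latex
% Normal density rewritten in the exponential-family form
%   f_\theta(x) = exp{ a_1(\theta) T_1(x) + a_2(\theta) T_2(x) - \psi(\theta) }
f_{\mu,\sigma^2}(x)
  = \exp\left\{ \frac{\mu}{\sigma^2}\, x
      \;-\; \frac{1}{2\sigma^2}\, x^2
      \;-\; \left( \frac{\mu^2}{2\sigma^2} + \tfrac12\log(2\pi\sigma^2) \right)
    \right\},
\qquad
a_1(\theta) = \frac{\mu}{\sigma^2},\quad
a_2(\theta) = -\frac{1}{2\sigma^2},\quad
T(x) = (x,\, x^2).
```

For an i.i.d. sample of size $n$ the sufficient statistic is $T=(\sum_i x_i,\ \sum_i x_i^2)$, of dimension $2$, matching the dimension of $(\mu,\sigma^2)$ as the answer asserts.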
Expected number of times you spent in a state of an absorbing Markov chain, given the eventual absorbing state
Using the Wikipedia notation, the full matrix of transition probabilities is written as $$P = \left(\begin{array}{cc} Q & R \\ 0 & I \end{array}\right)$$ with the states organized so that the last states are all absorbing. In the OP's example from population genetics, the states would be organized as $1, 2, \ldots, M-1, 0, M$. If $\tau$ denotes the first time the Markov chain enters the set of absorbing states, the absorption probability for state $k$ given that the Markov chain starts in state $i$ is $$B_{ik} := P(X_{\tau} = k \mid X_0 = i).$$ Observe that if $X_n = j$ for a transient state $j$ then $\tau > n$, and by the time homogeneity of the Markov chain $$P(X_{\tau} = k \mid X_n = j) = B_{jk}.$$ With $N = (I-Q)^{-1} = \sum_{n=0}^{\infty} Q^n$ denoting the fundamental matrix, it follows from Wikipedia that $$B_{ik} = (NR)_{ik}.$$ The expected number of visits to state $j$ conditionally on the chain starting in $i$ and being absorbed in $k$ is $$\xi_{ik}(j) = E\left( \sum_{n=0}^{\infty} 1(X_n = j) \ \middle| \ X_0 = i, X_{\tau} = k\right) = \sum_{n=0}^{\infty} P(X_n = j \mid X_0 = i, X_{\tau} = k) .$$ Now the probabilities in the infinite sum can be rewritten as follows \begin{align*} P(X_n = j \mid X_0 = i, X_{\tau} = k) & = \frac{P(X_{\tau} = k, X_n = j \mid X_0 = i)}{P(X_{\tau} = k \mid X_0 = i)} \\ & = \frac{P(X_{\tau} = k \mid X_n = j, X_0 = i)P(X_n = j \mid X_0 = i)}{P(X_{\tau} = k \mid X_0 = i)} \\ & = \frac{P(X_{\tau} = k \mid X_n = j)}{P(X_{\tau} = k \mid X_0 = i)} (Q^n)_{ij} \\ & = \frac{B_{jk}}{B_{ik}}(Q^n)_{ij}. \end{align*} From this we obtain the formula \begin{align*} \xi_{ik}(j) & = \frac{B_{jk}}{B_{ik}}\sum_{n=0}^{\infty} (Q^n)_{ij} = \frac{B_{jk}}{B_{ik}}N_{ij}, \end{align*} where the absorption probabilities $B_{jk}$ and $B_{ik}$ can be computed from $N$ and $R$ using the formula above.
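The formula $\xi_{ik}(j) = \frac{B_{jk}}{B_{ik}} N_{ij}$ can be sketched numerically. The chain below is an illustrative choice (not from the original post): a symmetric random walk on $\{0,1,2,3\}$, absorbing at $0$ and $3$, with transient states $1$ and $2$.

```python
import numpy as np

# Symmetric walk on {0, 1, 2, 3}, absorbing at 0 and 3.
# Transient block Q (states 1, 2) and absorbing block R (columns: 0, 3).
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])
R = np.array([[0.5, 0.0],
              [0.0, 0.5]])

# Fundamental matrix N = (I - Q)^{-1} = sum of Q^n over n >= 0.
N = np.linalg.inv(np.eye(2) - Q)
B = N @ R  # absorption probabilities B_{ik}

def xi(i, k, j):
    """Expected visits to transient state j, starting at i, given absorption in k."""
    return B[j, k] / B[i, k] * N[i, j]

# Starting from state 1 (index 0), conditional on absorption at 3 (column 1):
print(xi(0, 1, 0), xi(0, 1, 1))  # both equal 4/3
```

For this chain $B$ reproduces the classical gambler's-ruin answer ($2/3$ vs $1/3$ from state $1$), and conditioning on absorption at $3$ inflates the expected visits to state $2$ from $N_{12}=2/3$ to $4/3$.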
Are the balls drawn randomly (independently of the number of balls existing in their colours)?
Because your proposed test has no theoretical foundation and does not account for the correlations among the counts, it would not be a good use of our time to evaluate it. Instead, let's develop some tests that will work. The Likelihood Ratio test is guaranteed to work well when all counts are relatively large. So will the chi-squared test, provided only a small proportion of all balls in the urn are taken in the sample. (The usual rule of thumb is to be cautious when the sample exceeds ten percent of the total.) In other cases, simulating from the null distribution is effective. The likelihood of observing counts $x_1,\ldots, x_r$ in a sample from an urn with $N_1,\ldots, N_r$ balls of each color is $$\mathcal{L}(x;N) = \binom{N_1}{x_1}\binom{N_2}{x_2}\cdots \binom{N_r}{x_r}\ /\ \binom{N_1+\cdots+N_r}{x_1+\cdots+x_r}.$$ This is best expressed in terms of its deviance $D=-2\log\mathcal L$ because, asymptotically, the deviance has a $\chi^2$ distribution with $r-1$ degrees of freedom. By sampling a few thousand times (with the computer) you can estimate the distribution of $D.$ The p-value of the observed values is the area of the right tail determined by the deviance of the observations. As an example, consider an urn with $r=8$ colors in the quantities $(N_i) = (34,45,41,35,49,47,51,42).$ I computed the null deviance in ten thousand samples of size $167$ (half the balls are in each sample), shown in the figure at the left. I also computed the deviance in another ten thousand samples where the colors were selected with different probabilities, ranging from 20% (for color $1$) down to just 6% (for color $8$). The latter deviances tend to be large. Indeed, approximately 96% of them exceed the 95th percentile of the null deviances. In this sense, the power of this test, when conducted at the $\alpha = 100\% - 95\% = 5\%$ level, is 96%. We can play the same game with the usual $\chi^2$ test. 
If this test were directly applicable, the null distribution of its p-values would be uniform. The null distribution is skewed, though, with most of the p-values very large: see the left graphic in the next plot. But, upon examining this simulation, we could declare a chi-squared test result to be significant at the $\alpha$ level whenever its p-value is less than the $\alpha$ quantile of this null distribution. That quantile is near $0.4$ (rather than the expected $0.05$), shown with the shaded bars. The right-hand graphic similarly displays the distribution of chi-squared test p-values under the alternative hypothesis. Its "nominal power" is the rate at which the reported (nominal) p-value is less than $\alpha,$ shown with the area of the red bar. That is only 48.2%, far less than the 96% achieved by the Likelihood Ratio (LR) test. However, when we use the $\alpha$ quantile of the null distribution as our threshold, the null hypothesis is now rejected in 95% of the simulated data. The power, 95%, is essentially the same as that of the LR test. What these simulations demonstrate, then, is this: it is invalid to apply the standard chi-squared test or LR test unless the sample is a small portion of the urn and the sample counts are fairly large. The skewed null distribution in the second figure shows what goes wrong. Nevertheless, both of these tests can be used provided the p-value is computed using the actual null distribution (as estimated through simulation) rather than using the standard formulas (which rely on asymptotically large samples of urns with huge populations). The R code to perform these computations and display these figures follows.

#
# Multivariate hypergeometric distribution.
# Draw `n` balls without replacement from an urn with length(N) colors, each
# appearing N[i] times.
#
# Returns a vector of counts of the colors.
#
rmhyper <- function(n, N, p) {
  if (missing(p)) p <- rep(1/sum(N), length(N))
  prob <- rep(p, N)
  prob <- prob / sum(prob)
  x <- sample(rep(seq_along(N), N), n, prob=prob)
  tabulate(x, length(N))
}
#
# Returns the likelihood of observing `k` in a hypergeometric draw specified
# by `N`.
#
dmhyper <- function(k, N, log.p=TRUE) {
  q <- sum(lchoose(N, k)) - lchoose(sum(N), sum(k))
  if (!isTRUE(log.p)) q <- exp(q)
  return(q)
}
#
# Perform a chi-squared test.
#
mhyper.test <- function(k, N, ...) {
  chisq.test(k, p=N / sum(N), ...)
}
#
# Simulations.
#
alpha <- 0.05
n.sim <- 1e4
set.seed(17)
N <- rpois(8, 40) + 1           # Determine the population randomly
p <- rgamma(length(N), 10/4)    # Determine the alternative hypothesis randomly
p <- rev(sort(p / sum(p)))
n <- ceiling(sum(N)*0.5)
#
# Display the sampling probabilities.
#
plot(p, type="h", col=rainbow(length(N), .9, .8), lwd=3, ylim=c(0, max(p)),
     xlab="Color", main="Probabilities")
#
# The chi-squared test p-values are ok when the sample is a small fraction
# of the population.
#
p.values.null <- replicate(n.sim,
    mhyper.test(rmhyper(n, N), N, simulate.p.value=FALSE)$p.value)
p.values.alt <- replicate(n.sim,
    mhyper.test(rmhyper(n, N, p), N, simulate.p.value=FALSE)$p.value)
power <- round(100*mean(p.values.alt <= alpha), 1)
power.alt <- round(100*mean(p.values.alt <= quantile(p.values.null, alpha)))
k <- round(20 * quantile(p.values.null, alpha)) - 1
b <- seq(0, 1, by=0.05)
par(mfrow=c(1,2))
h <- hist(p.values.null, freq=FALSE, breaks=b, xlab="p",
          col=c("#d08080", rep("#e0e0e0", k), rep("White", 20-k-1)),
          main="Histogram of Chi-square P-values\nNull Distribution")
hist(p.values.alt, freq=FALSE, breaks=b, xlab="p", ylim=c(0, max(h$density)),
     col=c("#d08080", rep("#e0e0e0", k), rep("White", 20-k-1)),
     sub=bquote(paste("Nominal power = ", .(power),
                      "%; Simulation power = ", .(power.alt), "%")),
     main="Histogram of Chi-square P-values\nAlternative Distribution")
q.null <- -2 * replicate(n.sim, dmhyper(rmhyper(n, N), N))
q <- -2 * replicate(n.sim, dmhyper(rmhyper(n, N, p), N))
xlim <- range(c(q, q.null))
h <- hist(q.null, xlim=xlim, freq=FALSE, breaks=50, xlab=expression(-2~~log(p)),
          col="#f0f0f0", main="Null distribution of deviance")
abline(v = quantile(q.null, 1-alpha), col="Red", lwd=2)
power <- signif(mean(q >= quantile(q.null, 1-alpha)), 2)
hist(q, xlim=xlim, freq=FALSE, breaks=50, xlab=expression(-2~~log(p)),
     col="#f0f0f0", ylim=c(0, max(h$density)),
     main="Alternative distribution of deviance",
     sub=bquote(paste("Power = ", .(100*power), "%")))
abline(v = quantile(q.null, 1-alpha), col="Red", lwd=2)
par(mfrow=c(1,1))
Enlarging a random sample
Correction: I withdraw the previous answer (below) as wrong. @Glen_b is correct: if the second sample is not based on any results in the first, then the overall probability of selection is $\dfrac{n}{N}$, the same as for a SRS (proof below). Further, you can treat the result as a SRS even though the size was not fixed in advance. However, you stated that you took the second sample because you required more precision. If you made the decision solely because standard errors for estimates were larger than expected, then bias in the final estimated standard errors will be small (Hansen, Hurwitz, and Madow 1953, pp. 77-80), provided that the initial SEs themselves were precisely estimated. In that case, I would go ahead and treat the observations as having come from a SRS. If, however, you took the second sample to reduce a borderline p-value to a smaller value, then serious bias is possible. Proof that the probability of selection is $n/N$: Let $p_1 =\dfrac{n_1}{N}$ be the probability of selection in the first sample, and let $p_2$ be the conditional probability of selection in the second sample: $p_2 =\dfrac{n_2}{N-n_1}$. Let $n = n_1 + n_2$ be the final sample size. Then the probability of selection of an observation into the final sample is: \begin{align} p & = p_1 + (1-p_1)\thinspace p_2 \\ & = \frac{n_1}{N} + (1-\frac{n_1}{N}) \frac{n_2}{N-n_1} \\ & = \frac{n_1}{N} + \frac{N-n_1}{N}\frac{n_2}{N-n_1} \\ & = \frac{n_1 + n_2}{N} \\ & = \frac{n}{N} \end{align} Reference: Hansen, MH, WN Hurwitz, and W Madow. 1953. Sample Survey Methods and Theory. Volume I: Methods and Applications. New York: Wiley. Original Answer: This kind of problem frequently arises in practice, and the solution is similar. As @David Z implies, you must investigate the possibility of systematic differences between the first and second surveys and samples. The resulting combined sample can indeed be considered random, but not simple random. The probabilities of selection differ between the two samples, so the analysis will have to be weighted. You compute the weights as follows: Let the number in the population be $N$. Then the probability of selection for the original sample is: $$ f_1 = \frac{107}{N} $$ To be selected in the second sample, one must first not be selected in the first sample, then be selected in the second: $$ f_2 = (1 - f_1) \frac{50}{N - 107} $$ The sample weights will be $W = \dfrac{1}{f_1}$ in sample 1 and $W = \dfrac{1}{f_2}$ in sample 2. To illustrate, suppose $N = 1020$; then the weights in the first and second samples will be $9.5327$ and $20.4$, respectively. These calculations will require modification if there was non-response in either sample.
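The weight calculation can be sketched as follows, using the illustrative figures from the answer ($N = 1020$, $n_1 = 107$, $n_2 = 50$):

```python
# Selection probabilities and sampling weights for the two-phase sample.
N, n1, n2 = 1020, 107, 50

f1 = n1 / N                    # selection probability, first sample
f2 = (1 - f1) * n2 / (N - n1)  # must miss sample 1, then be picked in sample 2
W1, W2 = 1 / f1, 1 / f2        # sampling weights

print(round(W1, 4), round(W2, 1))  # 9.5327 20.4
```

Note that $f_2$ simplifies algebraically to $n_2/N$, which is consistent with the correction's proof that the overall selection probability is $n/N$.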
Enlarging a random sample
It depends on whether the assumption that the remaining 900+ units after the first sampling process have the same distribution as the original 1000+ can be maintained. If possible, you can perform a test on the mean or on the heteroscedasticity of the parameter(s) of interest.
Enlarging a random sample
I am not quite sure how random this sample could be considered. I don't have great experience with statistics, but the method you suggest doesn't sound very good to me. What I would do is put the sample I took back into the entire population, reorder them, and take a random sample again. Or I would just take a new, larger sample from the population and compare my results with the sample from the first try. It might be a good idea to describe the limitations you have, such as why sampling is expensive for you, so other people can give a more elaborate answer.
LRT for one-sided Bernoulli parameter
You can take the derivative of $\log(\lambda)$ in order to get a rejection region. This is problem 8.3 in (1). (1) Casella, G., and Berger, R. L. (2002). Statistical inference. Duxbury Press.
42,408
LRT for one-sided Bernoulli parameter
Let $k=\sum_ix_i$. The likelihood $L(\theta|X)=\theta^k(1-\theta)^{n-k}$ is unimodal and maximized at $\theta=\frac{k}{n}$. Hence, the likelihood ratio $\lambda(X)$ is given by $$\lambda(X)=\frac{\theta_0^k(1-\theta_0)^{n-k}}{\left(\frac{k}{n}\right)^k\left(1-\frac{k}{n}\right)^{n-k}}$$ when $\theta_0<\frac{k}{n}$ and by $\lambda(X)=1$ otherwise. To deal with the case where $\theta_0<\frac{k}{n}$, it is sufficient to show that $\log(\lambda(X))<c$ is equivalent to $k>b$ for some $b$. Note that $\log(\lambda(X))$ is actually a function of $k$, so we just need to show that $\log(\lambda(X))$ is decreasing as a function of $k$, i.e. that the derivative with respect to $k$ is negative. One may compute (via a fairly messy but straightforward computation) $$\frac{d}{dk}\log(\lambda(X))=\log\theta_0-\log(1-\theta_0)+\log(n-k)-\log k.$$ Then, since the logit function $p\mapsto\log\frac{p}{1-p}$ is increasing and $\theta_0<\frac{k}{n}$, we have $\log\theta_0-\log(1-\theta_0)<\log\frac{k}{n}-\log\left(1-\frac{k}{n}\right)=\log k-\log(n-k)$, so the above expression is always less than zero, as desired.
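A quick numerical check of the sign claim (the grid of $n$, $k$, $\theta_0$ values below is arbitrary, purely for illustration):

```python
import math

# d/dk log(lambda) = log(theta0) - log(1 - theta0) + log(n - k) - log(k)
def dlog_lambda_dk(k, n, theta0):
    return (math.log(theta0) - math.log(1 - theta0)
            + math.log(n - k) - math.log(k))

# Verify the derivative is negative whenever theta0 < k/n
# (k < n keeps the log terms defined).
n = 50
checks = [dlog_lambda_dk(k, n, theta0) < 0
          for theta0 in (0.1, 0.3, 0.5, 0.7)
          for k in range(1, n)
          if theta0 < k / n]
```

Every entry of `checks` should be `True`, matching the analytic argument.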
42,409
A question about the effective sample size in life tables
Observations censored within the interval under consideration are not at risk of death for the whole period. They don't count as a whole person-period of exposure, but only the fraction to which they were exposed. Under the uniformity assumption, on average they're exposed for half a period. So on average each of the $c_j$ censored people will lose half a person-period of exposure. The text is a little awkwardly worded, but it's treating the $c_j$ people unexposed for on average half the period as equivalent to half of the $c_j$ censored people being unexposed to the risk of death in the study (equivalently, not in the study) -- it's the same number of person-periods of exposure. In the diagram below, censored observations are marked with an "o" when censored, and uncensored observations that died are marked with an "x" at death. The uncensored ones count just as they would if there were no censoring at all, but the censored ones have reduced exposure: I've split the censored values off separately and then sorted them by exposure. If you took the censored values with shorter exposure times, you could (on average) use them to "fill up" the exposure time of the ones with longer exposure, leaving half the censored lives with full exposure and half with none. That is, you lose $c_j/2$ person-periods of exposure on average, but you could treat that as equivalent to simply losing half the censored people at the start of the period (and the other half being exposed for the entire period), reducing the count by $c_j/2$.
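As a tiny worked example of that adjustment (the counts below are hypothetical, just to show the arithmetic): with $n_j$ at risk at the start of the interval, $d_j$ deaths, and $c_j$ censored, the effective sample size is $n_j - c_j/2$:

```python
# Hypothetical interval counts: n_j at risk at the start, d_j deaths,
# c_j censored uniformly within the interval.
n_j, d_j, c_j = 100, 10, 20

# Each censored subject contributes about half a person-period of exposure,
# i.e. the same total exposure as removing c_j/2 subjects entirely.
effective_n = n_j - c_j / 2      # 100 - 10 = 90 effective exposures
q_j = d_j / effective_n          # conditional probability of death in the interval
```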
42,410
A question about the effective sample size in life tables
Thank you for this discussion; I'm also doubtful about this denominator correction. When you estimate a rate you have to consider the total time spent by subjects during a given period, so one usually assumes that censored (as well as dying) people have lived half of the period on average. But this is not the case when you are estimating a conditional probability, i.e. the probability of dying during the period given that a subject is still alive at the beginning of the period. In that case the denominator is the number of subjects entering the period. Why correct it by half the number censored in this case?
42,411
Making sense of the first difference regression model
Actually, the two procedures are the same. The difference between $$ \Delta Y_t = B\Delta X_t + \Delta \epsilon_t $$ and $$ \Delta Y_t = B\Delta X_t + v_t $$ is that you can estimate the second but not the first, because you don't observe $\epsilon_t$. So the first equation is rather a theoretical model, whilst the second is the estimating equation that you would use in practice. If you wanted to directly subtract $Y_{t-1}$ from both sides manually, this could only be done if you observed the true errors. You will notice that $v_t$ is an estimate of $\Delta\epsilon_t$. Re-arranging the theoretical model and the regression equation: if $\Delta Y_t - B\Delta X_t = \Delta \epsilon_t$ and $\Delta Y_t - B\Delta X_t = v_t$, then it must be true that $\Delta \epsilon_t = v_t$. Consider a simple example with two time periods and $B=0.3$ being constant over time. $$ \begin{array}{c|lc|r} time & Y_t & X_t & Y_t - BX_t =v_t \\ \hline 1 & 10 & 17 & \\ 2 & 13 & 21 & \\ \hline \Delta & 3 & 4 & 3 - 0.3\cdot 4 = 1.8 \end{array} $$ Suppose that $v_t$ were a consistent estimate of $\epsilon_t$ in all periods (which is true here because we have deterministically specified the data generating process by fixing $B$); then $\widehat{v}_t = \Delta \epsilon_t = 1.8$ is the residual from our second regression as an estimate of the error of the first equation.
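The two-period table can be reproduced in a few lines (numbers taken directly from the example, with $B = 0.3$ treated as known):

```python
# Two-period example; B = 0.3 is assumed known and constant over time.
Y = [10, 13]
X = [17, 21]
B = 0.3

dY = Y[1] - Y[0]      # 3
dX = X[1] - X[0]      # 4
v = dY - B * dX       # residual of the differenced equation, approx. 1.8
```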
42,412
Markov decision process in R for a song suggestion software?
This question is months old now but it's still interesting. To me this sounds like a massive contingency table or sparse-data, tensor problem. This doesn't make any of the MDP or reinforcement learning issues moot; it just realigns the statistical framework within which they are modeled. The decision or dependent variable is whether or not a song from a potentially very large playlist gets chosen and, once chosen, whether it gets rejected or played. Correct me if I'm wrong, but can't this be treated with effect coding: 0, 1 for yes/no -- is it played? -- and -1 if it's rejected? Based on the question, I don't see any reason to treat this as a sequential Markov chain or longitudinal time series, particularly given the random nature of the draws from the playlist, but I can be convinced otherwise. Exceptions to this rule could include consideration of whether the algorithm is "learning" song preferences as a function, for instance, of genre. Sparsity would be a function of the interval of the time frame over which the choices are aggregated, as well as the size of the playlist. If the interval is too short or the playlist is too large, sparsity is the inevitable outcome. The state of the art for tensor modeling is probably David Dunson's papers, e.g., Bayesian Tensor Regression, but there are lots of people with plenty of papers working in this field (see DD's papers on his Duke website for reviews).
42,413
Markov decision process in R for a song suggestion software?
The problem can be modeled as a Markov decision problem. I have tried to fit the problem into the MDP framework; let me know if this is of any help. Assuming that there exists a method to select a song within a playlist 'cluster', the states would act as such clusters for the MDP. Defining a transition probability matrix between clusters, actions by the MDP would be updates to this matrix. Here, app user inputs would act as external disturbances in the model. The reward/cost model would depend on these external disturbances. In summary -- States: playlist clusters. Actions: updates to the transition probability matrix. Disturbances: app user inputs accepting/rejecting the song. Cost function: stochastic output on whether a user would accept/reject the next few suggestions. An extension would be to simulate user behavior as a stochastic process, and infer the parameters based on actual data.
42,414
Markov decision process in R for a song suggestion software?
It is right that there is a reinforcement learning problem here. The negative reinforcement would be when the person skips the song, and positive when he/she doesn't. The action in this case would be choosing the song, and you wanted the state to be a playlist. I don't think this is a good idea: first, you'd have a varying number of songs (thus actions) per playlist (state), which doesn't make much sense. I would generalise it a bit -- state: for example, the person's mood or music preference; action: choose a type of music or artist, for example; reward: negative when skipped, otherwise zero or positive. This way the method is more generic, and the metadata (music type, artist, ...) can easily be extracted from an MP3 file, for example. I have not used any R package with MDPs, but this link seems interesting: Reinforcement Learning in R: Markov Decision Process (MDP) and Value Iteration
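For the value-iteration part, here is a minimal sketch on a made-up two-state, two-action MDP (the states, rewards, and transition probabilities are purely illustrative, not derived from any real listening data):

```python
import numpy as np

# Hypothetical toy MDP: 2 states (listener mood "calm" / "energetic"),
# 2 actions (play "slow" / "fast" music). All numbers are invented.
# P[a][s][s'] = transition probability; R[a][s] = expected reward
# (negative would mean the listener tends to skip).
P = np.array([
    [[0.9, 0.1],     # action 0 ("slow") from each state
     [0.5, 0.5]],
    [[0.3, 0.7],     # action 1 ("fast") from each state
     [0.1, 0.9]],
])
R = np.array([
    [ 1.0, -0.5],    # slow music: liked when calm, often skipped when energetic
    [-0.5,  1.0],    # fast music: the reverse
])
gamma = 0.9

V = np.zeros(2)
for _ in range(500):                  # value iteration
    Q = R + gamma * P @ V             # Q[a, s] = action values
    V_new = Q.max(axis=0)             # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=0)             # best action for each state
```

With these made-up numbers the optimal policy matches the music to the mood (slow when calm, fast when energetic).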
42,415
Linear regression scaling independent variables
Using logs of the inputs will make your model nonlinear, but that may be what you want. With financial data, the natural log usually makes models more predictable. Typically, if you want the coefficients to be on the same scale, you normalize the inputs by: x = (x - mean(x)) / stdev(x). That being said, it actually doesn't really matter if you don't rescale the inputs, as long as the software displays all of the digits. However, you won't be able to judge the influence of a particular input by its coefficient magnitude, due to the differing scales.
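The normalization formula above, as a quick sketch with made-up numbers:

```python
import numpy as np

# Hypothetical predictor on a large scale (e.g. dollar amounts).
x = np.array([120000.0, 95000.0, 143000.0, 101000.0, 99000.0])

# z-score normalization: subtract the mean, divide by the standard deviation.
z = (x - x.mean()) / x.std(ddof=1)   # ddof=1 -> sample standard deviation
```

After this transformation `z` has mean 0 and sample standard deviation 1, so coefficients of different normalized inputs sit on a comparable scale.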
42,416
Linear regression scaling independent variables
When the values of the independent variable $x$ are very high compared to the dependent $y$, the $\beta$-value will be very small and more exposed to errors such as sampling error and rounding error. Yes, you really should rebase $x$. One good way is to take logs: $$\ln{\hat y}=\hat\beta\ln{x}+\hat\alpha$$ $$\ln{y}=\beta\ln{x}+\alpha+\epsilon\,,\qquad \epsilon~\text{is the error}$$ Note: this form is still linear in the regression parameters associated with the covariates. Nonlinear regression
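As a sanity check of the log-log approach (the data here are simulated under assumed values $\beta = 0.5$ and $\alpha = 1$, purely for illustration), ordinary least squares on the logged variables recovers the parameters even though $x$ is several orders of magnitude larger than $y$:

```python
import numpy as np

# Simulate from ln y = beta * ln x + alpha + eps with beta=0.5, alpha=1.
rng = np.random.default_rng(1)
x = rng.uniform(1e3, 1e6, size=200)           # x vastly larger than y
eps = rng.normal(scale=0.05, size=200)
y = np.exp(1.0 + 0.5 * np.log(x) + eps)

# OLS on the logged variables; polyfit returns (slope, intercept).
beta_hat, alpha_hat = np.polyfit(np.log(x), np.log(y), 1)
```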
42,417
How do I do a classification problem with autoencoders (AEs)
Yes, once you've trained the network as an autoencoder, if you're only interested in classification you can just ignore the decoding part of the network and just feed the output of the deepest hidden layer into the classifier layer. This page might be informative.
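A minimal sketch of that workflow (the layer sizes, toy data, and the nearest-centroid stand-in for a classifier layer are all made up for illustration, not a recommended architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated blobs in 4-D; labels used only by the classifier.
X0 = rng.normal(loc=-1.0, scale=0.3, size=(50, 4))
X1 = rng.normal(loc=+1.0, scale=0.3, size=(50, 4))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

n_in, n_hid = 4, 2
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hid, n_in))   # decoder weights
lr = 0.05

losses = []
for _ in range(200):
    H = np.tanh(X @ W1)              # hidden code (the part we keep)
    Xhat = H @ W2                    # linear reconstruction
    err = Xhat - X
    losses.append(np.mean(err ** 2))
    # Backprop of the mean-squared reconstruction error.
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

# Discard the decoder (W2); the deepest hidden code feeds the classifier.
H = np.tanh(X @ W1)
# A trivial stand-in classifier: nearest class centroid in code space.
c0, c1 = H[y == 0].mean(axis=0), H[y == 1].mean(axis=0)
pred = (np.linalg.norm(H - c1, axis=1) < np.linalg.norm(H - c0, axis=1)).astype(int)
accuracy = np.mean(pred == y)
```

In practice you would replace the nearest-centroid step with a proper classifier layer (e.g. softmax) and fine-tune, but the point is the same: only the encoder is reused.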
42,418
sample from a von Mises distribution by transforming a RV?
Simulation for the von Mises distribution is generally done via some form of rejection sampling. There is no method available to transform a random variate from a different distribution to a von Mises random variate in the way you describe. A natural way would have been some form of inversion sampling, but the CDF of the von Mises distribution is not analytic, so this may not be possible. Two distributions similar to the von Mises that may be of interest to you are the wrapped normal and wrapped Cauchy distributions. For the wrapped normal, we can simply take $X \sim N(\mu, \sigma^2),$ then $\Theta = X ~ \text{[mod} ~ 2\pi]$ to have $\Theta \sim WN(\mu, \rho),$ where $\rho = e^{-\frac{1}{2} \sigma^{2}}.$ For the wrapped Cauchy with parameters $\mu$ and $\rho$, draw a random variate $u$ from $\text{Uniform}(0, 2 \pi),$ then $$ V = \cos(u)$$ $$ c = 2\rho / (1+\rho^2)$$ $$ \theta = \cos^{-1}\frac{V + c}{1 + cV} + \mu ~~ \text{[mod} ~ 2\pi].$$ Then $\theta \sim WC(\mu, \rho)$. This procedure is due to Fisher (1995).
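Both wrapped distributions can also be sampled by literally wrapping the underlying variate mod $2\pi$; a sketch (the parameter choices in the test are arbitrary, and the wrapped Cauchy is drawn here by wrapping an ordinary Cauchy with scale $\gamma = -\ln\rho$, which yields the same distribution as the inversion recipe above):

```python
import math
import random

random.seed(42)

def wrapped_normal(mu, rho, n):
    """Theta = X mod 2*pi with X ~ N(mu, sigma^2), where rho = exp(-sigma^2/2)."""
    sigma = math.sqrt(-2.0 * math.log(rho))
    return [random.gauss(mu, sigma) % (2 * math.pi) for _ in range(n)]

def wrapped_cauchy(mu, rho, n):
    """Wrap a Cauchy(mu, gamma) variate mod 2*pi, with rho = exp(-gamma).
    The Cauchy draw uses inverse-CDF sampling: mu + gamma*tan(pi*(u - 1/2))."""
    gamma = -math.log(rho)
    return [(mu + gamma * math.tan(math.pi * (random.random() - 0.5))) % (2 * math.pi)
            for _ in range(n)]

def circular_mean(thetas):
    """Mean direction via the resultant vector."""
    s = sum(math.sin(t) for t in thetas)
    c = sum(math.cos(t) for t in thetas)
    return math.atan2(s, c) % (2 * math.pi)
```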
42,419
What does cluster size mean (in context of k-means)?
The area of the original (occupied) data space, and to a lesser extent the area of the clusters' convex hull, volume, spread. Not the cardinality. Counterexample in 1 dimension, k=2: 1,2,3,4,5,6,7,8,9,10,100. Both k-means clusters will cover a "similar" area of the data range (out of 1..100, values in 1..52.75 will be assigned to cluster 1 and values in 52.75..100 to cluster 2), but the cardinalities are 10 to 1. Note that the "spread" isn't always that similar: the last cluster has spread zero. You could also use 1,1,1,1,1,1,1,1,1,10 as an example. But this is just a rule of thumb, not a guarantee. Remember that k-means tries to minimize the SSQ. You usually do not improve the SSQ by distributing the points more evenly; the minimum SSQ will usually benefit from minimizing the SSQ of each part, even if that means making the cardinalities different.
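The 1-D counterexample is easy to verify by brute force; since an optimal 2-cluster SSQ partition in 1-D is a split of the sorted data, checking every split point finds the global minimum:

```python
# Brute-force the optimal 2-cluster SSQ split of the 1-D example.
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 100]

def ssq(xs):
    """Sum of squared deviations from the cluster mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

# For sorted 1-D data, the optimal 2-means partition is a contiguous split,
# so trying every split point suffices.
best = min(range(1, len(data)),
           key=lambda i: ssq(data[:i]) + ssq(data[i:]))
left, right = data[:best], data[best:]

# Decision boundary = midpoint of the two centroids (5.5 and 100 -> 52.75).
boundary = (sum(left) / len(left) + sum(right) / len(right)) / 2
```

The optimal split puts 1..10 in one cluster and 100 alone in the other: similar coverage of the range, cardinalities 10 to 1.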
42,420
Is there any package in R that's commonly used for semi-supervised learning? [closed]
You can try for example the upclass package http://cran.r-project.org/web/packages/upclass/index.html There you will find the standard pdf reference and a vignette explaining it all along with examples. I think the function upclassify() would match your requirements.
42,421
Is there any package in R that's commonly used for semi-supervised learning? [closed]
Probably irrelevant now, but it might make the answer more complete to also mention the spa package as well. It uses a graph-based technique to learn a model. Basically it uses information from both the data point features and how similar data points are to each other. If you have a distance matrix for the data points, it might work well. This article explains it with a bit more detail and some examples. You can find the package binary files here: https://cran.r-project.org/web/packages/spa/index.html edited to provide some additional info about package.
42,422
How can increasing the dimension increase the variance without increasing the bias in kNN?
First of all, the bias of a classifier is the discrepancy between its averaged estimated and true function, whereas the variance of a classifier is the expected divergence of the estimated prediction function from its average value (i.e. how dependent the classifier is on the random sampling made in the training set). Hence, the presence of bias indicates something basically wrong with the model, whereas variance is also bad, but a model with high variance could at least predict well on average. The key to understanding the examples generating Figures 2.7 and 2.8 is: The variance is due to the sampling variance of the 1-nearest neighbor. In low dimensions and with $N = 1000$, the nearest neighbor is very close to $0$, and so both the bias and variance are small. As the dimension $p$ increases, the nearest neighbor tends to stray further from the target point, and both bias and variance are incurred. By $p = 10$, for more than $99\%$ of the samples the nearest neighbor is a distance greater than $0.5$ from the origin. Recall that the target function of the example generating Figure 2.7 depends on all $p$ variables, and hence the MSE error is largely due to the bias. Conversely, in Figure 2.8 the target function of the example depends only on $1$ variable, and thus the variance dominates. More generally, the variance dominates whenever the target function effectively depends on only a few of the input dimensions. I hope this helps.
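The claim about the nearest neighbor straying from the target point can be checked with a small Monte Carlo sketch (my own illustration, not from the book): with $N=1000$ uniform samples in $[-1,1]^p$, the median distance from the origin to its nearest neighbor jumps from essentially $0$ at $p=1$ to beyond $0.5$ at $p=10$.

```python
import random
import statistics

def nn_dist_to_origin(n_points, p, rng):
    """Distance from the origin to the nearest of n_points uniform
    samples in the cube [-1, 1]^p."""
    best = float("inf")
    for _ in range(n_points):
        d2 = sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(p))
        best = min(best, d2)
    return best ** 0.5

rng = random.Random(0)
trials = 200
med = {p: statistics.median(nn_dist_to_origin(1000, p, rng)
                            for _ in range(trials)) for p in (1, 10)}
print(med[1], med[10])  # the nearest neighbor strays far from 0 as p grows
```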
42,423
How can increasing the dimension increase the variance without increasing the bias in kNN?
Well, I don't know whether it's appropriate to answer a question asked by myself... But I think I have a relatively intuitive answer and I just want to share it. First let me add the true function in Figure 2.7 for comparison: $$Y=f_1(X)=e^{-8||X||^2}$$ and the one in Figure 2.8 is $$Y=f_2(X)=\frac12(X_1+1)^3$$ As @stochazesthai said, the true function of 2.7 depends on all $p$ components and that of 2.8 on only $1$ component. On the other hand, the 1-NN algorithm involves the ordinary Euclidean norm by default, so the distance is measured over all components. Another thing to mention is that the expectation of the estimated target $\hat{y}$ is taken over the sample distribution. Now consider the input $X$. Given any distance $d$ to the origin, when $p=1$ there are only $2$ choices of the value of $X$, namely $d$ and $-d$. When $p$ increases, for any fixed distance the choices of $X$ increase dramatically, and the value of the first component $X_1$ can vary more and more freely. Then consider the 1-NN. When $p$ increases, as @stochazesthai quoted, the nearest neighbor of the origin will be far away with high probability, which means that the smallest $||X||$ will be large. Hence for $f_1$ (where $||X||$ is involved), $E(\hat{y}_0)$ will drop far below the true value $f_1(0)=1$ as $p$ increases, so the bias increases significantly; but at the same time $\hat{y}_0$ will be small with high probability, so the variance will not increase too much. On the other hand, for $f_2$ (where only $X_1$ is involved), when $p$ increases, as I mentioned above, $X_1$ can oscillate more and more dramatically at the same distance to the origin. So the increase in variance will dominate, but $E_{\mathcal{T}}(\hat{y}_0)$ itself will not change a lot, so the bias remains roughly unchanged in comparison with the variance. Hopefully this is kind of helpful.
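This intuition for $f_2$ can be checked with a quick Monte Carlo sketch of my own (not from the book): simulate the 1-NN prediction of $f_2$ at the origin many times; at $p=10$ the variance of the prediction explodes while its mean stays near $f_2(0)=\frac12$.

```python
import random
import statistics

def f2(x1):
    """The Figure 2.8 target, which depends only on the first coordinate."""
    return 0.5 * (x1 + 1.0) ** 3

def one_nn_at_origin(p, n, rng):
    """1-NN prediction of f2 at the origin from n uniform points in [-1,1]^p."""
    best_d2, best_x1 = float("inf"), 0.0
    for _ in range(n):
        x = [rng.uniform(-1.0, 1.0) for _ in range(p)]
        d2 = sum(v * v for v in x)
        if d2 < best_d2:
            best_d2, best_x1 = d2, x[0]
    return f2(best_x1)

rng = random.Random(1)
results = {}
for p in (1, 10):
    preds = [one_nn_at_origin(p, 1000, rng) for _ in range(200)]
    results[p] = (statistics.mean(preds), statistics.variance(preds))
    print(p, results[p])
# at p = 10 the variance explodes while the mean stays near f2(0) = 0.5
```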
42,424
Motivation for gradient descent method over canonical method (for OLS/MLE) for simple linear regression?
For ordinary linear regression, maximum likelihood and least squares are the same, i.e., they give the same answer (the maximum likelihood solution is the least squares solution; if you derive the so-called "normal equations" you'll see this, and see also the book The Elements of Statistical Learning, which discusses this). But this is separate from how you find that solution. Gradient descent is only one method to find the solution, and it's actually quite a bad one at that (slow to converge). For example, Newton's method is much better for OLS (using various numerical algorithms to avoid inverting the Hessian directly). But you are right in the sense that for very big problems, gradient descent becomes more useful, because 2nd-order methods like Newton's method can be computationally very expensive (again, there are approximations for that too). I don't think EM is relevant for OLS; it can be useful for optimizing non-convex problems (OLS is convex).
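To make the distinction concrete, here is a small self-contained sketch (hypothetical data, plain Python) showing that the closed-form least-squares solution and gradient descent arrive at the same coefficients for simple linear regression; only the route differs:

```python
# Simple linear regression y = a + b*x: the closed-form least-squares
# solution versus plain gradient descent on the same squared loss.

def ols_closed_form(xs, ys):
    """Solve the normal equations directly for (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def ols_gradient_descent(xs, ys, lr=0.01, steps=20000):
    """Minimize mean squared error by gradient steps from (0, 0)."""
    a = b = 0.0
    n = len(xs)
    for _ in range(steps):
        resid = [a + b * x - y for x, y in zip(xs, ys)]
        a -= lr * 2.0 / n * sum(resid)
        b -= lr * 2.0 / n * sum(r * x for r, x in zip(resid, xs))
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a_cf, b_cf = ols_closed_form(xs, ys)
a_gd, b_gd = ols_gradient_descent(xs, ys)
print(a_cf, b_cf)
print(a_gd, b_gd)  # converges to the same (a, b)
```

The closed form costs one pass over the data; gradient descent needs many passes, which is exactly the "slow to converge" point above, but each pass scales trivially to huge data.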
42,425
non-normal residuals in ARIMA
Your QQ plots could indicate that $t$-distributed error terms might fit better. You could try to fit an ARIMA model with $t$-distributed innovation terms and see if the fit is very different from the fit you have now. I have done such things with the BUGS software; there are certainly other options.
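To see why a heavy-tailed QQ plot points toward $t$ innovations, here is a small stdlib-only sketch (my own illustration, not tied to any ARIMA package) comparing the tail mass of $t_3$ and standard normal draws:

```python
import random

# Heavy-tailed residuals are the usual motivation for t innovations:
# compare how much probability mass lies beyond 3 for t_3 vs N(0,1).
rng = random.Random(42)

def student_t(df):
    """Sample t_df as Z / sqrt(chi2_df / df), built from normals."""
    z = rng.gauss(0.0, 1.0)
    chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / (chi2 / df) ** 0.5

n = 20000
normal_tail = sum(abs(rng.gauss(0.0, 1.0)) > 3 for _ in range(n)) / n
t_tail = sum(abs(student_t(3)) > 3 for _ in range(n)) / n
print(normal_tail, t_tail)  # t_3 puts far more mass beyond 3 than N(0,1)
```

In a QQ plot of residuals against normal quantiles, that extra tail mass is exactly what bends the extreme points away from the diagonal.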
42,426
non-normal residuals in ARIMA
Before you do an ARIMA model you have to check if the data is stationary, and any seasonality should be identified using the autocorrelation (ACF) and partial autocorrelation (PACF) functions. The autocorrelations should be compared against the 95% confidence bands. Stationarity is detected using a run-sequence plot or the autocorrelation function. If the data is not stationary you might have to detrend it. My guess is it was not stationary.
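The sample ACF and the usual approximate 95% bands ($\pm 1.96/\sqrt{n}$) can be sketched in a few lines (an illustration of the definition, not any package's implementation):

```python
import math
import random

def acf(series, max_lag):
    """Sample autocorrelation function of a series up to max_lag."""
    n = len(series)
    mean = sum(series) / n
    c0 = sum((x - mean) ** 2 for x in series) / n  # lag-0 autocovariance
    out = []
    for k in range(max_lag + 1):
        ck = sum((series[t] - mean) * (series[t + k] - mean)
                 for t in range(n - k)) / n
        out.append(ck / c0)
    return out

rng = random.Random(7)
white = [rng.gauss(0.0, 1.0) for _ in range(500)]
band = 1.96 / math.sqrt(len(white))  # approximate 95% confidence band
rho = acf(white, 10)
print(rho[0], band)
# for white noise, lags 1..10 should mostly fall inside +/- band
```

For a non-stationary (e.g. trending) series, the same computation would instead show autocorrelations that decay very slowly and stay far outside the band.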
42,427
non-normal residuals in ARIMA
If the residuals contain pulses or level shifts this can lead to "non-normality". Try detecting interventions and add them as necessary. Another way residuals can exhibit non-normality is if there is a deterministic change in error variance, suggesting weighted least squares, or if the model's parameters are not constant over time, suggesting data segmentation.
42,428
Geometry of robust linear model
Yes! Robust regression has a clear geometric interpretation. One can think about the geometry of an estimator by looking at the group of equivariance to which it belongs. A quick example: scale estimators $S(x)$ (the usual standard deviation, $\sigma(x)$, and the median absolute deviation, $\mbox{mad}(x)$, are two card-carrying members of this group) are equivariant to multiplication of the data by a constant: $$S(\alpha x)=|\alpha|S(x),\quad\alpha\in\mathbb{R}$$ In other words, the group of equivariance defines the transformations of the data which, in some sense, you don't need to care about when using the estimator, because when such a transformation is applied to the data, the estimator changes with the data 'in the natural way'. These groups of equivariance also have bearing on important properties of the estimators, such as consistency. Likewise, a regression estimator $T(\pmb x,y)\in\mathbb{R}^p$ is characterized by at least two groups of equivariance: $T(\pmb x,y)$ is regression equivariant: $$T(\pmb x,y+\pmb\beta'x)=T(\pmb x,y)+\pmb\beta,\quad\pmb\beta\in\mathbb{R}^p$$ $T(\pmb x,y)$ is affine equivariant: $$T(\pmb x\pmb A,y)=\pmb A^{-1}T(\pmb x,y)$$ for any non-singular matrix $\pmb A\in\mathbb{R}^{p\times p}$. This implies that $T(\pmb x,y)$ is residual admissible: the regression estimates only depend on the data through the vector of residuals. The regression estimator fitted by rlm, like the usual OLS estimator, satisfies affine and regression equivariance. Note that there exist some robust regression estimators that belong to groups of equivariance to which OLS doesn't belong (i.e. that have in this sense a stronger geometry than OLS). Think of invariance to monotone transformations, which holds for quantile-based estimators such as the $\mbox{mad}(x)$ (and, in the case of monotone transformations of the responses, for quantile regression) but not for the variance (or the usual OLS estimators).
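The scale-equivariance property $S(\alpha x)=|\alpha|S(x)$ can be verified numerically. A minimal sketch, using the standard deviation and the MAD (both members of this group), on made-up data:

```python
import statistics

def mad(xs):
    """Median absolute deviation from the median."""
    m = statistics.median(xs)
    return statistics.median(abs(x - m) for x in xs)

x = [2.0, 4.0, 4.0, 5.0, 7.0, 9.0, 13.0]
alpha = -3.0
for s in (statistics.pstdev, mad):
    lhs = s([alpha * v for v in x])  # S(alpha * x)
    rhs = abs(alpha) * s(x)          # |alpha| * S(x)
    print(s.__name__, lhs, rhs)      # equal, up to floating point
```

Rescaling the data (even by a negative constant) simply rescales both estimators by $|\alpha|$, which is exactly the "changes with the data in the natural way" point above.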
42,429
How to pretrain Convolution filter
As far as I know, pre-training is not so popular nowadays. Try to use proper initialisation like this or this. Use Batch Normalisation (it will make the previous point less important, though). You will get more in terms of accuracy, but with some computational overhead. P.S. Now when I wrote this, I found a great answer on Cross Validated.
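The initialisation point can be illustrated with a minimal sketch (my own toy example, not any framework's actual code): with weights drawn as $N(0, 1/\text{fan\_in})$, the pre-activation variance of a linear unit stays near the input variance, while naive $N(0,1)$ weights inflate it by a factor of fan-in, which is the problem Xavier/He-style schemes address.

```python
import random
import statistics

rng = random.Random(0)
fan_in = 500

def layer_output_variance(weight_std, n_samples=2000):
    """Empirical variance of w.x for one linear unit with N(0,1) inputs."""
    w = [rng.gauss(0.0, weight_std) for _ in range(fan_in)]
    outs = []
    for _ in range(n_samples):
        x = [rng.gauss(0.0, 1.0) for _ in range(fan_in)]
        outs.append(sum(wi * xi for wi, xi in zip(w, x)))
    return statistics.variance(outs)

v_good = layer_output_variance((1.0 / fan_in) ** 0.5)  # scaled init
v_bad = layer_output_variance(1.0)                     # naive init
print(v_good)  # near 1
print(v_bad)   # near fan_in = 500
```

Stacking many layers multiplies these factors, so the naive scheme makes activations (and gradients) explode or vanish; proper scaling keeps them stable, which is what replaces pre-training here.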
42,430
Interpretation of Cramér's V
The quote is correct. If you have two categorical variables and you recode them into two sets of dummy variables and then perform canonical correlation analysis (CCA) on these two sets (leaving out any one dummy from each set as redundant), you will get canonical correlations (see the CCA algorithm to compute them) the average of whose squares is exactly the squared Cramer's V between the original categorical variables. An example. Two nominal variables A (3 categories) and B (4 categories) were recoded into dummy sets.

A  B   A1 A2 A3  B1 B2 B3 B4
1  1    1  0  0   1  0  0  0
1  1    1  0  0   1  0  0  0
1  2    1  0  0   0  1  0  0
1  2    1  0  0   0  1  0  0
1  4    1  0  0   0  0  0  1
2  1    0  1  0   1  0  0  0
2  1    0  1  0   1  0  0  0
2  2    0  1  0   0  1  0  0
2  2    0  1  0   0  1  0  0
2  2    0  1  0   0  1  0  0
2  2    0  1  0   0  1  0  0
2  2    0  1  0   0  1  0  0
2  3    0  1  0   0  0  1  0
2  3    0  1  0   0  0  1  0
2  4    0  1  0   0  0  0  1
2  4    0  1  0   0  0  0  1
3  1    0  0  1   1  0  0  0
3  1    0  0  1   1  0  0  0
3  2    0  0  1   0  1  0  0
3  4    0  0  1   0  0  0  1

Throwing one arbitrary dummy from each set out, compute correlations and perform CCA on one set (2 variables) vs the other set (3 variables). You'll extract two pairs of canonical latent roots with correlations:

Canonical correlations and eigenvalues:
   Can Corr   Eigenval
1  .3921542   .1817327
2  .0859611   .0074443

(.3921542^2 + .0859611^2) / 2 = 0.08059 = squared Cramer's V between A and B. Note also that if one of the two categorical variables is dichotomous, squared Cramer's V is likewise equivalent to the R-square of the linear regression of it on the dummies from the second variable. If you forget about CCA of dummy variables and think about CCA in general, that is, about CCA of any numeric quantitative variables, then you may further know that the mean (or sum, to be exact) squared canonical correlation is what is known by the name Pillai's trace - the statistic that has the same meaning in multivariate regression as R-square has in univariate regression.
Thus, Cramer's V squared clearly appears to be homologous to multivariate R-square (Pillai trace); V being for two categorical variables and R-square being for two sets of quantitative variables. This fact sheds light on the phrase ...as a percentage of their maximum possible [shared] variation.
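As a cross-check of the example, Cramer's V can also be computed directly from the chi-square statistic of the A-by-B contingency table implied by the data above; this reproduces the CCA result:

```python
from collections import Counter

# (A, B) pairs read off the example table above
pairs = [(1,1),(1,1),(1,2),(1,2),(1,4),
         (2,1),(2,1),(2,2),(2,2),(2,2),(2,2),(2,2),
         (2,3),(2,3),(2,4),(2,4),
         (3,1),(3,1),(3,2),(3,4)]

n = len(pairs)
obs = Counter(pairs)                 # observed cell counts
row = Counter(a for a, _ in pairs)   # row (A) marginals
col = Counter(b for _, b in pairs)   # column (B) marginals

# chi-square statistic: sum over cells of (O - E)^2 / E, E = row*col/n
chi2 = sum((obs[(a, b)] - row[a] * col[b] / n) ** 2 / (row[a] * col[b] / n)
           for a in row for b in col)

# squared Cramer's V = chi2 / (n * (min(r, c) - 1))
v2 = chi2 / (n * (min(len(row), len(col)) - 1))
print(round(v2, 5))  # 0.08059, matching the CCA computation above
```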
42,431
What do these decision boundaries indicate in random forest and svm?
The images you present are the same as those here: link. The following is some code, translated to R with some adjustments, to work through this. The RF selected (2 trees) is not acceptable. This is not apples-to-apples, so any of the authors' assertions about "entropy" can be mis-informative. First we get the data:

#reproducibility
set.seed(136526) #I like to use question number as random seed

#libraries
library(data.table)   #to read the url
library(randomForest) #to have randomForests
library(miscTools)    #column medians

#main program
#get data
wine_df = fread("https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv")

#convert to frame
wine_df <- as.data.frame(wine_df)

#parse data
Y <- (wine_df[,12])
X <- wine_df[,-12]

Next we find the right size of random forest for it.

max_trees <- 100 #same range
N_retest  <- 35  #fair sample size

err <- matrix(0,max_trees,N_retest) #initialize for the loop

for (i in 1:max_trees){
  for (j in 1:N_retest){
    #fit random forest with "i" number of trees
    my_rf <- randomForest(x = X, y = Y, ntree = i)

    #pop out sum of squared residuals divided by n
    temp <- mean(my_rf$mse)
    err[i,j] <- temp
  }
}

Now we can look at how many elements should be in the ensemble:

#make friendly for boxplot
err_frame <- as.data.frame(t(err))
names(err_frame) <- as.character(1:max_trees)

#central tendency
my_meds <- colMedians((err_frame))

#normalized slope of central tendency
est <- smooth.spline(x = 1:max_trees, y = my_meds, spar = 0.7)
pred <- predict(est)
my_sl <- c(diff(pred$y)/diff(pred$x))
my_sl <- (0.7-0.4)*(my_sl-min(my_sl))/(max(my_sl)-min(my_sl))+0.4

#make boxplot
boxplot(err_frame,
        main = "MSE vs. number of trees",
        xlab = "number of trees",
        ylab = "forest mean MSE",
        xlim = c(0,75))

#draw central tendency (red)
lines(est, col="red", lwd=2)

#draw slope
lines(pred$x, c(0.4,my_sl), col="green")
points(pred$x, c(0.4,my_sl), col="green", pch=16)
grid()

legend(x = 60, y = 0.6, c("bxp","fit","slope"),
       col = c("black","Red","Green"),
       lty = c(NA, 1, 1),
       pch = c(22, -1, 20),
       pt.cex = c(1.2, 1, 1))

And it gives us this, which I then manually draw blue and black lines on in a version of a midangle-scree heuristic to get a "decent" ensemble size of 30. It is two tangent lines from the slope: one at the highest slope, one at the right end of the domain. We make a ray from the intersection of those tangent lines to the slope-line along the mid-angle. The next highest point after the intersection informs the tree count. Now that we have a decent random forest we can look at errors. First we compute the error.

#make "final" model
my_rf_fin <- randomForest(x = X, y = (Y), ntree = 30)

#predict on it
pred_fin <- predict(my_rf_fin)

#compute error
fit_err <- pred_fin - Y

The first plots to start with are basic EDA plots including the 4-plot of error.

#EDA on error
par(mfrow = n2mfrow(4))

#run sequence
plot(fit_err, type="l")
grid()

#lag plot
plot(fit_err[2:length(fit_err)], fit_err[1:(length(fit_err)-1)])
abline(a = 0, b = 1, col="Green", lwd=2)
grid()

#histogram
hist(fit_err, breaks = 128, main = "")
grid()

#normal quantile
qqnorm(fit_err, main = "")
grid()

par(mfrow = c(1,1))

Which yields: The error is reasonably well behaved. It is narrow-tailed. There is a non-Gaussian set of samples on the right side of the lag plot. The central part of the distribution looks triangular. It isn't Gaussian, but it wasn't expected to be. This is a discrete-level output modeled as continuous. Here is a variability plot of actual vs. predicted, and of error vs. predicted. It systematically over-predicts the poorest class as better than rated, and under-predicts the highest class as poorer than rated.
This random forest is less poorly constructed, and likely is a healthier function approximator. Next steps: make the boundary plot like yours on the first 2 principal components. Notes on the code:
I'm not a big scikit-learn guy, so I am going to misunderstand parts of what they are doing. Standard disclaimers apply.
Two trees in an ensemble is a contradiction in terms, like "one man army". The random forest is no "one man army" because it would be CART as a non-weak learner. The author did a disservice to an ensemble learner by selecting 2 elements as the ensemble size. The big joy of a random forest is you can add ensemble elements. Never (ever) accept a random forest smaller than 20 trees. Double-check any forest smaller than 50 trees.
The author has no split between training/validation or test. They use all the data to fit the learners. A better way is to split into those groups, then determine the ensemble parameters, then make the model with the combined train/valid data. I don't see that here.
The author does not specify whether the "y" is discretized or continuous. This means the RF might be living in regression instead of classification.
What do these decision boundaries indicate in random forest and svm?
The images you present are the same as those here: link. The following is some code, translated to R with some adjustments, to work through this. The RF selected (2 trees) is not acceptable. This is
What do these decision boundaries indicate in random forest and svm? The images you present are the same as those here: link. The following is some code, translated to R with some adjustments, to work through this. The RF selected (2 trees) is not acceptable. This is not apples-to-apples, so any of the authors' assertions about "entropy" can be mis-informative. First we get the data: #reproducibility set.seed(136526) #I like to use question number as random seed #libraries library(data.table) #to read the url library(randomForest) #to have randomForests library(miscTools) #column medians #main program #get data wine_df = fread("https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv") #conver to frame wine_df <- as.data.frame(wine_df) #parse data Y <- (wine_df[,12]) X <- wine_df[,-12] Next we find the right size of random forest for it. max_trees <- 100 #same range N_retest <- 35 #fair sample size err <- matrix(0,max_trees,N_retest) #initialize for the loop for (i in 1:max_trees){ for (j in 1:N_retest){ #fit random forest with "i" number of trees my_rf <- randomForest(x = X, y = Y, ntree = i) #pop out sum of squared residuals divided by n temp <- mean(my_rf$mse) err[i,j] <- temp } } Now we can look at how many elements should be in the ensemble: #make friendly for boxplot err_frame <- as.data.frame(t(err)) names(err_frame) <- as.character(1:max_trees) #central tendency my_meds <- colMedians((err_frame)) #normalized slope of central tendency est <- smooth.spline(x = 1:max_trees,y = my_meds,spar = 0.7) pred <- predict(est) my_sl <- c(diff(pred$y)/diff(pred$x)) my_sl <- (0.7-0.4)*(my_sl-min(my_sl))/(max(my_sl)-min(my_sl))+0.4 #make boxplot boxplot(err_frame, main = "MSE vs. 
number of trees", xlab = "number of trees", ylab = "forest mean MSE", xlim= c(0,75)) #draw central tendency (red) lines(est, col="red", lwd=2) #draw slope lines(pred$x,c(0.4,my_sl),col="green") points(pred$x,c(0.4,my_sl),col="green", pch=16) grid() legend(x = 60,y = 0.6,c("bxp","fit","slope"), col = c("black","Red","Green"), lty = c(NA, 1,1), pch = c(22,-1,20), pt.cex = c(1.2,1,1) ) And it gives us this, which I then manually draw blue and black lines on in a version of a mid-angle scree heuristic to get a "decent" ensemble size of 30. It is two tangent lines from the slope: one at the highest slope, one at the right end of the domain. We make a ray from the intersection of those tangent lines to the slope-line along the mid-angle. The next highest point after the intersection informs the tree count. Now that we have a decent random forest we can look at errors. First we compute the error. # make "final" model my_rf_fin <- randomForest(x = X, y = (Y), ntree = 30) #predict on it pred_fin <- predict(my_rf_fin) #compute error fit_err <- pred_fin - Y The first plots to start with are basic EDA plots including the 4-plot of error. #EDA on error par(mfrow = n2mfrow(4) ) #run seq plot(fit_err, type="l") grid() #lag plot plot(fit_err[2:length(fit_err)],fit_err[1:(length(fit_err)-1)] ) abline(a = 0,b=1, col="Green", lwd=2) grid() #histogram hist(fit_err,breaks = 128, main = "") grid() #normal quantile qqnorm(fit_err, main = "") grid() par(mfrow = c(1,1)) Which yields: The error is reasonably well behaved. It is narrow-tailed. There is a non-Gaussian set of samples on the right side of the lag plot. The central part of the distribution looks triangular. It isn't Gaussian, but it wasn't expected to be. This is a discrete-level output modeled as continuous. Here is a variability plot of actual vs. predicted, and of error vs. predicted. It systematically over-predicts the poorest class as better than rated, and under-predicts the highest class as poorer than rated. 
This random forest is less poorly constructed, and likely is a healthier function approximator. Next steps: make the boundary plot like yours on the first 2 principal components. Notes on the code: I'm not a big scikit-learn guy, so I am going to misunderstand parts of what they are doing. Standard disclaimers apply. Two trees in an ensemble is a contradiction in terms, like "one man army". The random forest is no "one man army": with so few trees it is essentially CART, a non-weak learner, rather than an ensemble of weak learners. The author did a disservice to an ensemble learner by selecting 2 elements as the ensemble size. The big joy of a random forest is that you can add ensemble elements. Never (ever) accept a random forest smaller than 20 trees. Double-check any forest smaller than 50 trees. The author has no split into training, validation, or test sets. They use all the data to fit the learners. A better way is to split into those groups, then determine the ensemble parameters, then make the model with the combined train/valid data. I don't see that here. The author does not specify whether the "y" is discretized or continuous. This means the RF might be living in regression instead of classification.
42,432
Understanding the effect of hyperparameters in machine learning experiments
Many dedicated optimization methods exist for hyperparameter tuning. Sequential model-based optimization (a Bayesian-inspired method) is a particularly popular research topic, for instance here. Metaheuristic approaches like genetic algorithms, particle swarm optimization and simulated annealing are also common, see for instance here. If you want to model the effect of hyperparameters, random search is a good sampling strategy to start from. You can find implementations of such optimization methods in tuning libraries like Optunity, HyperOpt and Spearmint.
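A minimal pure-Python sketch of random search, in the spirit of the suggestion above. The search space (log-uniform C and gamma, as for an SVM) and the toy objective are assumptions for illustration; in practice the objective would be cross-validated error from your learner:

```python
import math
import random

random.seed(0)

# Hypothetical two-dimensional search space, sampled log-uniformly.
def sample_config():
    return {
        "C": 10 ** random.uniform(-3, 3),
        "gamma": 10 ** random.uniform(-4, 1),
    }

# Stand-in objective with a known optimum at (C=10, gamma=0.01);
# a real run would replace this with cross-validated error.
def objective(config):
    return ((math.log10(config["C"]) - 1) ** 2
            + (math.log10(config["gamma"]) + 2) ** 2)

# Random search: sample many configurations, keep the best one.
trials = [sample_config() for _ in range(200)]
scores = [(objective(c), c) for c in trials]
best_score, best_config = min(scores, key=lambda t: t[0])
```

The sampled (score, config) pairs are also exactly the data you would use to model the effect of each hyperparameter afterwards.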
42,433
Understanding the effect of hyperparameters in machine learning experiments
Design of Experiments on Wikipedia might be a good place for additional reading. At its most general, it covers how to make progress in problems where there is so much complexity that the intuitive approach is no longer constructive.
42,434
Real motivation for using mixed effect models, and when to use them and when not to
There are many reasons for using mixed or random effects models, but I'll highlight one due to my time constraints. Let's say you have 1500 subjects with 10 measurements taken on each subject, along with many covariates. You could model the response measurements $Y$ using fixed subject terms as $Y=X\beta+\epsilon.$ However, this would require entering 1499 dummy variable terms for subjects into the model (or 1500 if you didn't mind the $X$ matrix not being full rank), along with their covariates. Instead of using a fixed-effects approach for this, you could simply assume subject is a random effect with a given covariance structure (e.g. a compound symmetric covariance). You could then fit a fixed and random effects (called mixed effects) model as: $Y=X\beta+Zu+\epsilon.$ Using this approach, you'd only need to estimate the effects for the intercept (if any) and for each of the covariates ($X$) in the model, plus just two variance components for subjects (the residual variance $\sigma_\epsilon^2$ and the subject variance $\sigma_u^2$). You no longer need to estimate separate effects for all those subjects! A good way to determine if your factors should be random or fixed comes from Kleinbaum, et al.'s Applied Regression Analysis and Other Multivariable Methods book. It states the following: "Fixed Factor: A variable in a regression model whose possible values (i.e. levels) are the only ones of interest. Random factor: A variable in a regression model whose levels are regarded as a random sample from some large population of levels." It goes on to say: "When applying the above definitions to epidemiologic studies, we typically postulate that: a. Subjects, litters, observers, families, and households are random factors; b. Gender, age, marital status, day of the week, and education are fixed factors; and c. Locations, treatments, clinics, exposures, and time may be considered as either random or fixed factors, depending on the context of the study." 
Further, it states: "When in doubt, one approach for deciding how to classify a particular study variable is to consider the following question: 'If I was able to replicate the study, would I want a given factor to have the exact same categories as observed in the current study?' Equivalently, 'Would I want a replicate study to use the same treatments, days of week, or subjects as used in the current study?' If your answer is yes: treat the factor as fixed. no: treat the factor as random."
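A small simulation (in Python for illustration; the variance values are made up) of what the compound-symmetric covariance above means: a random subject intercept implies that any two measurements on the same subject share the correlation $\sigma_u^2/(\sigma_u^2+\sigma_\epsilon^2)$:

```python
import random

random.seed(42)

# Mirror the example: 1500 subjects, 10 measurements each.
n_subjects, n_reps = 1500, 10
sigma_u, sigma_e = 2.0, 1.0  # assumed subject sd and residual sd

# y_ij = mu + u_i + e_ij : one random intercept per subject
# instead of 1499 subject dummies.
data = []
for _ in range(n_subjects):
    u = random.gauss(0, sigma_u)
    data.append([5.0 + u + random.gauss(0, sigma_e) for _ in range(n_reps)])

# Empirical within-subject correlation should approach the
# compound-symmetry value sigma_u^2 / (sigma_u^2 + sigma_e^2) = 0.8.
pairs = [(row[j], row[k]) for row in data
         for j in range(n_reps) for k in range(j + 1, n_reps)]
mx = sum(a for a, _ in pairs) / len(pairs)
my = sum(b for _, b in pairs) / len(pairs)
cov = sum((a - mx) * (b - my) for a, b in pairs) / len(pairs)
va = sum((a - mx) ** 2 for a, _ in pairs) / len(pairs)
vb = sum((b - my) ** 2 for _, b in pairs) / len(pairs)
icc = cov / (va * vb) ** 0.5
```

With $\sigma_u=2$ and $\sigma_\epsilon=1$, the intraclass correlation is $4/5=0.8$, and the empirical estimate lands close to it.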
42,435
Hierarchical Gamma-Poisson CDF?
If you integrate the other way, you get a closed-form expression: \begin{align*} P(X \leq x|r,v) &= \int_0^\infty P(X \leq x| \lambda)P(\lambda|r,v) \text{d}\lambda\\ &= \int_0^\infty \sum_{k=0}^x \frac{\lambda^k\exp(-\lambda)}{k!} \frac{\lambda^{r-1}e^{-\lambda/v}}{\Gamma(r)v^r} \text{d}\lambda\\ &= \frac{1}{\Gamma(r)v^r} \sum_{k=0}^x \frac{1}{k!} \int_0^\infty \lambda^{k+r-1}e^{-\lambda\{1+v^{-1}\}}\text{d}\lambda\\ &= \sum_{k=0}^x \frac{1}{k!} \frac{v^{-r}}{(1+v^{-1})^{k+r}}\,\frac{\Gamma(k+r)}{\Gamma(r)} \end{align*}
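As a sanity check, here is a numerical sketch (parameter values arbitrary, Python for illustration): the closed-form sum above, computed on the log scale, against a brute-force midpoint-rule integration of the Poisson CDF weighted by the Gamma(shape $r$, scale $v$) density:

```python
import math

def gamma_poisson_cdf(x, r, v):
    # Closed-form sum from the derivation above, on the log scale for stability.
    total = 0.0
    for k in range(int(x) + 1):
        log_term = (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
                    - r * math.log(v) - (k + r) * math.log1p(1.0 / v))
        total += math.exp(log_term)
    return total

def mixture_cdf_numeric(x, r, v, n=100000, lam_max=80.0):
    # Brute-force check: midpoint-rule integral of the Poisson CDF
    # against the Gamma(shape=r, scale=v) density.
    h = lam_max / n
    total = 0.0
    for i in range(n):
        lam = (i + 0.5) * h
        log_gamma_pdf = ((r - 1) * math.log(lam) - lam / v
                         - math.lgamma(r) - r * math.log(v))
        pois_cdf = sum(math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))
                       for k in range(int(x) + 1))
        total += math.exp(log_gamma_pdf) * pois_cdf * h
    return total

r, v, x = 2.5, 1.7, 6
closed = gamma_poisson_cdf(x, r, v)
numeric = mixture_cdf_numeric(x, r, v)
```

The two agree to high precision, which is expected: the mixture is a negative binomial with size $r$ and success probability $1/(1+v)$.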
42,436
Hierarchical Gamma-Poisson CDF?
Yes, when there is no closed form for the PDF you can use the Metropolis-Hastings algorithm. Once you understand the algorithm you can easily program it yourself, but a Google search turns up many implementations in Python, for example: http://www.nehalemlabs.net/prototype/blog/2014/02/24/an-introduction-to-the-metropolis-method-with-python/. I have done this in R with the following code and some added plots: ## M-H algorithm to draw values from a Poisson with lambda=8 ## using a negative binomial distribution with mean equal to ## lambda current as the proposal dist. ## initial values and vectors to start the loop, algorithm uses ## three chains nsim <- 1000 lam.vec1 <- numeric(nsim) lam.vec2 <- numeric(nsim) lam.vec3 <- numeric(nsim) lam.vec1[1] <- 7 lam.vec2[1] <- 6 lam.vec3[1] <- 5 jump.vec1 <- numeric(nsim - 1) jump.vec2 <- numeric(nsim - 1) jump.vec3 <- numeric(nsim - 1) s <- 4 for (i in 2:nsim) { ## assigning lambda current for the three chains lam.curr1 <- lam.vec1[i - 1] lam.curr2 <- lam.vec2[i - 1] lam.curr3 <- lam.vec3[i - 1] ## obtaining lambda candidate for three chains by drawing from ## the proposal dist. 
lam.cand1 <- rnbinom(1, size = s, mu = lam.curr1) lam.cand2 <- rnbinom(1, size = s, mu = lam.curr2) lam.cand3 <- rnbinom(1, size = s, mu = lam.curr3) ## numerator for the M-H ratio r.num1 <- dpois(lam.cand1, lambda = 8) * dnbinom(lam.curr1, size = s, mu = lam.cand1) r.num2 <- dpois(lam.cand2, lambda = 8) * dnbinom(lam.curr2, size = s, mu = lam.cand2) r.num3 <- dpois(lam.cand3, lambda = 8) * dnbinom(lam.curr3, size = s, mu = lam.cand3) ## denominator for M-H ratio r.den1 <- dpois(lam.curr1, lambda = 8) * dnbinom(lam.cand1, size = s, mu = lam.curr1) r.den2 <- dpois(lam.curr2, lambda = 8) * dnbinom(lam.cand2, size = s, mu = lam.curr2) r.den3 <- dpois(lam.curr3, lambda = 8) * dnbinom(lam.cand3, size = s, mu = lam.curr3) ## M-H ratio r1 <- r.num1/r.den1 r2 <- r.num2/r.den2 r3 <- r.num3/r.den3 ## accept with probability min(1,r1) p.accept1 <- min(1, r1) p.accept2 <- min(1, r2) p.accept3 <- min(1, r3) u.vec <- runif(3) ## deciding to jump to lambda cand or not ifelse(u.vec[1] <= p.accept1, lam.vec1[i] <- lam.cand1, lam.vec1[i] <- lam.curr1) ifelse(u.vec[2] <= p.accept2, lam.vec2[i] <- lam.cand2, lam.vec2[i] <- lam.curr2) ifelse(u.vec[3] <= p.accept3, lam.vec3[i] <- lam.cand3, lam.vec3[i] <- lam.curr3) ## recording whether we jumped or not jump.vec1[i - 1] <- ifelse(u.vec[1] <= p.accept1, 1, 0) jump.vec2[i - 1] <- ifelse(u.vec[2] <= p.accept2, 1, 0) jump.vec3[i - 1] <- ifelse(u.vec[3] <= p.accept3, 1, 0) } ## Look at chains plot(seq(1:nsim), lam.vec1, type = "l", ylab = expression(lambda), col = 4, main = "Traceplot of Three Chains") lines(seq(1:nsim), lam.vec2, col = 2) lines(seq(1:nsim), lam.vec3, col = 3) ## acceptance rates mean(jump.vec1) mean(jump.vec2) mean(jump.vec3) post.lam <- c(lam.vec1, lam.vec2, lam.vec3) x <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) plot(table(post.lam)/length(post.lam), col = 3, type = "h", xlab = expression(lambda), ylab = "Density") points(dpois(rep(1:20, 1), lambda = 8), lwd = 2) legend("topright", legend = c("Analytic Poisson"), pch = 1, lwd = 2, col = c(1), bty 
= "n", cex = 1) The R code is clunky but was written as such for illustrative purposes.
42,437
Test for trend (ordinal predictor, continuous outcome)?
Very late answer, but you might find this question and this post helpful. Personally, I would just run a linear model with the categories represented as (ordered) integers and examine the coefficient. There is also some nice guidance in the BMJ stats series.
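As a sketch of that suggestion (in Python for illustration; the numbers are made up): code the three categories as 1, 2, 3 and compute the least-squares slope of the outcome on the codes. A clearly nonzero slope is the trend:

```python
# Hypothetical ordinal groups A < B < C coded as 1, 2, 3,
# with a continuous outcome per observation.
codes = [1] * 4 + [2] * 4 + [3] * 4
y = [2.1, 1.8, 2.4, 2.0,   # group A
     2.9, 3.1, 2.7, 3.3,   # group B
     3.8, 4.1, 3.6, 4.0]   # group C

n = len(codes)
mx = sum(codes) / n
my = sum(y) / n
# Least-squares slope of y on the integer codes; a positive slope
# indicates an increasing trend across the ordered categories.
slope = (sum((c - mx) * (v - my) for c, v in zip(codes, y))
         / sum((c - mx) ** 2 for c in codes))  # slope = 0.9 for these data
```

In a full analysis the slope's standard error (from `lm` in R or any OLS routine) gives the trend test itself.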
42,438
Test for trend (ordinal predictor, continuous outcome)?
The first step is to plot the three histograms and see how much they overlap. If there is enough data to make a histogram (40+ samples), the histograms can be used to check whether the distribution in each category is normally distributed. If they are normally distributed, then doing pairwise t-tests for differences in means and Levene's test for differences in variance (i.e., standard deviation squared) will be powerful and allow for determining whether or not there is an organized difference between the groups, either with respect to the location of the continuous variable or its variability. Doing these tests as one-sided tests would allow for identifying how the groups rank with respect to each other, e.g. A>B>C or not. If there are fewer than 40 samples in each group, or if there is no method by which one can normalize the data, for example by taking reciprocals, logarithms, etc., then one should use Wilcoxon for difference of location and Conover for difference of variance, as those "non-parametric" tests rank the results and do not require normality for their use.
42,439
Test for trend (ordinal predictor, continuous outcome)?
The Jonckheere-Terpstra test is the tool you were probably looking for.
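For illustration, a minimal pure-Python computation of the Jonckheere-Terpstra statistic (the inference step, via its null distribution or a normal approximation, is omitted); the data below are made up:

```python
def jonckheere_statistic(groups):
    """J-T statistic: for each pair of groups in the hypothesized order,
    count pairs (a, b) with a from the earlier group and b from the later
    one, scoring 1 for a < b and 0.5 for a tie."""
    stat = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for a in groups[i]:
                for b in groups[j]:
                    stat += 1.0 if a < b else (0.5 if a == b else 0.0)
    return stat

# Hypothetical groups in the hypothesized increasing order A < B < C.
groups = [[1, 3, 2], [4, 6, 5], [8, 7, 9]]
stat = jonckheere_statistic(groups)  # 27.0: every cross-group pair increases
```

A statistic near the maximum (here 27, the total number of cross-group pairs) supports the ordered alternative; values near half the maximum are consistent with no trend.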
42,440
Test for trend (ordinal predictor, continuous outcome)?
The R package brms (Bayesian regression models using Stan) can use Bayesian estimation to fit models with ordinal predictors. brms can handle a wide range of designs (e.g. multivariate and/or longitudinal and/or hierarchical) and a wide range of distributions (counts, ordinal outcomes, continuous outcomes, censored data, and so on). It can also incorporate temporal and spatial auto-correlation and fit mixture models. The author of brms, Paul Buerkner, has provided a large number of vignettes to demonstrate how to use the package's many features. You can see the one for monotonic ordinal predictors at: https://cran.r-project.org/web/packages/brms/vignettes/brms_monotonic.html
42,441
How to prove that Asymptotic Variance-Covariance matrix of OLS estimator is Positive Definite?
Applying the Law of Iterated Expectations, $$E[(\mathbf{x}_i\cdot \epsilon_i)(\mathbf{x}_i\cdot \epsilon_i)'] = E[\epsilon_i^2\mathbf{x}_i\mathbf{x}_i']=E\Big(E[\epsilon_i^2\mid \mathbf{x}_i]\mathbf{x}_i\mathbf{x}_i'\Big) $$ For exposition purposes, assume that the regressors are two, $X_1$ and $X_2$. Set also $E[\epsilon_i^2\mid \mathbf{x}_i] \equiv h$. Then the determinant of the matrix (where we first take the expected value and then we calculate the determinant) is $$D=E[hX_1^2]\cdot E[hX_2^2] - \left(E[hX_1X_2]\right)^2 \neq 0$$ since it is assumed non-singular. First, it is evident that if one variable were a linear function of the other, then the above determinant would be zero. So the assumption of non-singularity rules out full linear dependence. This also rules out a correlation coefficient $\rho_x$ equal to unity (keep that). Now, set $Z_1 =\sqrt{h}X_1, \;\; Z_2 =\sqrt{h}X_2$. (Note that the correlation coefficient $\rho_z$ between $Z_1$ and $Z_2$ cannot be equal to unity either.) Then the determinant can be written $$D= E[Z_1^2]\cdot E[Z_2^2] - \left(E[Z_1Z_2]\right)^2$$ Without loss of generality, assume that the variables have zero mean. Then we have $$D= {\rm Var}(Z_1)\cdot {\rm Var}(Z_2) - \left({\rm Cov}[Z_1,Z_2]\right)^2$$ $$= {\rm Var}(Z_1)\cdot {\rm Var}(Z_2) - \rho_z^2 {\rm Var}(Z_1)\cdot {\rm Var}(Z_2)$$ $$= (1-\rho_z^2)\cdot {\rm Var}(Z_1)\cdot {\rm Var}(Z_2)$$ For positive definiteness we want all leading principal minors to be greater than zero. Here, the first minor is ${\rm Var}(Z_1)>0$ and the second minor is $D>0$ since $|\rho_z| < 1$. ADDENDUM Moving to 3 dimensions, under the expected value and the transformation to $Z$ variables, we have a variance-covariance matrix with full rank. Then it is positive semi-definite, so $D_{3\times 3} \geq 0$. But by assumption it is non-singular (so $D_{3\times 3} \neq 0$), therefore it is positive definite, since we are left only with $D_{3\times 3} > 0$. 
With all leading principal minors up to three dimensions strictly positive, let's move to four dimensions. We only need to show in addition that the $4\times 4$ determinant is greater than zero. But the matrix is a covariance matrix, so $D_{4\times 4} \geq 0$. This is also the full determinant of the matrix, and since by assumption the matrix is non-singular, we have that $D_{4\times 4} > 0$. Hence, it is positive definite. Move on to five dimensions. Same reasoning. Etc. I stress again that this result depends critically on the expected value operator, which transforms the matrix into a variance-covariance one. Otherwise, the outer product of a $k\times 1$ column vector of numbers is a singular matrix.
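A numeric illustration of the argument (simulated data, purely a sketch in Python): form the sample analogue of $E[h\,\mathbf{x}\mathbf{x}']$ with a positive $h$ and non-collinear regressors, and check Sylvester's criterion on its leading principal minors:

```python
import random

random.seed(1)

# Sample analogue of E[h * x x'] with three non-collinear regressors
# (a constant, X1, X2) and a hypothetical positive conditional variance h.
n = 5000
M = [[0.0] * 3 for _ in range(3)]
for _ in range(n):
    x1 = random.gauss(0, 1)
    x2 = 0.5 * x1 + random.gauss(0, 1)  # correlated with x1, not collinear
    x = (1.0, x1, x2)
    h = 0.5 + x1 ** 2                   # some positive E[eps^2 | x]
    for i in range(3):
        for j in range(3):
            M[i][j] += h * x[i] * x[j] / n

def det2(a):
    # determinant of a 2x2 matrix
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def det3(a):
    # determinant of a 3x3 matrix by cofactor expansion along the first row
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
            - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
            + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

# Sylvester's criterion: all leading principal minors positive
# implies the matrix is positive definite.
minors = [M[0][0], det2([row[:2] for row in M[:2]]), det3(M)]
```

With non-collinear regressors the population matrix is non-singular, and the sample version reproduces the strict positivity of every minor, mirroring the inductive step in the proof.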
How to prove that Asymptotic Variance-Covariance matrix of OLS estimator is Positive Definite?
Applying the Law of Iterated Expectations, $$E[(\mathbf{x}_i\cdot \epsilon_i)(\mathbf{x}_i\cdot \epsilon_i)'] = E[\epsilon_i^2\mathbf{x}_i\mathbf{x}_i']=E\Big(E[\epsilon_i^2\mid \mathbf{x}_i]\mathbf{x
How to prove that Asymptotic Variance-Covariance matrix of OLS estimator is Positive Definite? Applying the Law of Iterated Expectations, $$E[(\mathbf{x}_i\cdot \epsilon_i)(\mathbf{x}_i\cdot \epsilon_i)'] = E[\epsilon_i^2\mathbf{x}_i\mathbf{x}_i']=E\Big(E[\epsilon_i^2\mid \mathbf{x}_i]\mathbf{x}_i\mathbf{x}_i'\Big) $$ For exposition purposes, assume that the regressors are two, $X_1$ and $X_2$. Set also $E[\epsilon_i^2\mid \mathbf{x}_i] \equiv h$. Then the determinant of the matrix (where we first take the expected value and then we calculate the determinant) is $$D=E[hX_1^2]\cdot E[hX_2^2] - \left(E[hX_1X_2]\right)^2 \neq 0$$ since it is assumed non-singular. First, it is evident that if the one variable was a linear function of the other, then the above determinant would be zero. So the assumption of non-singularity rules out full linear dependence. This also rules out a correlation coefficient $\rho_x$ equal to unity (keep that). Now, set $Z_1 =\sqrt{h}X_1, \;\; Z_2 =\sqrt{h}X_2$. (Note that the correlation coefficient $\rho_z$ between $Z_1$ and $Z_2$ cannot be equal to unity too). Then the determinant can be written $$D= E[Z_1^2]\cdot E[Z_2^2] - \left(E[Z_1Z_2]\right)^2$$ Without loss of generality, assume that the variables have zero mean. Then we have $$D= {\rm Var}(Z_1)\cdot {\rm Var}(Z_2) - \left({\rm Cov}[Z_1,Z_2]\right)^2$$ $$= {\rm Var}(Z_1)\cdot {\rm Var}(Z_2) - \rho_z^2 {\rm Var}(Z_1)\cdot {\rm Var}(Z_2)$$ $$\Rightarrow (1-\rho_z^2)\cdot {\rm Var}(Z_1)\cdot {\rm Var}(Z_2)$$ For positive definiteness we want all leading principal minors to be greater than zero. Here, the first minor is ${\rm Var}(Z_1)>0$ and the second minor is $D>0$ since $\rho_z <|1|$. ADDENDUM Moving to 3 dimensions, under the expected value and the transformation to $Z$ variables, we have a variance-covariance matrix with full rank. Then it is positive semi-definite, so $D_{3\times 3} \geq 0$. 
But by assumption it is non-singular (so $D_{3\times 3} \neq 0$) therefore it is positive-definite since we are left only with $D_{3\times 3} > 0$. With all leading principal minors up to three dimensions strictly positive, let's move to four dimensions. We only need to show in addition the $4\times 4$ determinant to be greater than zero. But the matrix is a covariance matrix so $D_{4\times 4} \geq 0$. But this is also the full determinant of the matrix, and since by assumption the matrix is non-singular, we have that $D_{4\times 4} > 0$. Hence, it is positive definite. Move on to five dimensions. Same reasoning. Etc. I stress again the fact that this result depends critically on the expected value operator, which transforms the matrix into a variance-covariance one. Otherwise, the outer product of a $k\times 1$ column vector of numbers is a singular matrix.
42,442
Time Series Stationarity
Since you have put the tag "augmented-dickey-fuller", I will assume you did the ADF test with a trend. In that case, if you think your series does have a deterministic trend, then the result of the test is indeed correct, because the series is trend stationary. That is, once you account for the deterministic trend (as you do in the ADF test), the process is stationary.
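As a rough illustration of the point (a numpy-only sketch with simulated data, not a substitute for the actual ADF test): a trend-stationary series looks non-stationary in levels, but after fitting and removing the deterministic trend the residuals behave like a stationary process.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
t = np.arange(n)

# Trend-stationary series: deterministic linear trend + stationary AR(1) noise
u = np.zeros(n)
for i in range(1, n):
    u[i] = 0.5 * u[i - 1] + rng.normal()
y = 0.05 * t + u

# Raw series: the two halves have very different means (the trend dominates)
raw_gap = abs(y[n // 2:].mean() - y[: n // 2].mean())

# Account for the trend (as the ADF test with trend does): residuals are stable
b, a = np.polyfit(t, y, 1)
resid = y - (a + b * t)
detrended_gap = abs(resid[n // 2:].mean() - resid[: n // 2].mean())

print(raw_gap, detrended_gap)   # raw gap ~ 25, detrended gap near 0
```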
42,443
Mean of predictive distribution
While writing$$p(x_{N+1}|D)=\mathbb{E}_{\mu|D}[p(x_{N+1}|\mu)]$$is formally correct, this leads to confusion when applying the Law of Total Expectation, in that the formula $$\mathbb{E}_{x_{N+1}|D} \Big[\mathbb{E}_{\mu|D}\Big[p(x_{N+1}|\mu)\Big]\Big]$$ does not make sense. What you want to find is $\mathbb{E}_{x_{N+1}|D}[X_{N+1}]$ and $\text{var}_{x_{N+1}|D}[X_{N+1}]$. So, if you apply the Law of Total Expectation, you get $$\mathbb{E}_{x_{N+1}|D}[X_{N+1}]=\mathbb{E}_{\mu|D}\left\{\mathbb{E}_{x_{N+1}|\mu,D}[X_{N+1}]\right\}=\mathbb{E}_{\mu|D}\left\{\mu\right\}=\frac{a_0 + \sum_{n=1}^N x_n}{\beta_0 + N}\,.$$ Similarly, for the variance, \begin{align*} \text{var}_{x_{N+1}|D}[X_{N+1}] &= \mathbb{E}_{\mu|D}\left\{\text{var}_{x_{N+1}|\mu,D}[X_{N+1}]\right\}+\text{var}_{\mu|D}[\mathbb{E}_{x_{N+1}|\mu,D}\left\{X_{N+1}\right\}]\\ &= \mathbb{E}_{\mu|D}\left\{\mu\right\}+\text{var}_{\mu|D}[\mu]\\ &= \frac{a_0 + \sum_{n=1}^N x_n}{\beta_0 + N}+\frac{a_0 + \sum_{n=1}^N x_n}{(\beta_0 + N)^2}\\ &= \frac{a_0 + \sum_{n=1}^N x_n}{\beta_0 + N}\,\frac{\beta_0 + N+1}{\beta_0 + N} \end{align*}
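The closed forms above can be sanity-checked by Monte Carlo in a few lines of Python, assuming (as the derivation implies, since $\text{var}_{x|\mu}[X]=\mu$) a Poisson likelihood with a Gamma$(a_0,\beta_0)$ prior on $\mu$; the prior parameters and data below are made up for illustration. Draw $\mu$ from the posterior Gamma, then $x_{N+1}\mid\mu$ from a Poisson, and compare with the formulas.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical prior (Gamma shape a0, rate beta0) and observed data D
a0, beta0 = 2.0, 1.0
x = rng.poisson(3.0, size=50)
A, B = a0 + x.sum(), beta0 + len(x)    # posterior Gamma(A, B) parameters

# Closed forms derived above
mean_closed = A / B
var_closed = (A / B) * (B + 1) / B

# Monte Carlo check: mu ~ posterior, then X_{N+1} ~ Poisson(mu)
mu = rng.gamma(A, 1.0 / B, size=1_000_000)   # numpy uses scale = 1/rate
x_new = rng.poisson(mu)
print(mean_closed, x_new.mean())   # should closely agree
print(var_closed, x_new.var())
```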
42,444
Should I take the Shapiro Wilk test with a pinch of salt here?
With a sample size that large, I'd just ignore the normality assumptions, not least because normality statistics--including SW--are sensitive to sample size. You could inspect the QQ plot (which, as you say, looks fine) or the histogram, but I find it difficult to trust my own eyes (or others' for that matter). You could also inspect skewness and kurtosis statistics, using the standard rules of thumb, but those are a little flaky too. Basically, I wouldn't worry too much.
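One way to see the sample-size sensitivity concretely (a Python sketch using sample skewness rather than SW itself, to stay dependency-free): duplicating a dataset leaves its shape statistics unchanged but inflates the test statistic, so with a huge $n$ even trivial departures from normality become "significant".

```python
import numpy as np

rng = np.random.default_rng(7)

def skew_and_z(x):
    """Sample skewness and an approximate z-statistic (SE ~ sqrt(6/n))."""
    s = ((x - x.mean()) ** 3).mean() / x.std() ** 3
    return s, s / np.sqrt(6 / len(x))

x = rng.normal(size=500)     # essentially normal data
big = np.tile(x, 1000)       # the same "shape", 1000x the sample size

s1, z1 = skew_and_z(x)
s2, z2 = skew_and_z(big)
print(s1, z1)    # modest skewness, unremarkable z
print(s2, z2)    # identical skewness, z inflated ~sqrt(1000)-fold
```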
42,445
Where is the correlation parameter in the linear mixed-effect model equation?
Maybe this will help you: in lme4 you can specify correlated intercept and slope: y ~ x + (x | g) which translates to: y ~ 1 + x + (1 + x | g) where y is a response variable and x is a predictor, while g is some grouping variable for random effects. lme4 by default assumes that the terms could be correlated, but you can also define them as uncorrelated: y ~ x + (x || g) which translates to: y ~ 1 + x + (1 | g) + (0 + x | g) In the second case the intercept and slope are treated as independent, i.e. their correlation is constrained to be zero. So yes, the correlation parameter is part of the variance-covariance matrix of the random effects. Check the article by Bates et al. (in press) for more information on this.
42,446
Where is the correlation parameter in the linear mixed-effect model equation?
A model of the LMM/GLMM type is defined by two equations: an equation including the linear combination of fixed- and random-effect predictors (as for example Eq. 12 in Moscatelli et al. (2012)), and a second equation for the variance-covariance matrix of the error term. The cross-correlation parameter is in the variance-covariance matrix of the between-subject error term (i.e., the variance-covariance matrix of the random effects). In Moscatelli et al. (2012) these are Eq. 3 and Eq. 5, respectively. In Eq. 5 there is no cross-correlation parameter because the model has only one random-effect parameter.
42,447
Zero-inflated negative binomial models: why not use two separate models?
Zero-inflated negative binomial models don't assume that all the zeroes come from the Bernoulli process; some may come from the negative binomial process. Toss a coin & write down zero if it's tails. If it's heads then start tossing again & write down the number of tails until you have, say, three heads. There are two different reasons for writing down zero, so you can't separate the data into two parts for different models. Hurdle models, on the other hand, can indeed be seen as two separate models—a truncated negative binomial for the non-zero count component & a Bernoulli for the zero component—for which the likelihoods are separately maximized. See here.
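The coin-tossing story can be sketched in a few lines of Python (the parameter values are made up): simulate from a zero-inflated negative binomial and count how many of the observed zeros come from each source. Because a non-negligible share of zeros comes from the NB process, you cannot split the data by "zero vs non-zero" and fit the two parts separately.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical ZINB parameters: pi = structural-zero probability,
# (r, p) = negative binomial size and success probability
pi, r, p = 0.3, 3, 0.5

structural = rng.random(n) < pi                  # the Bernoulli "coin"
counts = rng.negative_binomial(r, p, size=n)     # the NB process
y = np.where(structural, 0, counts)

zeros = (y == 0)
from_coin = structural & zeros                   # structural zeros
from_nb = (~structural) & zeros                  # zeros the NB produced itself
print(zeros.mean(), from_coin.mean(), from_nb.mean())
```

Analytically, P(zero) = pi + (1 - pi) * p**r here, and the second term is exactly the share of zeros that a "split the zeros off" approach would misattribute.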
42,448
Beginner learning resources : Pdf and likelihood function for non-Gaussian time series model
Let $\Phi=(\phi_1,...,\phi_p)$ and $\mathbf{y}_{t-1} = (y_{t-1},...,y_{t-p})$. So your equations become $y_t = \Phi^{T}\mathbf{y}_{t-1} + \eta(t)$ and $z_{t} = y_{t} + v(t)$, thus $$ z_t - \Phi^{T}\mathbf{y}_{t-1} = \eta(t) + v(t) $$ where $p$ is the parameter of the Bernoulli distributed variable $\eta$; note that $$E[\eta(t) + v(t)]=p$$ and $$var[\eta(t) + v(t)]=p(1-p)+\sigma^2$$ where $\sigma^2$ is the variance of $v(t)$. Using this we construct the likelihood function $$ \prod^{N}_{t=p+1} \mathcal{H}(z_t - \Phi^{T}\mathbf{y}_{t-1}\,|\,p\,,\,p(1-p)+\sigma^2) $$ where $\mathcal{H}$ is a pdf of unknown mean $p$ and variance $p(1-p)+\sigma^2$. Personally I would assume $\mathcal{H}$ is a normal distribution, since on average $\eta(t) + v(t) \sim \mathcal{N}(\,p\,,\,p(1-p)+\sigma^2)$, but this is fuzzy logic since it violates the assumption that the $\eta(t) + v(t)$ are identically distributed...but that's my 2 cents anyway.
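As a rough sketch of how this likelihood could be used in practice (Python, with simulated residuals; the Gaussian form for $\mathcal{H}$ and the crude grid search are my own illustrative choices, and a real fit would also have to estimate $\Phi$):

```python
import numpy as np

def neg_log_lik(params, e):
    """Approximate negative log-likelihood for residuals e_t = eta(t) + v(t),
    treated (as in the text) as Gaussian with mean p and var p(1-p) + sigma^2."""
    p, sigma2 = params
    var = p * (1 - p) + sigma2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (e - p) ** 2 / var)

# Simulated residuals under the assumed model: Bernoulli(p) + Normal(0, sigma^2)
rng = np.random.default_rng(5)
p_true, sigma2_true = 0.3, 0.25
e = rng.binomial(1, p_true, 2_000) + rng.normal(0, np.sqrt(sigma2_true), 2_000)

# Crude grid search over (p, sigma2)
grid = [(p, s) for p in np.linspace(0.05, 0.95, 19)
               for s in np.linspace(0.05, 1.0, 20)]
p_hat, s_hat = min(grid, key=lambda th: neg_log_lik(th, e))
print(p_hat, s_hat)   # estimates should land near (0.3, 0.25)
```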
42,449
"...if the data is linearly separable"
You can use the tour to look at the data. This is a movie of linear projections of the data, so that if the data is linearly separable you should see the groups separate somewhere. Tours are available in ggobi, and the tourr package in R. Video examples can be viewed at the Cook & Swayne "Interactive and Dynamic Graphics for Data Analysis" web site, see the chapter on supervised classification. These videos are in mov format, it is time I put them up on vimeo instead. You can also look at material at my data mining class site or the multivariate data analysis site, and videos at vimeo. The tour will work for data up to about 15 dimensions, beyond that it takes too much time to watch to find separations. Combining with projection pursuit can help some. There is also a package called classifly on CRAN which combines tours with data and classification methods. It will make a grid of predictions to see boundaries between classes in high-d.
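For readers without ggobi/tourr at hand, the core idea of a tour (scanning random 2-D orthonormal projections and watching for group separation) can be mimicked in a few lines of Python with made-up data; the guided and grand tours in those packages are of course far more sophisticated than this random scan:

```python
import numpy as np

rng = np.random.default_rng(11)

# Two groups in 5-D, separated only along dimension 2
d, n = 5, 200
a = rng.normal(size=(n, d))
b = rng.normal(size=(n, d))
b[:, 2] += 6.0

def random_frame(d, rng):
    """Random orthonormal 2-D projection plane (one 'frame' of a tour)."""
    q, _ = np.linalg.qr(rng.normal(size=(d, 2)))
    return q

# Scan many frames; score each by the gap between the projected group means
best = 0.0
for _ in range(500):
    P = random_frame(d, rng)
    gap = np.linalg.norm(a.mean(0) @ P - b.mean(0) @ P)
    best = max(best, gap)
print(best)   # some frame recovers most of the 6-unit separation
```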
42,450
How to choose a kernel for KDE
This is not really a data visualization question. The information is fairly readily available online, e.g. http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/AV0405/MISHRA/kde.html mentions using AMISE to select the bandwidth; the same approach could be used for kernels. But for EDA, you would want to work as with the recommendation for histograms: re-plot with different binwidths to learn different things in the data. Sometimes using a different kernel may be helpful. The normal kernel is generally useful, and I think the bandwidth is more important than the actual kernel. I would suggest adding the tags: distributions, nonparametric. You will possibly get better answers under these topics.
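A quick numerical illustration of the "bandwidth matters more than the kernel" point (a self-contained Python sketch; the data and bandwidths are arbitrary): at the same bandwidth, Gaussian and Epanechnikov kernels give nearly the same density estimate, while changing the bandwidth changes it substantially.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=2_000)   # data from N(0,1); true density at 0 is ~0.399

def kde_at(t, x, h, kernel):
    """Kernel density estimate at point t with bandwidth h."""
    u = (t - x) / h
    if kernel == "gaussian":
        k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    else:  # Epanechnikov
        k = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)
    return k.mean() / h

g_small = kde_at(0, x, 0.3, "gaussian")
e_small = kde_at(0, x, 0.3, "epanechnikov")
g_big   = kde_at(0, x, 2.0, "gaussian")   # heavily oversmoothed

print(g_small, e_small, g_big)   # first two close; third far off
```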
42,451
How to choose a kernel for KDE
The framework of regularization theory (see Regularization Theory and Neural Networks Architectures by Girosi et al.) allows one to tackle the problem of looking for a good kernel in a systematic way. The idea is that the kernel is determined by a smoothness stabilizer, which is analogous to controlling the complexity in the MDL sense, or to the bias-variance error decomposition. One attempts to solve the problem $$ H(f) = \sum_{i}\left(f(x_{i})-y_{i}\right)^{2} + \lambda ||Df||^{2} $$ where $D$ is a differential operator, for example $\frac{d^{2}}{dx^{2}}$. It can be proved that this results in the solution $$ f(x) = \sum_{i}c_{i}G(x-x_{i}) $$ where $G$ is the Green's function associated with the regularizer. By means of cross-validation you can search for good values of $\lambda$ and the order of the differential operator.
42,452
What are some good examples of exploratory data analysis today?
One example I enjoy (and is a simple illustration) is the work by Michael Maltz on analyzing the uniform crime reports that police agencies supply to the FBI. See: Maltz, M. D. (2010). Look before you analyze: Visualizing data in criminal justice. In Piquero, A. and Weisburd, D., editors, Handbook of Quantitative Criminology, chapter 3, pages 25-52. Springer New York, New York, NY. For some background, the FBI does not have standardized ways to report missing or incomplete reports (they collect data monthly, so an agency could report for some months but not the entire year). So an uncritical analyst would observe zeroes or very low numbers for a particular jurisdiction and not suspect missing data, e.g. see the numbers for Florida in Parker & Pruitt (2000). So there is quite a bit of precedent in the criminology literature of modelling these data without discovering such errors. Here is a good example from blogs discussing published papers: Uri Simonsohn on the Data Colada blog and Felix Schönbrodt on a failed replication in psychology and how ceiling effects of the instrument are not an issue. Here are the images of the original and replication ECDF's from the Data Colada blog: There are also some good examples on this site. I thought I had a good example here, but a few others that I really enjoyed are: Improving data analysis through a better visualization of data? Which permutation test implementation in R to use instead of t-tests (paired and non-paired)?. A terrific quote by G. Jay Kerns here "In my opinion, these data are a perfect (?) example that a well chosen picture is worth 1000 hypothesis tests. We don't need statistics to tell the difference between a pencil and a barn.". This is a bit more of a contentious one, which I might rename to If a statistician were in a cave her whole life and then one day was shown a scatterplot what would she see? I realize these aren't published, but I think they are illustrative nonetheless. 
I'm sure you could cull up more on this site as well.
42,453
What are some good examples of exploratory data analysis today?
Our neuroscientist colleague Dr. Trejo had a successful experience with exploratory data analysis applied to adult neurogenesis in his work "Involvement of specific adult hippocampal neurogenic subpopulations on behavior acquisition and persistence abilities" (under peer review, so details cannot yet be provided). I would suggest you contact Dr. Trejo and chat with him. He's truly cooperative, so I'm sure he will explain his case in more detail. The issues they were facing in his lab were two: Their data only showed "trends" regarding their original hypothesis on the relationship between neural structure and learning-memory processes. They spent weeks manually looking for relevant correlations with classical statistical packages. (Automated) exploratory data analysis helped them find some hints about which variables might be key to the relationship. Of course, further confirmatory work was then done.
42,454
Does the log likelihood become unimodal when the sample size goes to infinity?
Regarding "I would expect that the number and depth of the local minima would decrease as the sample size increases", this is not true in general. For example, let $X_1,\dots,X_n$ be a random sample from the $k$-component mixture $$ w_1\cdot\mathrm{N}(\mu_1,\sigma_1^2) + \dots + w_k\cdot\mathrm{N}(\mu_k,\sigma_k^2) \, , $$ in which $w_i\geq 0$ and $\sum_{i=1}^k w_i=1$. Define $\theta_i=(w_i,\mu_i,\sigma_i^2)$, for $i=1\dots,k$, and $\theta=(\theta_1,\dots,\theta_k)$, and let $x=(x_1,\dots,x_n)$. The likelihood function is $$ L_x(\theta) = \prod_{i=1}^n \sum_{j=1}^k w_j\cdot\frac{1}{\sqrt{2\pi}\sigma_j} e^{-(x_i-\mu_j)^2/2\sigma_j^2} \, . $$ Since for any permutation $\tau:\{1,\dots,k\}\xrightarrow{\rm 1:1}\{1,\dots,k\}$ we have $$ L_x(\theta_1,\dots,\theta_k) = L_x(\theta_{\tau(1)},\dots,\theta_{\tau(k)}) \, , $$ for this model the likelihood has at least $k!$ symmetric modes, no matter how large the sample size $n$ is.
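The permutation symmetry is easy to verify numerically; here is a short Python sketch with a made-up two-component fit: any permutation of the component labels gives exactly the same log-likelihood, so these symmetric modes never disappear as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])

def log_lik(w, mu, s2, x):
    """Log-likelihood of a k-component Gaussian mixture at (w, mu, s2)."""
    dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    return np.sum(np.log(dens.sum(axis=1)))

w  = np.array([0.4, 0.6])
mu = np.array([-2.0, 2.0])
s2 = np.array([1.0, 1.5])

l1 = log_lik(w, mu, s2, x)
l2 = log_lik(w[::-1], mu[::-1], s2[::-1], x)   # swap the component labels
print(l1, l2)   # identical values
```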
42,455
Presenting results of a meta-analysis with multiple moderators?
In essence, there are 6 different combinations of those two moderators. So, you could just compute and present the estimated/predicted effects for each of those combinations. Here is an example using a dataset from the metafor package that also happens to include two categorical moderators with 2 and 3 levels, respectively: ### load data dat <- get(data(dat.mcdaniel1994)) ### calculate r-to-z transformed correlations and corresponding sampling variances dat <- escalc(measure="ZCOR", ri=ri, ni=ni, data=dat) ### fit mixed-effects meta-regression model (struct has 2 levels, type has 3 levels) res <- rma(yi, vi, mods = ~ struct + type, data=dat) ### compute the estimated/predicted correlation for each combination sav <- predict(res, newmods=rbind(c(0,0,0), c(0,1,0), c(0,0,1), c(1,0,0), c(1,1,0), c(1,0,1)), transf=transf.ztor, addx=TRUE, digits=2) sav The results are: pred ci.lb ci.ub cr.lb cr.ub X.intrcpt X.structu X.typep X.types 1 0.26 0.22 0.30 -0.08 0.54 1 0 0 0 2 0.19 0.01 0.36 -0.19 0.52 1 0 1 0 3 0.30 0.19 0.40 -0.05 0.58 1 0 0 1 4 0.20 0.13 0.26 -0.15 0.50 1 1 0 0 5 0.13 -0.04 0.29 -0.24 0.47 1 1 1 0 6 0.24 0.10 0.36 -0.13 0.54 1 1 0 1 One could also put these results into a forest plot (not showing the individual estimates, just these estimated outcomes): slabs <- c("struct = s, type = j", "struct = s, type = p", "struct = s, type = s", "struct = u, type = j", "struct = u, type = p", "struct = u, type = s") par(mar=c(4,4,1,2)) forest(sav$pred, sei=sav$se, slab=slabs, transf=transf.ztor, xlab="Correlation", xlim=c(-.4,.7)) text(-.4, 8, "Structure/Type", pos=4, font=2) text(.7, 8, "Correlation [95% CI]", pos=2, font=2) That would look something like this: There is a bit of redundancy in presenting all 6 combinations, since your model assumes that the influence of the two moderators is additive. But I think this makes the results quite clear.
42,456
Low explained variance in Random Forest (R randomForest)
Edited response. A few changes that may recover a little signal... Scaling: RF is only scale-invariant with respect to the features, not the responses. RF regression uses mean squared error as its loss function and cross-validated squared residuals to assess performance. Try taking the logarithm or square root of your responses to lower the leverage of a few 'outliers'. Filtering: Use the function rfcv from randomForest to select variables. Otherwise a linear filter may be useful. Collinearity filtering: "I checked chi-square between pairs of variables and removed the ones that could be associated (p-value < 0.05), but the result is the same." - Don't use a fixed p-value threshold of 0.05. Use whatever threshold, by whatever similarity measure, makes your model work (in terms of CV performance). Did you remove both members of the pairs? Variable importance: Variable importance from a broken model should not be trusted. Evaluating RF performance: That RF fits its own training set is irrelevant. The trees of RF regression are grown almost to maximal depth and will overfit the training set. Only cross-validation (segmentation, OOB, n-fold, etc.) can be used to assess the performance. The following code shows how % var explained is computed and how the OOB prediction is made.
library(randomForest)
obs = 500
vars = 100
X = replicate(vars, factor(sample(1:3, obs, replace=TRUE)))
y = rnorm(obs, sd=5)^2
RF = randomForest(X, y, importance=TRUE, ntree=20, keep.inbag=TRUE)

# %Var explained as printed by print(RF)
print(RF)
cat("% Var explained:\n",
    100 * (1 - sum((RF$y - RF$predicted)^2) / sum((RF$y - mean(RF$y))^2)))

# How the out-of-bag predicted values are formed:
# matrix with one row per observation and one column of predictions per tree
allTreePred = predict(RF, X, predict.all=TRUE)$individual

# for the i'th sample, take the mean over those trees where it was OOB (inbag == 0)
OOBpred = sapply(1:obs, function(i) mean(allTreePred[i, RF$inbag[i,] == 0]))

# the values agree with RF$predicted up to float precision
hist(OOBpred - RF$predicted)

# if using RF to predict its own training data
Ypred = predict(RF, X)

# any observation i is in-bag for roughly 63% of the trees and so influences
# its own prediction; the left-hand plot below therefore looks falsely promising
par(mfrow=c(1,2), mar=c(4,4,3,3))
ylims = range(c(Ypred, OOBpred))
plot(y, Ypred, ylim=ylims,
     main=paste("simple pred \n R^2 =", round(cor(y, Ypred)^2, 2)))
plot(y, OOBpred, ylim=ylims,
     main=paste("OOB prediction \n R^2 =", round(cor(y, OOBpred)^2, 2)))
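The in-bag fraction mentioned in the comments follows from bootstrap sampling: each observation appears in a given tree's bootstrap sample with probability 1 - (1 - 1/n)^n, which approaches 1 - 1/e ≈ 0.632. A quick check of the arithmetic (in Python, just for illustration):

```python
import math

n = 500  # number of observations, as in the simulation above
# probability an observation is in a given tree's bootstrap sample
p_inbag = 1 - (1 - 1/n) ** n
print(round(p_inbag, 3))            # ~0.632
print(round(1 - math.exp(-1), 3))   # the limiting value 1 - 1/e
```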
42,457
Birthday "Paradox" -- with a different perspective
When in doubt, simulate. (I'm sure that you can actually put together a formula, but it will likely be painful to look at.) I'll work with the criterion being "at least one triple birthday" and "at least eleven double birthdays" -- changing the code below to check for exactly so many birthdays is not hard.

n.sims <- 1e5
n.persons <- 60
counter <- 0
pb <- winProgressBar(max=n.sims)
for ( ii in 1:n.sims ) {
  setWinProgressBar(pb, ii, paste(ii, "of", n.sims))
  set.seed(ii)
  birthdays <- sample(x=365, size=n.persons, replace=TRUE)
  birthday.table <- table(birthdays)
  if ( sum(birthday.table>=4) == 0 &
       sum(birthday.table==3) >= 1 &
       sum(birthday.table==2) >= 11 ) counter <- counter+1
}
close(pb)
counter/n.sims

Out of 100,000 simulations, I get six hits, for a $p$-value of $p=0.00006$. If you twiddle the set.seed() command, e.g., to set.seed(2*ii), you will get slightly different results (in this case $p=0.00004$), which serves as a sort of sensitivity analysis.
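The counting condition in the if statement can be checked deterministically on hand-built examples. Here is a hypothetical Python port of just that predicate (not part of the original answer):

```python
from collections import Counter

def matches(birthdays):
    """True if: no day occurs 4+ times, at least one day occurs exactly
    3 times, and at least eleven days occur exactly 2 times."""
    counts = Counter(birthdays).values()
    return (not any(c >= 4 for c in counts)
            and sum(c == 3 for c in counts) >= 1
            and sum(c == 2 for c in counts) >= 11)

# 1 triple + 11 doubles + 35 singletons = 60 people
good = [1]*3 + [d for d in range(2, 13) for _ in (0, 1)] + list(range(13, 48))
print(len(good), matches(good))  # 60 True
```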
42,458
Birthday "Paradox" -- with a different perspective
Assuming you aim for the case that exactly (so not at least) 11 double birthdays and one triple birthday occur: $$\begin{align}p(60,n_2=11,n_3=1) &= \frac{\text{possibilities with 11 double birthdays and 1 triple birthday}}{\text{all possibilities}}\\ &= \frac{\frac{60!}{35!\,22!\,3!}\, 21!! \,\cdot \,365 \cdot 364 \cdots (365-n+1+1\cdot11+2\cdot1)}{365^{60}}\end{align}$$ which my calculation in R

factorial(60)/(factorial(35)*factorial(22)*factorial(3)) *
  pracma::factorial2(21) * cumprod(319:365)[47] / 365^60

approximates as $3.64 \times 10^{-5}$. Explanation of the terms in the calculation: $365^{60}$ is the number of ways to select 60 random birthdays among 365 equally probable days. $365 \cdot 364 \cdots (365-n+1+1\cdot11+2\cdot1) = 365 \cdot 364 \cdots 319$ is the number of ways to assign, in order, $60-11-2\cdot 1 = 47$ distinct days among the 365 equally probable days. $\frac{60!}{35!\,22!\,3!}$ is the number of unique ways to partition 60 people into groups of 35, 22, and 3 (the numbers of people with single birthdays, double birthdays, and triple birthdays). $21!! = 21 \cdot 19 \cdots 3 \cdot 1$ is the number of ways to partition the 22 people in the double-birthday group into pairs.
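The same computation can be cross-checked in exact rational arithmetic (sketched here in Python rather than R):

```python
from fractions import Fraction
from math import factorial, prod

def dfact(n):
    """Odd double factorial n!! = n*(n-2)*...*1."""
    return prod(range(n, 0, -2))

ways = (Fraction(factorial(60), factorial(35) * factorial(22) * factorial(3))
        * dfact(21)                # pair up the 22 double-birthday people
        * prod(range(319, 366)))   # assign 47 distinct days: 365*364*...*319
p = ways / Fraction(365) ** 60
print(float(p))  # ~3.64e-05
```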
42,459
How to evaluate fit of a logistic regression
Standard univariate logistic regression of $y$ on $x$ finds the coefficients $\alpha$, $\beta$ that best fit your training data $\{(x_i, y_i), i \in [1, N]\}$ in the following equation: (model 1): $y_i = (1 + \exp(-(\alpha + \beta x_i)))^{-1}$ Note that the fit will be bad if the $y$ in your data are not in $(0,1)$, so you'll have to transform your data if you want to use logistic regression. One option might be to transform $y$ into a proportion (number of occurrences of the "phenomenon" divided by population of the corresponding area?). Also, the fact that "one of the inputs for which I need a predicted output is far larger than the inputs which I used to make a regression" is a problem, because you will be extrapolating the results of the model to unknown regions of the data. A value of $x_i$ much higher than the ones in the training sample will probably give you a $\hat y_i$ very close to 1. Directly assessing prediction error Once the model is fitted and you have your estimated parameters $\hat \alpha$ and $\hat \beta$, you get a predicted output value $\hat y_i$ for each observation $x_i$: $\hat y_i = (1 + \exp(-(\hat \alpha + \hat \beta x_i)))^{-1}$. You can easily assess goodness of fit on your graphic calculator using the observed $y_i$ and their corresponding predicted values $\hat y_i$: either by plotting one against the other (if the fit were perfect this would give you a straight line, the identity line, because then $y=\hat y$) or by computing an error measure, for instance the root mean squared error: $rmse= \sqrt{\frac{1}{N} \sum_{i=1}^N (y_i - \hat y_i)^2}$. This tells you the average distance between observed outcomes $y_i$ and the model-predicted outcomes $\hat y_i$ (the lower $rmse$, the better the fit). It is not a standardized score like $R^2$, but it is easy to compute and interpret. Now to assess the predictive power of the model, it is best to compare $y_i$ and $\hat y_i$ on a validation dataset, i.e. data that were not used in the fit (e.g. by withholding a portion of data during training; see cross-validation for more info). Pseudo-R² The usual $R^2$ of linear regression does not apply to logistic regression, for which several alternative measures exist. In all variants, $R^2$ is a real value between 0 and 1, and the closer to 1 the better the model. One of them uses the likelihood ratio, and is defined as follows: $R^2_L = 1 - \frac{L_1}{L_0}$, where $L_1$ and $L_0$ are the log-likelihoods of (respectively) model 1 (see above) and the following model 0, which is a logistic regression on just a constant (and does not depend on $x$): (model 0): $y_i = (1 + \exp(-\alpha))^{-1}$ For any logistic regression model with $y \in \{0,1\}$ the log-likelihood is computed from the observed $y$ and the predicted $\hat y$, using the following formula (but I'm not sure it applies for continuous $y \in [0,1]$): $L=\sum_{i= 1}^N(y_i \ln(\hat y_i)+(1−y_i)\ln(1−\hat y_i))$ Another pseudo-R² is based on the linear correlation of $y$ and $\hat y$, which is easily computed on any graphic calculator with stat functions: $R^2_{cor} = \left( \widehat {cor(y_i, \hat y_i)} \right) ^2$
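The error and pseudo-R² measures above are easy to compute in any language. A hypothetical Python sketch (the observed and predicted values are made up, purely for illustration):

```python
import math

y     = [0.1, 0.4, 0.35, 0.8]   # observed proportions (made-up data)
y_hat = [0.2, 0.3, 0.40, 0.7]   # model-predicted values (made up too)

# root mean squared error
rmse = math.sqrt(sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat)) / len(y))

# log-likelihood (the binary formula, applied here to values in (0,1))
def loglik(y, p):
    return sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
               for yi, pi in zip(y, p))

L1 = loglik(y, y_hat)        # fitted model
p0 = sum(y) / len(y)         # the constant-only model predicts the mean
L0 = loglik(y, [p0] * len(y))
R2_L = 1 - L1 / L0           # likelihood-ratio pseudo-R^2

print(round(rmse, 3), round(R2_L, 3))
```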
42,460
How to decide which penalty measure to use ? any general guidelines or thumb rules out of textbook
There can be many considerations to this matter. To name a few: Inference: the distribution of ridge estimates is fairly simple to derive. Lasso, and basically any other penalty that performs variable selection, has only limited probabilistic results. Sparsity: If you desire a model with only a few predictors (say, for speed of prediction, for interpretability, ...) then you will want $l_1$ regularization. Speed of computation: The time complexity of the learning can be a consideration. There are differences between the algorithms. See here for some guidance. This becomes especially important if you plug the whole procedure in a cross validation scheme where models are fitted repeatedly.
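The sparsity point can be illustrated through the proximal operators of the two penalties: the $l_1$ map (soft-thresholding) sets small coefficients exactly to zero, while the $l_2$ (ridge) map only shrinks them. A minimal sketch (Python, not from the original answer):

```python
def prox_l1(beta, lam):
    """Soft-thresholding: argmin_b 0.5*(b - beta)^2 + lam*|b|."""
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0

def prox_l2(beta, lam):
    """Ridge shrinkage: argmin_b 0.5*(b - beta)^2 + 0.5*lam*b^2."""
    return beta / (1 + lam)

coefs = [3.0, 0.5, -0.2, -4.0]
print([prox_l1(b, 1.0) for b in coefs])  # small entries become exact zeros
print([prox_l2(b, 1.0) for b in coefs])  # all merely shrunk, none zero
```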
42,461
re-arrange elements in two vectors to minimize the elementwise difference between them
Sort the two vectors and match them in order. The question seeks a permutation $\sigma$ of the indexes $\{1,2,\ldots, n\}$ that minimizes $$f_{X,Y}(\sigma) = \sum_{i=1}^n |X_i - Y_{\sigma(i)}|.$$ Let $X_{[1]} \le X_{[2]} \le \cdots \le X_{[n]}$ be the order statistics for $X$. Use similar notation for the order statistics of $Y$. Then a permutation guaranteed to minimize $f_{X,Y}(\sigma)$ is the one that matches $X_{[i]}$ to $Y_{[i]}$ for each $i$. To see this, suppose that $X_i \le X_j$ but $Y_{\sigma(i)} \ge Y_{\sigma(j)}$. There are only six ways in which these four numbers can be ordered. Three of them are the following: $X_i \le X_j \le Y_{\sigma(j)} \le Y_{\sigma(i)}$, $X_i \le Y_{\sigma(j)}\le X_j \le Y_{\sigma(i)}$, and $X_i \le Y_{\sigma(j)}\le Y_{\sigma(i)}\le X_j$. Changing $\sigma$ to match $i$ with $\sigma(j)$ and $j$ with $\sigma(i)$ decreases $f_{X,Y}(\sigma)$ by $0$, $2|X_j - Y_{\sigma(j)}|$, and $2|Y_{\sigma(i)}-Y_{\sigma(j)}|$, respectively. The other three cases are similar to these (but with the $Y_{*}$ in the positions of the $X_{*}$), with similar results upon changing $\sigma$. In no case does $f$ increase. Therefore, if $\sigma$ does not match up the ordered sequences of the $X$ and $Y$, we can always replace it with one that does match them up and achieves no worse a value of $f$, QED. Because sorting can be accomplished (in the usual models of computing) with $O(n\log(n))$ effort, such a solution can be found in $O(n\log(n))$ time by sorting both the $X_i$ and $Y_i$.
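The result is easy to sanity-check by brute force over all permutations for small $n$ (a Python sketch):

```python
from itertools import permutations
import random

def cost(x, y, sigma):
    """f_{X,Y}(sigma) = sum_i |x_i - y_{sigma(i)}|."""
    return sum(abs(xi - y[j]) for xi, j in zip(x, sigma))

def sorted_match_cost(x, y):
    """Match order statistics: sort both vectors and pair them up."""
    return sum(abs(a - b) for a, b in zip(sorted(x), sorted(y)))

random.seed(0)
x = [random.random() for _ in range(6)]
y = [random.random() for _ in range(6)]
best = min(cost(x, y, sigma) for sigma in permutations(range(6)))
print(abs(best - sorted_match_cost(x, y)) < 1e-12)  # True
```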
42,462
Boundary or threshold test for regression-type scatter plot
Such a pattern would often occur when no "boundary" actually exists. Here I generate X and Y as independent right-skew random variates, yet such a pattern occurs: The impression of any sense of a boundary in my plot is completely bogus, yet it looks very similar to yours. (There's an actual vertical boundary in this bivariate distribution at $x=80$, but I could generate very similar looking plots without any boundaries at all.) Here's the code I used to generate the plot (in R):

x = rbeta(1000, 1, 10) * 80
y = rbeta(1000, 1, 3) / 1.5 + .3
plot(x, y, ylim=c(0, 1))

Trying it a few more times it looks like about a third of the time it gives a plot that seems to have such a slanting boundary. No doubt a little fiddling with distributions could improve the proportion of times it occurs and at the same time make it look even more like your picture (this shifted/scaled beta(1,10)$\times$beta(1,3) was the very first counterexample I tried). Given my picture doesn't actually have any boundary there, one should be careful of over-interpreting such a pattern. You'd need a characterization of what makes it a boundary that wouldn't generate lots of false positives on examples like the one I give.
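A hypothetical Python analogue of the same construction (random.betavariate in place of rbeta, no plotting), showing that the two variates are independent yet confined to a box that can mimic a slanted boundary:

```python
import random

random.seed(1)
n = 1000
# independent right-skew variates, as in the R example above
x = [random.betavariate(1, 10) * 80 for _ in range(n)]
y = [random.betavariate(1, 3) / 1.5 + 0.3 for _ in range(n)]

# x is capped at 80 and y lies strictly inside (0.3, 0.3 + 1/1.5),
# despite x and y being generated independently
print(max(x) < 80, min(y) > 0.3, max(y) < 0.3 + 1/1.5)
```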
42,463
Boundary or threshold test for regression-type scatter plot
You can use a permutation-based test for such a threshold. It tests the hypothesis of whether a "data-sparse" region above the threshold line is due to random chance or not. In brief: the basic idea is to calculate the area of the "data-sparse" region and use it as a statistic. The next step is to randomly permute the X-coordinates of the scatter plot and repeat the calculation of the area of the "data-sparse" region. The probability p is the proportion of times the recalculated area exceeded the original area. If p is sufficiently small, the "data-sparse" region is deemed to be significant.
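A stripped-down sketch of the idea in Python. As a simplifying assumption on my part, the statistic here is the count of points above a candidate line rather than an area, but the permutation logic is the same:

```python
import random

def sparse_region_pvalue(x, y, a, b, n_perm=999, seed=0):
    """One-sided permutation p-value for the count of points above the
    candidate boundary y = a + b*x being unusually small."""
    rng = random.Random(seed)
    def above(xs):
        return sum(yi > a + b * xi for xi, yi in zip(xs, y))
    observed = above(x)
    xp = list(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(xp)           # break any x-y dependence
        if above(xp) <= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# sanity check: a horizontal line (b = 0) is unaffected by permuting x,
# so the p-value is exactly 1
x = [1, 2, 3, 4, 5]; y = [0.1, 0.5, 0.2, 0.9, 0.4]
print(sparse_region_pvalue(x, y, a=0.6, b=0.0))  # 1.0
```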
42,464
Boundary or threshold test for regression-type scatter plot
I would start by finding the "upper envelope" of your data and then representing the "envelope" as a straight line or piece-wise linear function. For starters, you could estimate the "envelope" as a piece-wise constant function f(x) = max{ y_k : |x - x_k| < delta }, where delta is a bandwidth parameter, say 3, and (x_k, y_k) are your data points. Drawing a straight line through the points (x_k, f(x_k)) should be straightforward :)
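The piece-wise constant envelope described above, as a small Python function (a sketch; delta and the data are placeholders):

```python
def upper_envelope(xs, ys, delta):
    """f(x) = max{ y_k : |x - x_k| <= delta }, evaluated at each x_k."""
    def f(x):
        nearby = [yk for xk, yk in zip(xs, ys) if abs(x - xk) <= delta]
        return max(nearby)  # nonempty: x is always within delta of itself
    return [f(xk) for xk in xs]

xs = [0, 1, 2, 3, 10]
ys = [0.5, 0.9, 0.4, 0.7, 0.2]
print(upper_envelope(xs, ys, delta=1.5))  # [0.9, 0.9, 0.9, 0.7, 0.2]
```

A line fitted through the points (x_k, f(x_k)) then upper-bounds the nearby data by construction.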
42,465
Boundary or threshold test for regression-type scatter plot
My idea for how this problem might be solved: Calculate a linear model to obtain the regression line $r$. Calculate the normal vector $v$ to the resulting regression line. Shift $r$ along $v$ until all data points are under $r$. To optimize $r$, you might rotate it by some angle $\alpha$ and stop at the best $\alpha$ you found, maybe using the Residual Sum of Squares as the reference criterion. Like I tried to show in this figure: Another approach could be to use Support Vector Machines. I don't know if this is possible with your data, but maybe you can produce some dummy points located above your data and separate them from your original points using an SVM. This is just some idea I came up with. Though, I would prefer the first method.
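The fit-and-shift steps can be sketched as follows (the data are invented; shifting vertically by the largest positive residual gives the same line as shifting along the normal, and the rotation step is left out):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = 1.5 * x + rng.normal(0, 1, 200)

# Step 1: ordinary least-squares regression line r.
slope, intercept = np.polyfit(x, y, 1)

# Steps 2-3: shift r upward until every data point lies on or below it.
intercept += (y - (slope * x + intercept)).max()
```

After the shift, the line touches the highest residual point and bounds the cloud from above; a subsequent rotation search would then pivot the line around that point.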
42,466
Boundary or threshold test for regression-type scatter plot
This is potentially not the most robust solution. But you may be able to seriously improve the quality of the envelope using something along these lines: break down your data into n intervals (where the number of intervals depends on the density of your data); find the max of your data within each interval; pass a linear regression model through the selected maximum data points.
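A minimal numpy sketch of these three steps (the data and the number of intervals are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 300)
y = 2 * x + 1 - rng.exponential(1.0, 300)   # true upper boundary: y = 2x + 1

# 1. break the data into n intervals
n_intervals = 15
edges = np.linspace(x.min(), x.max(), n_intervals + 1)

# 2. find the max of the data within each interval
xs, ys = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (x >= lo) & (x <= hi)
    if in_bin.any():                         # skip empty intervals
        k = np.argmax(y[in_bin])
        xs.append(x[in_bin][k])
        ys.append(y[in_bin][k])

# 3. pass a linear regression model through the selected maxima
slope, intercept = np.polyfit(xs, ys, 1)
```

With per-interval maxima sitting close to the boundary, the fitted slope lands near the true boundary slope of 2.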
42,467
Random Forest checklist
Scaling is not required; RF training is invariant to all combinations of monotonic transformations of the predictors. classwt is not reliable; RF and unbalanced data is a long story, try browsing the site or ask a more detailed question. RF shouldn't have any problems with correlated predictors (provided that you have enough trees). Optimizing the model by removing variables with the smallest DecreaseGini may be unstable and thus pretty tricky -- remember that you need to do cross-validation and a proper test to detect a significant effect of some variable on model performance; importance measures on their own are not enough.
42,468
Forecasting: Different Model for 1 month, 2 month, 6 month forecasts?
What you are talking about is using a "direct" forecasting strategy rather than the more popular "recursive" forecasting strategy. In a recursive strategy, one model is fitted, usually based on minimizing the one-step forecast mean squared error, and the forecasts for all future horizons are estimated by iterating the equations over time. If the model is linear, and the same as the data generating process, this is optimal. But in reality, the model is at best an approximation to whatever generated the data, and most real-world phenomena are non-linear. Contrary to what @Tom Reilly asserts, there is considerable academic literature on this problem, but very little software implements alternatives to the recursive approach. The problem is addressed, and some of the associated literature is referenced, in my recent paper with Souhaib Ben Taieb: https://robjhyndman.com/publications/boostingar/ The bottom line is that you should try different approaches and choose what works best for your problem. In this case, I suggest you try the direct approach (with different parameters for different horizons) and compare the forecast accuracy with what you get using the recursive approach.
42,469
Forecasting: Different Model for 1 month, 2 month, 6 month forecasts?
This is a very common issue in business forecasting, where we want sensible forecasts for the short term without sacrificing long term performance. You can test your data to see this by going back over multiple lags and holdout periods to see which models work best at different lags-- maybe one model does well 1-3 periods out, but another does much better over a longer horizon. The forecast and accuracy functions from the forecast package in R make doing this sort of thing quite straightforward. Once you have that data, you will have some more subjective choices to make-- is using a single model that has some connection to explaining the data preferable, even if it has more error than a mixed approach that would use different models for different time horizons?
42,470
Forecasting: Different Model for 1 month, 2 month, 6 month forecasts?
John, I am not sure who is feeding you these theories about model parameters being optimal for specific periods out, but it just isn't right. Please share the source of your comments (books, web pages, software, etc.); I would like to read more about this. You want a model that describes the historical variations, and also the periods that are outliers, and then forecast with it. That's it.
Forecasting: Different Model for 1 month, 2 month, 6 month forecasts?
John, I am not sure who is feeding you these theories about model parameters being optimal for periods out, but it just isn't right. Please share the source of your comments(books, web pages, software
Forecasting: Different Model for 1 month, 2 month, 6 month forecasts? John, I am not sure who is feeding you these theories about model parameters being optimal for periods out, but it just isn't right. Please share the source of your comments(books, web pages, software, etc.) I would like to read more about this. You want a model to describe the historical variations and also the periods that are outliers and then forecast it. That's it.
Forecasting: Different Model for 1 month, 2 month, 6 month forecasts? John, I am not sure who is feeding you these theories about model parameters being optimal for periods out, but it just isn't right. Please share the source of your comments(books, web pages, software
42,471
Feature selection: permutation test Vs deleting a variable
I think the result might be very similar, unless the classification algorithm you are using is biased. Permuting a single variable does not change the characteristics of your dataset. If your dataset has $n$ records and $m$ features, it will still have $n$ records and $m$ features if you permute one of them. If you delete one feature or set it to 0, the resulting dataset will have $m-1$ features. This is a subtle point: the accuracy on a dataset with $m$ features and the one on a dataset with $m-1$ features are not directly comparable. Random forests (RF) usually use the permutation approach: in order to compute the importance of a feature, we compare the decrease in accuracy after permutation. I guess that if you just delete that feature you should be a bit less confident in comparing the resulting accuracy. For example, let's say we have 2 features: $F_1$, binary and very predictive, and $F_2$, with lots of categories and not predictive at all. RF is known to be biased towards $F_2$ because of the many categories. If you permute it, the characteristics of your dataset do not change, and the difference in RF accuracy is due only to the predictiveness of $F_2$; if you delete it, the difference in RF accuracy might be higher because it also reflects the fact that you helped the RF by decreasing its bias.
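The permutation approach can be sketched model-agnostically; the toy data and the trivial stand-in "model" below are invented, and any fitted predict function could be dropped in instead:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
X = rng.normal(size=(n, 2))
y = (X[:, 0] > 0).astype(int)            # only feature 0 carries signal

# A trivial "fitted model" standing in for a random forest:
def predict(X):
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, j, n_rounds=30):
    # Mean drop in accuracy when column j is shuffled; note the dataset
    # keeps its n records and m features throughout.
    base = np.mean(predict(X) == y)
    drops = []
    for _ in range(n_rounds):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        drops.append(base - np.mean(predict(Xp) == y))
    return float(np.mean(drops))

imp0 = permutation_importance(predict, X, y, 0)  # predictive feature
imp1 = permutation_importance(predict, X, y, 1)  # irrelevant feature
```

Here shuffling the predictive feature drops the accuracy roughly to chance, while shuffling the irrelevant one changes nothing, and both numbers come from models evaluated on datasets of the same shape.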
42,472
Naive Bayes with unbalanced classes
Tackling the Poor Assumptions of Naive Bayes Text Classifiers suggests some modifications to Naive Bayes in order to correct for biased sample sets. Also have a look at this (and similar) CV posts on class imbalance, unbalanced class labels, etc.
42,473
Naive Bayes with unbalanced classes
I know an answer has already been accepted, but I thought the following could help future readers. Try implementing this to handle unbalanced classes; it worked pretty well for me: Naive Bayes for Text Classification with Unbalanced Classes
42,474
Benefits of CART over ID3 algorithm
CART does binary splits. ID3, C4.5 and that family exhaust an attribute once it is used. This sometimes makes a difference: in CART the decisions on how to split values based on an attribute are delayed, which means there is a pretty good chance that CART might catch better splits than C4.5. The drawback is that with CART you can't create rules, and the whole tree is larger and harder to interpret. Anyway, the interpretation is not always useful.
42,475
Error term interpretation in the Cox PH model
This is a very old question, but it still doesn't have an answer. The simple answer to the question is that in its original formulation (both in discrete time and later in continuous time) the Cox model does not have an error term. This implies that all sources of individual heterogeneity are captured by the vector of observable characteristics, $X$. This is clearly extremely restrictive, as we know that in reality there exist many sources of variation in the hazard that are not observable. Lancaster (1979) generalized the Cox model to include unobserved heterogeneity (frailty) by introducing a multiplicative error term (and assuming a parametric form for it). $$ h(t| x, v) = h_0(t) \exp(X \beta)\cdot v $$ This mixed proportional hazard model has been modified in several ways over the years. Econometricians, in particular, don't like functional form assumptions, and a large literature has gone in the direction of relaxing the assumptions on the specification of the error term $v$, while as far as possible also keeping $h_0(t)$ semi- or non-parametrically identified. See the introduction of Hausman and Woutersen (2014) for an excellent review with references to key related papers. The paper itself provides a nice example of a recent result in this area of econometrics. The mixed proportional hazard is uniquely identified from the data without resorting to any functional form assumptions on $v$ or $h_0(t)$. The main assumptions to achieve this are the PH assumption and the existence of time-varying exogenous regressors (i.e. uncorrelated with $v$), whose variation is used to identify all other model components. The use of time-varying covariates for identification dates back to older works by Heckman and Honoré (you can find references to their work in the Hausman and Woutersen (2014) paper as well). 
Note that a causal interpretation of the right-hand-side regressors is challenging, since it's typically hard to believe in the independence between right-hand side regressors and $v$ (as this is a fundamentally untestable assumption).
42,476
Error term interpretation in the Cox PH model
You seem to be confusing the Cox proportional hazards model with a linear Gaussian model. The Cox model has no need for an error term. A consequence of this is that an omitted variable can destroy the $\beta$s that are in the model, unlike normal regression, where omitting a term that is orthogonal to the other terms does no such harm.
42,477
Appropriate GLM when response variable is proportion, but not binomial
Before venturing into the territory of GLMs it might be worth fitting a regression model on an appropriately transformed version of the response variable. If we let $0<Y_i<1$ be the area-proportion (and assuming you don't have any proportions that are exactly zero or one) then a reasonable regression model would be: $$\log \bigg( \frac{Y_i}{1-Y_i} \bigg) = \beta_0 + \sum_{k=1}^m \beta_k x_{i,k} + \varepsilon_i \quad \quad \quad \quad \quad \varepsilon_i \sim \text{IID N}(0, \sigma^2).$$ This is a transformation that is closely related to a scaled variant of the hyperbolic tangent function. If we let $\mu_i \equiv \beta_0 + \sum_k \beta_k x_{i,k}$ denote the regression part of the equation then we have: $$\log \bigg( \frac{Y_i}{1-Y_i} \bigg) = \mu_i + \varepsilon_i \quad \quad \iff \quad \quad Y_i = \frac{\exp(\mu_i+\varepsilon_i)}{1 + \exp(\mu_i+\varepsilon_i)}$$ Obviously this regression equation might not fit your data, particularly if there is complicated spatial autocorrelation. Nevertheless, it is a reasonable starting point for modelling to get a simple understanding of the relationship with explanatory variables. This is a linear regression model that can be fit using standard MLE methods. You can then use diagnostic plots to look for nonlinearity, which would indicate a failure of the transformation. You can also use diagnostic methods to test for spatial auto-correlation, etc., to see if you need to generalise your model.
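A small simulation sketch of this model (the coefficients, noise level and sample size are invented), generating proportions from the logistic form and recovering the coefficients by OLS on the logit-transformed response:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
x = rng.normal(size=n)
eps = rng.normal(0.0, 0.5, n)
mu = 0.3 + 1.2 * x                              # beta_0 = 0.3, beta_1 = 1.2
y = np.exp(mu + eps) / (1 + np.exp(mu + eps))   # proportions strictly in (0, 1)

# Logit-transform the response and fit by ordinary least squares.
z = np.log(y / (1 - y))                         # recovers mu + eps exactly
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, z, rcond=None)
```

As the answer notes, this breaks down if any proportion is exactly 0 or 1, since the logit is then undefined.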
42,478
Appropriate GLM when response variable is proportion, but not binomial
Beta-regression comes to mind, as you mentioned. Look around the site and the respective tag, beta-regression. R has a package, betareg as well.
42,479
How can we evaluate the predicted values using Scikit-Learn
In order to get the accuracy of the prediction you can do: print(accuracy_score(expected, y_1)) If you want a few metrics, such as precision, recall and f1-score, you can get a classification report: print(classification_report(expected, y_1)) A confusion matrix will tell you how many of the classified samples fall under each combination of true and predicted label. This will tell you if your classifier confuses some categories. The functions to get these metrics are independent of the classification model you are using (so you can easily test an SVM, for example). You should use predict(), since this will give the labels of the classified samples; predict_proba will give the probability of a sample belonging to each category. I recommend reading a few of the documentation pages: Classification report, Accuracy score, Confusion matrix, AdaBoost classifier in scikit-learn
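Putting the pieces together (the label vectors are invented for illustration):

```python
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix)

expected = [0, 1, 1, 0, 1, 0]   # true labels
y_1      = [0, 1, 0, 0, 1, 1]   # labels returned by predict()

acc = accuracy_score(expected, y_1)          # fraction classified correctly
cm = confusion_matrix(expected, y_1)         # rows: true label, cols: predicted
print(acc)
print(cm)
print(classification_report(expected, y_1))  # precision / recall / f1 per class
```

Here two of the six samples are misclassified, so the accuracy is 4/6 and the off-diagonal entries of the confusion matrix show which labels are being confused.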
42,480
If the level of a test is decreased, would the power of the test be expected to increase?
The probability of making a Type I error and the probability of making a Type II error are inversely related. Thus, if you make your rejection of the null less stringent, all else being equal, the power of your test should increase. From Wikipedia on Statistical Power: “One easy way to increase the power of a test is to carry out a less conservative test by using a larger significance criterion, for example 0.10 instead of 0.05. This increases the chance of rejecting the null hypothesis (i.e. obtaining a statistically significant result) when the null hypothesis is false, that is, reduces the risk of a Type II error (false negative regarding whether an effect exists).”
42,481
If the level of a test is decreased, would the power of the test be expected to increase?
The level of a test is its significance level, $\alpha$. Decreasing the level means making $\alpha$ smaller. If you make $\alpha$ smaller, you make $\beta$ larger* ... and power is $1-\beta$. What you're doing when you lower $\alpha$ is moving the critical value further out into the tail, so you make it harder to reject whether $H_0$ is true or false, since by moving the critical value further into the tail, you have reduced the set of potential samples that are in the rejection region. It's easy to see this 'lockstep' reduction in probability of rejection under $H_0$ and $H_1$ ($\alpha$ and $1-\beta$ respectively) in the context of a power curve, for example, with a one-sample t-test. If you draw the power against the difference between the true population mean and the hypothesized population mean, $\delta=\mu-\mu_0$, you get a curve that is at $\alpha$ when $\delta=0$ and increases as $\delta$ gets further away from 0. If you reduce $\alpha$ by pushing the critical value further into the tail, you "pull the curve down", because you have eliminated some possible** sample arrangements that would previously have led to rejection: As we move from blue (10% significance level) to dark red (5%) to green (1%), ceteris paribus, the whole power curve moves down. * nearly always with typical sorts of tests, but it is possible to construct cases where this doesn't necessarily happen ** whether $|\delta|$ was $0$, or small, or large, those possible values of the statistic between the old critical value and the new no longer count as rejections, so each possible value for $\delta$ has a lower rejection rate.
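The power curve has a closed form for the known-variance (z-test) analogue of the t-test above, which is enough to see the "pull the curve down" effect numerically. A sketch, assuming a two-sided z-test with $\sigma = 1$ and $n = 10$ (arbitrary choices): at $\delta = 0$ the power equals $\alpha$, and at every $\delta$ the power drops as $\alpha$ shrinks.

```python
from statistics import NormalDist

def power(delta, alpha, n=10):
    """Power of a two-sided z-test of H0: mu = mu0 when mu - mu0 = delta (sigma = 1)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # critical value at level alpha
    shift = delta * n ** 0.5
    Phi = NormalDist().cdf
    # probability the test statistic lands in either tail of the rejection region
    return Phi(-z + shift) + Phi(-z - shift)

for alpha in (0.10, 0.05, 0.01):              # the blue, dark red, green curves
    print(alpha, [round(power(d, alpha), 3) for d in (0.0, 0.5, 1.0)])
# each row starts at alpha itself (delta = 0), and the whole curve is lower
# for each smaller alpha
```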
42,482
If the level of a test is decreased, would the power of the test be expected to increase?
Ceteris paribus, when you decrease the significance level $\alpha$ in a classical hypothesis test, you are increasing the amount of evidence required to reject the null hypothesis. This means that you are less likely to reject the null hypothesis, which lowers the probability of a Type I error, but also reduces the power of your test.
42,483
Significance testing of cross-validated classification accuracy: shuffling vs. binomial test
Radmacher and colleagues (J. Comput. Biol. 9:505-511) describe a process for computing the significance of an error rate. We permute the class labels (a few thousand times) and repeat the entire cross-validation procedure to assess the probability of producing a cross-validated error rate as small as the observed one. One concept related to the current question is that the summaries from folds of cross-validation are not independent, and this presents challenges in obtaining confidence intervals and p-values. This is discussed by Jiang and colleagues (Stat Appl Genet Mol Biol. 2008;7(1)). Also note that a 'significant' error-rate measure is a bare-minimum requirement for a prediction rule and does not say much about its usefulness. However, given that small samples are often used and there are many problems with prediction rules, it is still helpful as a sanity check.
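A minimal sketch of the Radmacher-style permutation test: shuffle the labels, rerun the entire cross-validation, and report the fraction of permutations producing an error rate at least as small as the observed one. The classifier (nearest centroid), the toy data, and the leave-one-out scheme here are placeholder choices, not from the paper.

```python
import random

def cv_error(xs, ys):
    """Leave-one-out CV error of a nearest-centroid classifier (toy stand-in)."""
    errors = 0
    for i in range(len(xs)):
        tr_x = xs[:i] + xs[i + 1:]
        tr_y = ys[:i] + ys[i + 1:]
        cents = {}
        for lab in set(tr_y):
            pts = [x for x, y in zip(tr_x, tr_y) if y == lab]
            cents[lab] = sum(pts) / len(pts)
        pred = min(cents, key=lambda lab: abs(xs[i] - cents[lab]))
        errors += pred != ys[i]
    return errors / len(xs)

def permutation_p_value(xs, ys, n_perm=200, seed=0):
    rng = random.Random(seed)
    observed = cv_error(xs, ys)
    hits = 0
    for _ in range(n_perm):
        shuffled = ys[:]
        rng.shuffle(shuffled)                        # break the label-feature link
        hits += cv_error(xs, shuffled) <= observed   # error as small as observed
    return (hits + 1) / (n_perm + 1)                 # add-one correction

xs = [-1.0] * 10 + [1.0] * 10   # well-separated toy data
ys = [0] * 10 + [1] * 10
print(permutation_p_value(xs, ys))   # small p-value: the separation is real
```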
42,484
Significance testing of cross-validated classification accuracy: shuffling vs. binomial test
This is not really an answer to my question, but I would like to provide an explicit and very simple simulation demonstrating the symptoms I described, and I don't want to clutter the question too much. Let's consider the absolutely simplest case possible. I will take 400 one-dimensional samples: 200 are equal to -1 and 200 are equal to +1. All samples located at -1 belong to class A, and all samples located at +1 belong to class B. The classifier will measure the mean (centroid) of each class and assign every test sample to the class whose centroid is closer. Nothing can be simpler than that. Here is an illustration ("A" means 200 points from class A, "B" means 200 points from class B): ---------A-----------B---------> I can do exactly the same Monte Carlo cross-validation as I described above: I randomly select $K=4$ test cases, 2 from each class, train the classifier on the remaining training set, and classify these four; this is repeated 100 times. Obviously the number of correct classifications is 400, i.e. 100% accuracy. Especially for @cbeleites I can also run a usual 100-fold (stratified) CV, the accuracy is also 100%. Note that it does not make sense to iterate this CV, because nothing will change. And now we do the shuffles. I randomly shuffle the labels, repeat exactly the same procedure and get the number of correct classifications $B_j$. Then shuffle the labels again, and repeat it 100 times. Results: mean of $B_j$ is around 200 (very close, like 198-202 on different runs), so chance level. But variance of $B_j$ is in the range of 400-900 (on different runs). This is true for both Monte Carlo CV and standard 100-fold CV. The variance is always MUCH larger than the expected binomial variance that should be equal to 400*0.5*(1-0.5) = 100. 
Now either I am overlooking a completely stupid mistake (which is absolutely possible!), or we have a big problem with all the reasoning from Confidence interval for cross-validated classification accuracy, because the binomial intervals don't make any sense. For example, if I spoil my ideal class separation by relabeling 80 points from class A to B and vice versa, then my actual number of decoded samples becomes 240. The stability over iterations of CV is perfect. The binomial confidence interval binofit(240,400) is [0.55, 0.65] which excludes 0.5 so we would conclude that the decoding is significant. But the variance of shuffled correct decoders is still on average around 500-600, so the standard deviation is around, let's say, 22, so the 95% interval for the null hypothesis of random decoding is around 200$\pm$45, which includes 240, which means it is not significant. As far as I can see this problem has nothing to do with different CV folds not being independent, it's an entirely different problem that has to do with finite sample size. The larger the sample size, the smaller the variance of $B_j$ (now I am back to Monte Carlo cross-validation, where I can still classify 400 cases even if the sample size is much larger). But I have to go to sample sizes above 10000 to get variance close to 100. Such sample sizes are way beyond realistic. Update: In comments above @julieth quoted a paper by Jiang et al. Calculating Confidence Intervals for Prediction Error in Microarray Classification Using Resampling: "... the test [I think they mean "training" -- amoeba] set on which prediction of the $i$-th case has $n-2$ specimens in common with the training set on which prediction of the $j$-th is based, hence, the number of prediction errors is not binomial". In other words, they claim that the reason for non-binomiality is that training sets are not mutually exclusive. 
Turns out, Nadeau and Bengio have a mammoth 49-page long paper about it called Inference for the Generalization Error where they discuss exactly this issue in great detail. I did not believe it at first, so I used the simulation above to check this claim. If I increase the total number of samples to 4000, I can use the Monte Carlo CV procedure with 100 folds (each time classifying 4 test cases) to get 400 predictions. On the shuffled data (I increased the number of shuffles to 1000) the mean number of correct classification is 199 and the variance is 346: still a whole lot more than 100 even though the sample size is now as large as 4000. But now I can also do the following: split my 4000 samples in 100 stratified parts of 40, and in each part use 36 samples to predict 4. I will also get 400 predictions, but this time all training sets are mutually exclusive. The outcome (also after 1000 shuffles): mean 199, variance 98. Wow! Nadeau, Bengio, and @julieth seem to be right. And binomial assumption seems to be dead wrong. I wonder how many papers there are out there using binomial confidence intervals and tests...
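The simulation is compact enough to reproduce. The sketch below follows the setup described above (nearest-centroid classifier on ±1 data, Monte Carlo CV with 100 folds of 4 test cases each); 50 shuffles are used here instead of 100 to keep it quick, and the exact variance obtained will vary with implementation details.

```python
import random
from statistics import mean, variance

rng = random.Random(0)
xs = [-1.0] * 200 + [1.0] * 200        # class A at -1, class B at +1
true_labels = [0] * 200 + [1] * 200

def correct_count(labels):
    """Monte Carlo CV: 100 folds of 4 test cases (2 per class); # correct of 400."""
    correct = 0
    for _ in range(100):
        a_idx = [i for i, l in enumerate(labels) if l == 0]
        b_idx = [i for i, l in enumerate(labels) if l == 1]
        test = rng.sample(a_idx, 2) + rng.sample(b_idx, 2)
        cent = {}
        for lab in (0, 1):
            pts = [xs[i] for i in range(400) if labels[i] == lab and i not in test]
            cent[lab] = sum(pts) / len(pts)
        for i in test:
            pred = min(cent, key=lambda l: abs(xs[i] - cent[l]))
            correct += pred == labels[i]
    return correct

perfect = correct_count(true_labels)
print(perfect)                         # 400: 100% accuracy, as in the text

shuffled_counts = []
for _ in range(50):
    lab = true_labels[:]
    rng.shuffle(lab)
    shuffled_counts.append(correct_count(lab))
print(mean(shuffled_counts), variance(shuffled_counts))
# the mean is near the chance level of 200; in the runs described above the
# variance came out far larger than the binomial 400*0.5*(1-0.5) = 100
```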
42,485
How to find the Fisher Information of a function of the MLE of a Geometric (p) distribution?
By the formula for the MLE, I understand that you are dealing with the variant of the Geometric distribution where the random variables can take the value $0$. In this case we have $$E(X_1) = \frac {1-p}{p},\,\,\, \text {Var}(X_1) = \frac {1-p}{p^2}$$ The Fisher Information of a single observation can be derived by applying its definition : $$I_1(p) = \operatorname{E} \left[\left. \left(\frac{\partial}{\partial p} \ln f(X_1;p)\right)^2\right|p\right] = \operatorname{E} \left[\left. \left(\frac{\partial}{\partial p} \ln(1-p)^{X_1}p \right)^2\right|p \right]$$ $$=\operatorname{E} \left(-\frac {X_1}{1-p}+\frac 1p \right)^2 = \operatorname{E} \left(\frac {X_1^2}{(1-p)^2}+\frac 1{p^2}-2\frac {X_1}{(1-p)p}\right)$$ $$=\frac 1{p^2} - \frac {2}{(1-p)p} E(X_1)+ \frac {1}{(1-p)^2}\left(\text {Var}(X_1) + (E[X_1])^2\right)$$ $$=\frac 1{p^2}- \frac {2}{(1-p)p}\cdot \frac {1-p}{p} + \frac {1}{(1-p)^2}\left( \frac {1-p}{p^2} + \frac {(1-p)^2}{p^2}\right)$$ $$=\frac 1{p^2}- \frac {2}{p^2}+\frac {1}{(1-p)p^2}+\frac 1{p^2} = \frac {1}{(1-p)p^2}$$ We also have $$\frac {d\theta}{dp} = \frac {1}{(1-p)^2}$$ So $$I_1(\theta) = I_1(p)\cdot \left(\frac {d\theta}{dp} \right)^{-2} = \frac {1}{(1-p)p^2}\cdot (1-p)^4 = \frac {(1-p)^3}{p^2}$$
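The closed forms above can be sanity-checked numerically: $I_1(p) = E[(\partial_p \ln f(X_1;p))^2]$ is a sum over the support, and a finite truncation suffices since the geometric tail decays fast. The reparameterisation $\theta = p/(1-p)$ is inferred here from the stated derivative $d\theta/dp = 1/(1-p)^2$.

```python
def fisher_info_numeric(p, max_x=400):
    """Sum pmf(x) * score(x)^2 over x = 0..max_x for f(x; p) = (1-p)^x p."""
    total = 0.0
    for x in range(max_x + 1):
        pmf = (1 - p) ** x * p
        score = -x / (1 - p) + 1 / p       # d/dp log f(x; p)
        total += pmf * score ** 2
    return total

p = 0.4
print(fisher_info_numeric(p), 1 / ((1 - p) * p ** 2))    # agree: I_1(p)

# the reparameterisation step: I_1(theta) = I_1(p) / (d theta / d p)^2
dtheta_dp = 1 / (1 - p) ** 2
print(fisher_info_numeric(p) / dtheta_dp ** 2, (1 - p) ** 3 / p ** 2)
```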
42,486
When is the inverse of the Fisher Information exact? (MLE)
Sometimes, when doing MLE problems, I see that the variance expression gotten from the inverse of the Fisher Information is exactly like what it should be and sometimes it isn't. All depends on what 'exactly like it should be' means ;) At any rate, I think you need to take a look at the Cramer-Rao bound again. The inverse of the Fisher information only gives a lower bound on the variance of an unbiased estimator. An estimator which achieves this is called an efficient estimator, as it has the lowest possible variance while being unbiased. But there's lots of unbiased estimators which do not achieve this bound. There's lots of estimators which are biased, but still useful in practice. For example, the maximum likelihood estimator of variance is biased, but still good enough a lot of the time. These are probably what you're encountering. As for why it's sometimes difficult to find a closed form solution of estimates of variance, etc, I'll only say that it would be pretty astonishing if every probability distribution out there had nice, closed form expressions for the stuff we're interested in.
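The variance example is easy to check by simulation: the MLE $\hat\sigma^2 = \frac{1}{n}\sum_i (x_i - \bar x)^2$ has expectation $\frac{n-1}{n}\sigma^2$, so for $n = 5$ standard normal samples it averages about 0.8 rather than the true 1. A sketch (sample size and repetition count are arbitrary choices):

```python
import random

def mle_variance(xs):
    """Maximum likelihood estimate of variance: divide by n, not n - 1."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

rng = random.Random(0)
n, reps = 5, 50_000
estimates = [mle_variance([rng.gauss(0, 1) for _ in range(n)]) for _ in range(reps)]
avg = sum(estimates) / reps

print(avg)   # close to (n - 1) / n = 0.8, not the true sigma^2 = 1
```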
42,487
Is it possible for a bayesian model to "forget" about some points?
You can remove their contribution from the likelihood or log-likelihood by division or subtraction, exactly reversing their contribution when they were multiplied or added in in the first place. It's like saying "There's a long multiplication I've done: $3\times 4\times 2\times 5\times 6\times 6\times 2 \times x$. I have the answer, but I want to compute it as if the 4 weren't there. Is there a way to do that?" (Yes, just divide by 4. Now think of $x$ as your prior and the numbers as contributions of observations to the likelihood. You need to do the equivalent of 'divide by 4') This is so facile as to worry me that I'd missed something obvious; I couldn't see a way to make a viable answer out of it beyond 'divide by its contribution to the likelihood'. But here's something more than the obvious to make it worth an answer: In some cases, you might want to consider accumulated numerical error. If you multiply-and-divide or add-and-subtract floating point terms in and out a lot (say over a million observations, of which you keep only the last five hundred), eventually your computation might accumulate enough numerical 'fluff' to make it instead worth redoing periodically from scratch.
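With a conjugate model, the 'divide by its contribution' step reduces to subtracting sufficient statistics. A Beta-Bernoulli sketch (the prior and data here are invented for illustration): forgetting an observation means decrementing the corresponding count, and the result matches the posterior that never saw the point.

```python
def posterior(prior_a, prior_b, data):
    """Beta(a, b) prior on p, Bernoulli data -> Beta(a + #heads, b + #tails)."""
    heads = sum(data)
    return prior_a + heads, prior_b + len(data) - heads

def forget(a, b, x):
    """Reverse one Bernoulli observation x by subtracting its count."""
    return (a - 1, b) if x == 1 else (a, b - 1)

data = [1, 0, 1, 1, 0]
a, b = posterior(2, 2, data)      # posterior after all five points: Beta(5, 4)
a2, b2 = forget(a, b, data[0])    # 'un-observe' the first point

print((a2, b2), posterior(2, 2, data[1:]))   # identical: (4, 4) both
```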
42,488
Help in understanding how to apply correctly False Discovery Rate adjustment
Note that Benjamini-Hochberg controls the false discovery rate to be less than or equal to the specified level, which has been satisfied by all of your scenarios. In terms of the procedure you have followed, there are no faults there. However, it is worth pointing out that for real gene expression data you may get better results using a specific method such as Storey, John D., and Robert Tibshirani. "Statistical significance for genomewide studies." Proceedings of the National Academy of Sciences 100, no. 16 (2003): 9440-9445. This particular method exploits the fact that in such studies it is virtually impossible that the number of significant genes is zero.
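For reference, the Benjamini-Hochberg step-up rule itself is short: sort the $m$ p-values, find the largest $k$ with $p_{(k)} \le (k/m)\,\alpha$, and reject the hypotheses with the $k$ smallest p-values. A plain-Python sketch (the example p-values are made up):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return the set of indices rejected by the BH step-up procedure at level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])   # indices by ascending p-value
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank                                   # largest rank meeting the criterion
    return set(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(sorted(benjamini_hochberg(pvals, alpha=0.05)))   # [0, 1]
```

Note that 0.039 is not rejected even though it is below 0.05: its BH threshold at rank 3 is (3/10)·0.05 = 0.015.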
42,489
Comparing AIC and adjusted $R^2$
(I assume your homework has been turned in by now ;-). I'll answer this so that it doesn't stay officially unanswered.) @user12202013 is right: you don't compare an AIC to an $R^2_{\rm adj}$. You can compare the AICs from two different models, and you can compare the $R^2_{\rm adj}$s from two different models, as ways to help you think about which model fits better. However, I don't think that was the point of the exercise. What you need to recognize is that linear regression (OLS) is a special case (i.e., simplified version) of the generalized linear model. (The topic of GLiMs is a bit involved, but to learn more about it, it may help you to read my answer here: Difference between logit and probit models.) Moreover, the R function glm() fits a linear model assuming normally distributed errors by default. In other words, I think the assignment was to notice that you got the same model using two different function calls, as @Roland and @user777 hinted.
42,490
R neural network model with target vector as output containing survival predictions
Why not just use a binary indicator for the event as the target variable and the length of the time period as an explanatory variable (plus other covariates)? If the event happens, the target is 1 and the time period is calculated as the time until the event minus the start time. For observations where the target is 0, this period is the full 36 months; for observations where the target is 1 it can be much less. Can there be panel attrition, where an observation is removed from the data set before the whole surveillance period is over? That must be accounted for somehow. To get individual survival probabilities for various time intervals, you score a new data set with the fitted model object i times, where i is the number of different time periods, each time setting the time-period variable to the corresponding value. Then concatenate the i vectors of period-specific probabilities. The idea is that the measured time period, together with the other covariates, accounts for the survival probability conditional on the length of time in the observational study. EDIT: I looked at the R package neuralnet. You can have period-specific survival events in the target matrix in the following way. C1 is covariate 1, T1 is the vector of survival events in time period 1, etc. Your data frame / matrix could look like this:

ID T1 T2 T3 T4 T5 T6 C1  C2  C3  CN
1  1  1  1  1  1  1  X11 X12 X13 X1N
2  1  0  0  0  0  0  X21 X22 X23 X2N
..

Use the following code:

survexample = neuralnet(T1+T2+T3+T4+T5+T6 ~ C1+C2+...+CN, data=example,
                        hidden=n, err.fct="ce", linear.output=FALSE)

This example code does classification and forces the output vector values to be in the range [0, 1].
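The target rows in that layout can be generated programmatically; here is a small Python sketch (the helper name and the 6-period horizon are illustrative assumptions) that builds one row of the T1..T6 target matrix from the period in which the event occurred:

```python
def survival_targets(event_period, n_periods=6):
    """Build one row of the target matrix: 1 for each period the
    subject survived, 0 from the period of the event onward.
    event_period is 1-based, or None if no event was observed."""
    if event_period is None:
        return [1] * n_periods
    return [1 if t < event_period else 0
            for t in range(1, n_periods + 1)]

print(survival_targets(None))   # survived all periods → [1, 1, 1, 1, 1, 1]
print(survival_targets(2))      # event in period 2   → [1, 0, 0, 0, 0, 0]
```

These two rows match ID 1 and ID 2 in the example data frame above.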
Is it possible to do a test of significance for a string occurrence in two datasets
Note: This answer is now deprecated in light of the new information added by the OP. The answer is of related interest in the context, hence not deleted. You can do a z-test of equality of proportions if the word appears in both of the dictionaries. There are two steps to this process -- a combination of Python and statistics: Efficiently create the dictionary of words that are common, computing their relative counts in each of the samples. Compute a two-sample test of proportions, again efficiently, for the entire common dictionary.

Efficient creation of the common dictionary

An efficient way to compute the proportions (note that all code is Python 3.3) would be to use dictionary comprehensions:

import math as math
import scipy.stats as sps

dictA = {'word1': 1, 'word2': 4, 'word7': 99, 'word13': 17}
dictB = {'word71': 1, 'word3': 4, 'word2': 99, 'word7': 17, 'word9': 45}

# compute the sums of the frequencies of occurrence of all the words
# NOTE this is expensive, but is done only once
sumValuesA = sum(dictA.values())
sumValuesB = sum(dictB.values())

dictAB = {key: (value, dictB.get(key))
          for key, value in dictA.items() if key in dictB}
print(dictAB)

Now you have a dictionary that contains the counts of the words common to both dictionaries. You can form the test of proportions of your choice using dictAB.

Comparing proportions across samples

It is possible to test, based on the proportion of successes in given numbers of trials, whether the probabilities of success are statistically equal across two given samples. Just to be clear, in what follows, the samples are the two documents which contain the words, the trials are the total number of words in either document, and the successes are the total number of occurrences of a particular word in either document. That is, $H_0: p_1 = p_2$, where $p_1$ is the probability of success in population 1 and $p_2$ is the probability of success in population 2. The test statistic is $$ Z = \dfrac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\tfrac{1}{N_1}+ \tfrac{1}{N_2}\right)}} $$ where $\hat{p}_j = \tfrac{X_j}{N_j}, \, j = 1, 2$, $X_j$ is the number of successes and $N_j$ the number of trials in the $j$-th population, and $\hat{p} = \tfrac{X_1 + X_2}{N_1 + N_2}$. Under the null hypothesis, this statistic is standard normal distributed. Here is the Python code to do this:

#================================================
# compute the two-sample difference of proportions
#================================================
def fnDiffProp(x1, x2, n1, n2):
    '''
    inputs:
        x1: the number of successes in the first sample
        x2: the number of successes in the second sample
        n1: the total number of 'trials' in the first sample
        n2: the total number of 'trials' in the second sample
    output:
        the test statistic and the p-value as a tuple
    '''
    hatP = (x1 + x2)/(n1 + n2)
    hatQ = 1 - hatP
    hatP1 = x1/n1
    hatP2 = x2/n2
    Z = (hatP1 - hatP2)/math.sqrt(hatP*hatQ*(1/n1 + 1/n2))
    pVal = 2*(1 - sps.norm.cdf(abs(Z)))
    return (Z, pVal)

# apply the function above to each of the common words across the
# two samples
dictPropTest = {key: fnDiffProp(value[0], value[1], sumValuesA, sumValuesB)
                for key, value in dictAB.items()}

To interpret: for example, the differences in proportion for both 'word2' and 'word7' are highly significant across the two documents.
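As a self-contained numeric check of the formula, here are the numbers for 'word7' from the example dictionaries (99 of the 121 word tokens in document A, 17 of the 166 in document B), written in plain Python with only the standard library:

```python
from math import sqrt
from statistics import NormalDist

# 'word7' counts and total word counts from the two example dictionaries
x1, n1 = 99, 121
x2, n2 = 17, 166

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)          # pooled success probability
z = (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
p_val = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value

print(round(z, 2), p_val)               # large z, vanishingly small p-value
```

A z-statistic around 12 leaves no doubt that 'word7' occurs at very different rates in the two documents.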
Is it possible to do a test of significance for a string occurrence in two datasets
You want to know when the difference between counts (a Gap) of the same word in different groups is unexpectedly large. It sounds like you are assuming that the Gap between different groups should generally be small. You also only have 2 groups of counts - you only give 2 datasets - which would make testing for significance difficult. You could try a Bayesian approach, provided that you are willing to make some assumptions. Bayes' theorem relates the data you have to ideas about the distributions from which your data came. Specifically, the theorem, when applied to the multiple hypotheses of your problem, looks like: $P(H_1|Gap) = \frac{P(Gap|H_1)P(H_1)}{P(Gap|H_1)P(H_1) + P(Gap|H_2)P(H_2)}$ for the first hypothesis and $P(H_2|Gap) = \frac{P(Gap|H_2)P(H_2)}{P(Gap|H_1)P(H_1) + P(Gap|H_2)P(H_2)}$ for the second. The issue here is the definition of $H_1$ and $H_2$: what are they and how do you specify them? Once you know what they are, what about the conditional distributions? I took a stab at doing this in Python (code/results below), but first, how I answered these questions. I assumed that you have no idea whether the Gap between 2 counts of a word should be large or small, so I assumed that for each word it could go either way with 50/50 chance and set $P(H_1) = 0.5$ and $P(H_2) = 0.5$. Next, what about the conditionals? Well, you want to know about the probability of a Gap given some expected Gap size. As the Gaps will be integers, it made sense to model the distribution of Gaps as a Poisson distribution, with each hypothesis $H$ getting its own $\lambda$ parameter. That is, under each $H$, $Gap \sim \text{Pois}(\lambda)$ with a different $\lambda$. These assumptions are made with no real data, and that is the drawback of this approach.

That being said, here is what an implementation could look like:

import math

def pois(l, k):
    return math.exp(-l) * (l**k) / math.factorial(k)

def bayes_test(h1, h2, gap):
    lp1 = pois(h1, gap) * 0.5
    lp2 = pois(h2, gap) * 0.5
    pb = lp1 + lp2
    print(''.join(["P( H1 | gap ): ", str(lp1/pb),
                   "\nP( H2 | gap ): ", str(lp2/pb)]))
    return [lp1/pb, lp2/pb]

And here is what possible output would look like:

>>> bayes_test(7, 30, 10)
P( H1 | gap ): 0.999785530317
P( H2 | gap ): 0.000214469683207
[0.9997855303167933, 0.00021446968320674848]

Using the Poisson distribution function I defined, we set $\lambda_1 = 7$ and $\lambda_2 = 30$, meaning that our hypotheses are that the expected Gap size is either 7 for $H_1$ or 30 for $H_2$, and we test with an observed Gap size of 10. We see then that it is much more likely that our data is explained by an expected Gap size of 7 than of 30. So, the takeaway is that if you can rework your question to be asking about several hypotheses, you can use a Bayesian approach to ask which hypothesis is more probable.
Trying to understand unbiased estimator
I'm not 100% sure about this either, as the post you link to mentions a lot of subtleties that I haven't considered or studied much yet...but here's an attempt to answer nevertheless. As you've stated it, $h(x)$ is your estimator, because you're using it to approximate $f$. "The process of choosing the linear regression model" is the process of choosing the estimator function, and using it to estimate $\hat\beta$ is just that: using the estimator to produce an estimate. That is, no, these are not components of the estimator itself; these are the process from which the estimator originates, and the process it serves, respectively. The limited way of testing estimator bias with which I am amateurishly familiar is simulation testing. In such a special circumstance as this, we may actually know $\hat\beta_{true}$ (probably should in many such cases). Furthermore, by simulating data according to predefined parameters, we can test how different values affect the errors of our estimators. As I understand it, these systematic errors are the primary concerns in considerations of estimator bias. For example, one usually wants an estimator for which the accuracy depends minimally on sample size, or at least won't lose accuracy as sample sizes increase. I think this is sometimes a problem in significance testing, in that some tests will reject the null too often when the null is actually true and the sample size is very large (e.g., the Shapiro-Wilk test). Another example of estimator bias (I think...another place where I might be mistaken) might be your typical parametric test when used in conditions that violate its assumptions. Non-normal distributions can bias parametric tests that assume normally distributed data, whereas nonparametric tests are often relatively unbiased estimators. Sometimes biasing is more complex and even interactive. 
For example, I recently read that substituting polychoric correlations for Pearson's $r$ correlations in a matrix on which confirmatory factor analysis is to be performed can inflate (bias) standard errors of parameter estimates and $\chi^2$ goodness-of-fit when using maximum likelihood estimation (Babakus, 1985). The choice of estimator really starts to get hairy in latent factor modeling... In any case, problems like these are often discovered by simulation testing, wherein the true parameters are designated and altered systematically, random data are generated based on these settings, and estimates are found to deviate from the true values to different degrees depending on the parameters of the simulated distributions. The extent of that dependence on the distributional parameters is the estimator's sensitivity to those parameters; if the sensitivity is non-negligible, the estimator is biased when the parameters to which it is sensitive enter certain ranges. These are often not the parameters it is used to estimate! OLS multiple regression is sensitive to multicollinearity in regressors, for another example, whereas ridge regression can correct for bias somewhat when regressors are strongly related (collinear).
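A minimal sketch of the simulation-testing idea described above, in plain Python: fix a "true" slope, generate many datasets from it, and check whether the OLS slope estimator is centred on the truth. All settings (true slope, noise level, sample size, number of simulations) are illustrative:

```python
import random

random.seed(1)
beta_true = 2.0
n, n_sims = 30, 2000

def ols_slope(xs, ys):
    """OLS slope estimate for a simple regression of ys on xs."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

estimates = []
for _ in range(n_sims):
    # simulate one dataset from the true model y = beta_true * x + noise
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [beta_true * x + random.gauss(0, 1) for x in xs]
    estimates.append(ols_slope(xs, ys))

mean_est = sum(estimates) / n_sims
print(round(mean_est, 2))   # close to 2.0: no evidence of bias
```

Systematically varying the simulation settings (sample size, error distribution, collinearity) and watching how the average estimate drifts away from the true value is exactly the sensitivity analysis described above.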
Trying to understand unbiased estimator
On looking at those lecture notes, the sample size is only two - so you can hardly do better without knowing anything about f than simply averaging the two values. The notions of "bias" and "variance" can be defined only relative to some kind of model structure. This is clearly encapsulated by the $E_x[.]$ and $E_D[.]$ operators in the lecture notes. Note, though, that they are a tad clumsy in that $E_x[E_D[.]]$ should really be written as $E_x[E_{D|x}[.]]$, as D and x are related by the model. These basically describe "how did you choose the X values" ($E_x$) and "given the choice of X values, how did you choose the Y values" ($E_{D|x}$). Generally speaking, the latter encapsulates the model assumptions, with the former generally assumed known. Now, it seems like you are talking about a problem with "no noise" - if you know that $y=f(x)$ exactly, then any estimator that doesn't interpolate the observed data is necessarily wrong. The only source of "randomness" is which particular "X" values and "Y" values are observed. This has a similar flavour to design-based inference for sample surveys. The notion of bias in this context depends on the "sample space" for the X values, the "sampling distribution" for the X values, and the function $f(x)$. I use quotes as it is entirely reasonable to consider degenerate cases where the X values are not "random" but fixed at prespecified values (as is the case for prediction of a new Y value not used to fit the model). Now basically you can't get any further with the bias of an estimated function $h$ unless you impose some conditions on what the function $f$ might look like. In fact, without the presence of any noise this is "merely" a transformation of a random variable. If you propose/assume a "sampling distribution" for the $X$ values, call this cdf $G_X(x)=\Pr(X\leq x)$, then the corresponding distribution for the response is $G_Y(y)=\Pr(Y\leq y)=\Pr(f(X)\leq y)=\int I\{f(x)\leq y\}\, dG_X(x)$.
For 1-to-1 continuous, differentiable functions you can simplify this further to state that the pdf for $y$ must have the form $$g_Y(y)=g_X(f^{-1}(y))\left|\frac{\partial f^{-1}(y)}{\partial y}\right|$$ where $f^{-1}(.)$ is the inverse transformation of $f(.)$. So a linear function $f(x)=a+bx$ (inverse function $f^{-1}(y)=b^{-1}(y-a)$) combined with a uniform $[-1, 1]$ pdf for $X$ gives $$g_Y(y)=\frac{I[-1 \leq b^{-1}(y-a) \leq 1]}{2|b|}$$ That is, $Y$ is uniform $[a-b, a+b]$ if $b>0$ and uniform $[a+b, a-b]$ otherwise. Now we can show that the mle for $a, b$ is given by the same OLS "saturated" fit, namely $\hat{b}=\frac{y_1-y_2}{x_1-x_2}$ and $\hat{a}=\frac{y_2x_1-y_1x_2}{x_1-x_2}$. In fact these must be the exact values for $a$ and $b$ provided the linear function is correct - regardless of the sampling distribution for X. Another way of saying this is that there can be only one noiseless linear relationship between two or more X-Y pairs. This also leads to an extremely aggressive predictive distribution, degenerate at $\hat{y}=\hat{a}+\hat{b}x$ with zero margin for error (after observing x). The aggressive prediction comes from the "no noise" assumption. As a final remark, the observed data provides no information on what that relationship is for "X-Y" pairs that are not fully observed. This can only come from other pieces of information - such as assumptions about the smoothness and continuity of $f(.)$. This makes calculating bias impossible in a general sense, because your answer will depend on some arbitrary unknown function. You have to assume something about what it could be in order to calculate the bias of an estimator for $f(.)$ (e.g. $f(.)$ has a third-order derivative, no singularities, is analytic, etc.). But these choices cannot be disentangled from the choices resulting from standard model checking (e.g. add a quadratic term if a plot of the residuals shows curvature).
This clouds the practical use of "bias" in a rigorous and completely general fashion, as the observed data set is analysed to decide the model structure. Different data sets get analysed in different ways, adding a "human element" to a bias calculation (and variance too) that is difficult both to automate (making Monte Carlo infeasible) and to write down a formula for. Having said that, the notion of bias is still useful as part of a check of model assumptions - but is generally better thought of in terms of complexity and stability of the model, IMO. Bias is also useful as a conceptual tool to aid understanding of general model-fitting issues and the tension between explaining the observed data and predicting unobserved data.
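The "saturated" two-point fit mentioned above is easy to verify numerically: with no noise, any two distinct (x, y) pairs generated from $y = a + bx$ recover $a$ and $b$ exactly (the particular values below are arbitrary):

```python
# true noiseless linear relationship y = a + b*x
a, b = 1.5, -2.0
x1, x2 = 0.3, 0.9
y1, y2 = a + b * x1, a + b * x2

# the saturated two-point fit from the answer above
b_hat = (y1 - y2) / (x1 - x2)
a_hat = (y2 * x1 - y1 * x2) / (x1 - x2)
print(a_hat, b_hat)   # recovers a and b exactly (up to floating point)
```

This is the algebraic point being made: with no noise, the estimates are not approximations at all, so any "predictive distribution" collapses to a point.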
Trying to understand unbiased estimator
On looking at those lecture notes, the sample size is only two - so you can hardly do better without knowing anything about f than simply averaging the two values. The notions of "bias" and "varianc
Trying to understand unbiased estimator On looking at those lecture notes, the sample size is only two - so you can hardly do better without knowing anything about f than simply averaging the two values. The notions of "bias" and "variance" can be defined only relative to some kind of model structure. This is clearly encapsulated by the $ E_x[.] $ and $ E_D[.] $ operators in the lecture notes. Note though, that they are a tad clumsy in that $ E_x [E_D [.]] $ should really be written as $ E_x [E_{D|x}]] $ as D and x are related by the model. These basically describe "how did you choose the C values" ($ E_x $) and "given the choice of X values how did you choose the Y values" ($ E_{D|x} $). Generally speaking, the latter encspsulates the model assumptions with the former generally assumed known. Now, it seems like you are talking about a problem with "no noise" - now if you know that $ y=f (x) $ exactly - then any estimator that doesn't interpolate the observed data is necessarily wrong. The only source of "randomness" is which particular "X" values and "Y" values are observed. This has a similar flavour to design based inference for sample surveys. The notion of bias in this context depends on the "sample space" for the X values, the "sampling distribution" for the X values, and the function $ f(x) $. I use quotes as it is entirely reasonable to consider degenerate cases where the X values are not "random" but fixed at prespecified values (as is the case for prediction of a new Y value not used to fit the model). Now basically you can't get any further with the bias of an estimated function $ h $ unless you impose some conditions on what the function $ f$ might look like. In fact, without the prescence of any noise this is "merely" a transformation of a random variable. 
If you propose/assume a "sampling distribution" for the $X$ values, call this cdf $G_X(x)=\Pr(X\leq x)$, then the corresponding distribution for the response is $G_Y(y)=\Pr(Y\leq y)=\Pr(f(X)\leq y)=\int I\{f(x)\leq y\}\,dG_X(x)$. For one-to-one, continuous, differentiable functions you can simplify this further: the pdf for $y$ must have the form $$g_Y(y)=g_X(f^{-1}(y))\left|\frac{\partial f^{-1}(y)}{\partial y}\right|$$ where $f^{-1}(.)$ is the inverse transformation of $f(.)$. So a linear function $f(x)=a+bx$ (with inverse $f^{-1}(y)=b^{-1}(y-a)$) combined with a uniform $[-1,1]$ pdf for $X$ gives $$g_Y(y)=\frac{I[-1\leq b^{-1}(y-a)\leq 1]}{2|b|}$$ That is, $Y$ is uniform on $[a-b,a+b]$ if $b>0$ and uniform on $[a+b,a-b]$ otherwise. Now we can show that the MLE for $a,b$ is given by the same OLS "saturated" fit, namely $\hat{b}=\frac{y_1-y_2}{x_1-x_2}$ and $\hat{a}=\frac{y_2x_1-y_1x_2}{x_1-x_2}$. In fact, these must be the exact values of $a$ and $b$ provided the linear function is correct - regardless of the sampling distribution for $X$. Another way of saying this is that there can be only one noiseless linear relationship through two or more X-Y pairs. This also leads to an extremely aggressive predictive distribution, degenerate at $\hat{y}=\hat{a}+\hat{b}x$ with zero margin for error (after observing $x$). The aggressive prediction comes from the "no noise" assumption.

As a final remark, the observed data provide no information on what the relationship is for X-Y pairs that are not fully observed. That can only come from other pieces of information - such as assumptions about the smoothness and continuity of $f(.)$. This makes calculating bias impossible in a general sense, because your answer will depend on some arbitrary unknown function. You have to assume something about what it could be in order to calculate the bias of an estimator of $f(.)$ (e.g. $f(.)$ has a third-order derivative, no singularities, is analytic, etc.). But these choices cannot be disentangled from the choices resulting from standard model checking (e.g. add a quadratic term if a plot of the residuals shows curvature). This clouds the practical use of "bias" in a rigorous and completely general fashion, as the observed data set is analysed to decide the model structure. Different data sets get analysed in different ways, adding a "human element" to a bias calculation (and variance too) that is difficult both to automate (making Monte Carlo infeasible) and to write down a formula for. Having said that, the notion of bias is still useful as part of a check of model assumptions - but it is generally better thought of in terms of complexity and stability of the model, IMO. Bias is also useful as a conceptual tool to aid understanding of general model-fitting issues and the tension between explaining the observed data and predicting unobserved data.
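The noiseless two-point fit above is easy to check numerically. The sketch below (my own illustration, with hypothetical parameter values) confirms that, with $y=a+bx$ exactly, the "saturated" OLS formulas recover $a$ and $b$ from any two distinct X values:

```python
# Noiseless linear relation y = a + b*x: two distinct X values pin down
# the line exactly via the "saturated" OLS fit quoted in the answer.
a_true, b_true = 2.0, -3.0              # hypothetical true parameters

x1, x2 = -0.4, 0.7                      # any two distinct sampled X values
y1 = a_true + b_true * x1               # responses with no noise
y2 = a_true + b_true * x2

b_hat = (y1 - y2) / (x1 - x2)
a_hat = (y2 * x1 - y1 * x2) / (x1 - x2)

print(a_hat, b_hat)                     # recovers (2.0, -3.0) up to rounding
```

Because there is no noise, the "estimates" are the exact parameter values, whatever the sampling distribution of the X pair - which is the point made in the answer.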
42,495
Model selection criterion produces non-normal residuals
We use the Akaike or Schwarz information criteria to compare a set of "candidate models". "Candidate" means that you have already fitted your regression models and done the model-adequacy checking, including the normality assumption on your residuals. You are just not sure about the balance between the number of parameters used to fit the model and the closeness of the fit. Even though you may not see a direct normality assumption in the definitions of the AIC or BIC criteria, (as far as I can say) you need to check the normality assumption before comparing your models. Of course, you can still apply these criteria even for non-normal errors, but how meaningful your final model would then be is under a big question mark. You can also have a look at this question.
42,496
Model selection criterion produces non-normal residuals
AIC and BIC both consist of two elements: the likelihood and the penalty for the number of parameters. The likelihood need not be a normal likelihood; it can be whatever you find reasonable. However, if you assume the likelihood to be normal (and use it in AIC or BIC), you need normal residuals too. In other words, the distribution of the residuals has to match the distributional assumption used to calculate the likelihood for AIC or BIC.
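To make the two elements concrete, here is a minimal sketch (my own illustration, with hypothetical residuals) of AIC computed with a Gaussian likelihood, $\mathrm{AIC}=2k-2\log L$. The Gaussian log-likelihood is only the right ingredient when the residuals are plausibly normal, as the answer notes:

```python
import math

def gaussian_aic(residuals, k_params):
    """AIC = 2k - 2*logL, with logL the maximised Gaussian log-likelihood
    of the residuals (error variance at its MLE, sum(e^2)/n). Only
    meaningful if the residuals are plausibly normal - the distributional
    assumption in the likelihood must match the residuals."""
    n = len(residuals)
    sigma2_hat = sum(e * e for e in residuals) / n          # MLE of error variance
    loglik = -0.5 * n * (math.log(2 * math.pi * sigma2_hat) + 1)
    return 2 * k_params - 2 * loglik

# hypothetical residuals from two fitted candidate models with identical fit
aic_small = gaussian_aic([0.5, -0.3, 0.2, -0.4, 0.1], k_params=2)
aic_big   = gaussian_aic([0.5, -0.3, 0.2, -0.4, 0.1], k_params=4)
print(aic_small, aic_big)   # same likelihood, more parameters -> AIC larger by 4
```

With identical residuals, only the penalty term differs, so the two AIC values differ by exactly $2\,\Delta k = 4$.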
42,497
What can I do with these two time series?
For forecasting low-frequency variables with high-frequency ones you can use MIDAS regression. The idea behind this regression is quite simple: average the high-frequency variable and then use it as a regressor. The key is to use custom weights. Suppose we have $Y_t$, which is sampled monthly, and $X_\tau$, which is sampled daily. Then the MIDAS regression is defined as follows: $$Y_t=\sum_{h=0}^k\beta_hX_{tm-h}+\varepsilon_t$$ where we assume that for observation $t=s$ we have $m$ observations $\tau=sm-m+1,...,sm$. We also assume that $\beta_h=g(h,\theta)$, for some function $g$ and hyperparameter $\theta$. So if you want to test whether $X_\tau$ is a good predictor for $Y_t$, fit a MIDAS regression with different weight functions and inspect the results. If you enter "MIDAS regression" into Google you'll find many articles. Forecasting monthly CPI with a daily variable was investigated in the article "Forecasting with mixed frequencies" by Armesto, Engemann and Owyang. The MIDAS regression idea was introduced by Eric Ghysels; you can look into his articles. There are two software packages for fitting MIDAS regression: the MIDAS Matlab toolbox and the midasr R package. They both have user guides, where you can find more detailed examples and links to other literature. Note, this is only one possible way of solving your problem. Others surely exist too, but as I am the developer of the midasr R package, I am biased in my suggestions.
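The structure of the MIDAS equation above can be sketched in a few lines. This is my own minimal illustration, not the midasr or Matlab-toolbox API: it assumes a hypothetical $m=20$ daily observations per month, exponential Almon weights for $g(h,\theta)$, and simulated data where the monthly response is driven by the weighted daily average:

```python
import numpy as np

rng = np.random.default_rng(0)
m, T = 20, 60                        # hypothetical: 20 daily obs per month, 60 months
x = rng.standard_normal(m * T)       # the daily regressor X_tau

def exp_almon(theta1, theta2, k):
    """Exponential Almon lag weights g(h, theta), normalised to sum to 1."""
    h = np.arange(k)
    w = np.exp(theta1 * h + theta2 * h ** 2)
    return w / w.sum()

# for each month t, the weighted average of the k most recent daily lags
# (reversed so that weight h=0 attaches to the most recent observation)
k = m
w = exp_almon(0.1, -0.05, k)
X_agg = np.array([w @ x[t * m - k: t * m][::-1] for t in range(1, T + 1)])

# simulate a monthly response driven by the aggregate, then recover beta by OLS
beta_true = 1.5
y = beta_true * X_agg + 0.1 * rng.standard_normal(T)
beta_hat = np.linalg.lstsq(X_agg[:, None], y, rcond=None)[0][0]
print(beta_hat)                      # close to beta_true = 1.5
```

In a real application the hyperparameters $\theta$ of the weight function are estimated jointly with $\beta$ (typically by nonlinear least squares), which is exactly what the midasr package automates.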
42,498
What can I do with these two time series?
For starters, I would check whether the draws from list A lie more than two standard deviations away from the draws from list B. Secondly, I would check whether a 1-month moving average of list A is significantly different from the more slowly sampled list B.
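Both checks are easy to sketch. The following is my own illustration with made-up numbers (the data, the window length, and the two-sigma threshold are all hypothetical):

```python
import statistics

# hypothetical data: frequent draws (list A) and infrequent draws (list B)
list_a = [9.8, 10.1, 10.4, 9.9, 10.0, 10.2, 14.0, 10.1, 9.7, 10.3]
list_b = [10.0, 10.4, 9.6, 10.3, 9.7]

mu_b = statistics.mean(list_b)
sd_b = statistics.stdev(list_b)

# check 1: A-draws lying more than two B-standard-deviations from B's mean
outliers = [a for a in list_a if abs(a - mu_b) > 2 * sd_b]

# check 2: a trailing moving average of A (window ~ one "month" of A-draws)
window = 5
ma_a = [statistics.mean(list_a[i - window: i]) for i in range(window, len(list_a) + 1)]

print(outliers, ma_a[-1])   # only the 14.0 draw is flagged here
```

Comparing `ma_a` against the corresponding entries of list B (matched by date) would then be the second check; a formal version could use a two-sample test on the matched pairs.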
42,499
What can I do with these two time series?
Fit seasonal time series models to both and then compare the seasonality. The TBATS model can handle daily data. It is available in the forecast package in R.
42,500
Estimate single ARIMA for multiple timeseries
That can happen when the model is not suitable for the data. stepwise=FALSE makes auto.arima work harder to find the best model. So of course, sometimes it finds a different model than when stepwise=TRUE. It is impossible to say with the information provided. You should be aware that comparing AIC values with different values of $d$ is inappropriate. The AIC can only be used to compare models with the same $d$.
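The caveat about $d$ can be made concrete without fitting anything: differencing changes the data set the likelihood is evaluated on (and shortens it by one observation per difference), so the resulting AIC values are not on a common scale. A minimal sketch, using a hypothetical simulated random walk:

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(100))   # a hypothetical random-walk series

d0 = y                      # the data a d=0 model's likelihood is computed on
d1 = np.diff(y)             # the data a d=1 model's likelihood is computed on

# The two "likelihood data sets" differ in both content and length, so the
# resulting AIC values are not comparable across different d.
print(len(d0), len(d1))     # 100 vs 99
```

This is why auto.arima first selects $d$ by unit-root tests and only then uses the AIC to choose $p$ and $q$ for that fixed $d$.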