17,001
Do all observations arise from probability distributions?
Probability is not fundamentally about the nature of the world (which may, or may not, be deterministic) but about what you know about it. Consider this example. You are sitting with your friends, Alice and Bob. I have a standard deck of cards, and shuffle them well. What is the probability that the top card is the Ace of Spades? Clearly $\frac{1}{52}$. I show the top card to Alice, but not to you or Bob. If I ask Alice what the probability is that the top card is the Ace of Spades, she will surely answer either $1$ or $0$, but not $\frac{1}{52}$. But if I ask Bob, he will still have to say $\frac{1}{52}$. The point is to demonstrate that probability is not fundamentally about reality, but about your knowledge of reality. The cards have not changed their order.

Consider tossing a fair coin. What is the probability that it will land heads? $\frac{1}{2}$? But in fact tossing a coin is a deterministic process, at least by the standards of modern physics. Scientists have built machines which can predict a coin toss from video of the first milliseconds of motion. Some magicians have trained themselves to toss coins so accurately that they can get either heads or tails at will. I suppose they do this by finely calibrating the force with which the coin is flipped, and the moment at which they catch it, so that they know exactly how many times it has turned over. But if I, not having trained to do this, were to toss a coin, I couldn't predict the outcome. When I flip a coin it might sometimes go twice or three times as high, or have twice or three times as much angular momentum, as at other times. At best I might say it had turned over between 3 and 15 times. So it is clear that for me, even if I took notice of which way up it was to begin with, my probability will still be close to $\frac{1}{2}$. Again, the point is not that the process is not deterministic - clearly some scientists and magicians can do it - but that I don't know the parameters of the deterministic function to a high degree of accuracy. My initial ignorance, or imprecision in knowledge, integrated over time, expands out to cover the entire space of possible outcomes, such that I have no idea which way the coin will end up.

Back to your question: "My question is on the use of the word observation. Does the above quote imply that any data we collect or observe in nature/physics/experiments arise from probability distributions? How about deterministic processes, which surely are not probabilistic?"

Go back to the coin toss. The coin toss is in modern terms an entirely deterministic process - if we start with the initial conditions and integrate over time we will get the answer. What makes it "random" is that we do not know the initial conditions with enough accuracy to predict whether it will be heads or tails. We can take our estimates of the initial conditions and their error bars, and run millions of deterministic Monte Carlo simulations using slightly different initial conditions. Each individual simulation will produce a definite answer, but the answers will differ, and the proportion of heads will be about $\frac{1}{2}$.

So another way of thinking about it is to say that, supposing for a moment that the universe is deterministic, the "probability distribution" is the weighted distribution of the time integrals of all possible pasts. That is, every possible past - those which we don't know to be false - integrated deterministically through time to the present. (Kolmogorov will be turning in his grave, no doubt.)

So in this view, an observation is a ground truth, and we can integrate backwards through time to eliminate possible pasts which would not give rise to that observation. If you have just been dealt the Ace of Hearts, that means I wasn't dealt it earlier.

In summary, I would not say that an observation arises from a probability distribution. There is a reality out there which gives rise to observations. Observations give us information about that reality (specifically about the past), and we can combine observations to create a model of the past which allows us to make predictions about the future. That model of the past is the probability distribution referred to.
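To make the Monte Carlo idea concrete, here is a minimal R sketch (the flip model and all the numbers are made up for illustration): the flip itself is perfectly deterministic - the outcome is just the parity of the number of half-turns - but the initial spin rate and flight time are only known up to error bars.

    # Deterministic flip: the outcome is the parity of the number of
    # half-turns the coin completes before being caught.
    set.seed(1)
    n <- 1e5
    spin <- rnorm(n, mean = 10,  sd = 3)    # revolutions per second, uncertain
    air  <- rnorm(n, mean = 0.5, sd = 0.15) # seconds in the air, uncertain
    half_turns <- floor(2 * spin * air)     # each simulation is fully deterministic
    mean(half_turns %% 2 == 0)              # proportion of "heads": close to 0.5

Even though every single run is deterministic, my imprecise knowledge of the initial conditions spreads the outcomes over both faces in roughly equal proportion.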
17,002
Do all observations arise from probability distributions?
Below is the quote from Karl Pearson

You are not giving an exact quote from Karl Pearson, so it is difficult to respond directly to his thinking. His outlook was positivist/idealistic, and his ideas ran along the lines of physical laws being relative to human observation. It is not that observations arise from probability distributions, as if the underlying laws that govern them needed to be probabilistic. Rather, it is more like the other way around: the laws of nature stem from observations (and these observations happen to be of a probabilistic nature due to variations that occur in experiments). Observations do not arise from probability distributions; they follow probability distributions.

Finding out whether the underlying 'reality' is deterministic or not is not the goal of a positivist science, because such a science can only use the observations. These observations happen to behave randomly, but whether their nature is random or deterministic is outside the grasp of science. So science should focus on describing the parameters of these distributions, and not on the meaning or cause behind them (which would be metaphysics). The difference from Newtonian notions, or from the biologists contemporary with Pearson, is that science should be about data (observations) and not about unverifiable theories and notions about reality. Science is about measurements, and that makes science a practice that deals with statistics (the field concerned with the description and analysis of data).

This question is actually more about philosophy of science than about statistics.
17,003
Which OLS assumptions are colliders violating?
I will assume models without intercepts to keep the notation short. Say the structural causal model is
\begin{aligned} Y&=\beta_1X+u, \\ Z&=\gamma_1X+\gamma_2Y+v, \\ X&=w \end{aligned}
with $u,v,w$ being mutually independent zero-mean exogenous structural errors, so that $Z$ is a collider: $X\rightarrow Z\leftarrow Y$.

Let us specify a linear regression as
$$ Y=\alpha_1X+\alpha_2Z+\varepsilon $$
and get ready to estimate it with OLS. We would wish for $\hat\alpha_1^{OLS}\rightarrow\beta_1$ as $n\rightarrow\infty$. This would be the case if the following two conditions held simultaneously: $\alpha_1=\beta_1$, and the relevant OLS assumptions were satisfied. However, this is not the case. Suppose $\alpha_1=\beta_1$. Then from the structural causal model and the specified regression we get
\begin{aligned} \varepsilon&=-\alpha_2Z+u \\ &=-\alpha_2(\gamma_1X+\gamma_2Y+v)+u. \end{aligned}
Thus $\varepsilon$ is a linear function of $X$: substituting $Y=\beta_1X+u$ gives $\varepsilon=-\alpha_2(\gamma_1+\gamma_2\beta_1)X+(1-\alpha_2\gamma_2)u-\alpha_2v$, so $\mathbb{E}(\varepsilon|X)=-\alpha_2(\gamma_1+\gamma_2\beta_1)X\neq 0$ in general. This violates the assumption $\mathbb{E}(\varepsilon|X)=0$. This assumption is what Wooldridge calls Assumption MLR.4 (Zero Conditional Mean) in "Introductory Econometrics: A Modern Approach". Note that it is specific to the desired causal interpretation of the regression parameters; noncausal interpretations (such as regression as a model of the conditional expectation function of $Y|X,Z$) do not require it. Since it is violated, the two conditions above cannot hold simultaneously. Therefore, $\beta_1$ cannot be the target to which the OLS estimator of $\alpha_1$ converges.
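A quick simulation makes the inconsistency visible (a sketch; the coefficient values $\beta_1=2$, $\gamma_1=\gamma_2=1$ are arbitrary choices for illustration, not from the answer):

    # Simulate the SCM above with beta1 = 2, gamma1 = gamma2 = 1 (arbitrary).
    set.seed(42)
    n <- 1e5
    w <- rnorm(n); u <- rnorm(n); v <- rnorm(n)
    X <- w
    Y <- 2 * X + u        # structural equation for Y
    Z <- X + Y + v        # Z is a collider of X and Y
    coef(lm(Y ~ X - 1))      # close to the structural beta1 = 2
    coef(lm(Y ~ X + Z - 1))  # coefficient on X is far from 2

With these values the population coefficient on $X$ in the second regression works out to $0.5$, nowhere near $\beta_1=2$, consistent with the argument above.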
17,004
Which OLS assumptions are colliders violating?
It is very easy to demonstrate that all the assumptions of OLS can be satisfied and yet collider bias persists. Here, I generate data in which $z$ is a collider for the effect of $x$ on $y$.

    library(tidyverse)

    r = rerun(1000, {
      w = rnorm(100)
      u = rnorm(100)
      z = 3*u - w + rnorm(100, 0, 0.5)
      x = 2*w + rnorm(100, 0, 0.3)
      y = 5*x - u + rnorm(100, 0, 0.75)
      mod1 = lm(y ~ x + w)   # adjusts for w, keeping the back door closed
      mod2 = lm(y ~ x + z)   # adjusts for the collider z
      tibble(`No Collider` = coef(mod1)['x'],
             `Collider`    = coef(mod2)['x'])
    }) %>%
      bind_rows()

Note all the assumptions of linear regression are satisfied: i) the observations are iid, ii) the functional form is correct, iii) homogeneity of variance holds, and iv) the likelihood is normal (though this is not as important, hence its place last...). Plotting 1000 replications of this experiment, we find that model 1 (which correctly blocks the effect of confounders, "closing the back door") provides an unbiased estimate of the effect of $x$ on $y$. However, model 2 (which conditions on the collider) has a systematic bias, resulting in an estimated effect of $x$ on $y$ which is smaller than the truth.

EDIT:

1) "...we can prove that the estimate for $\beta$ must be unbiased, that is, $E(\hat{\beta}) = \beta$."

The coefficients of the model are unbiased estimates, sure, but the question becomes: unbiased estimates of what? Whatever they are, they are not unbiased estimates of the causal effect of $x$ on $y$.

2) "I do not think observations are iid is an OLS assumption."

You are correct. The assumptions I've listed here would be the assumptions of a Gaussian GLM, which are stricter than OLS.

"Also, did you mean homogeneity (if so, what does it mean?)"

I did mean homogeneity, but I meant homogeneity of variance, not of errors. I've fixed that. Homogeneity of variance is a simpler way of saying (or spelling) homoskedasticity.

3) "Can controlling for a collider 'fool' us? Is it wrong to control for a collider? If so, why? Let's start from there."

Yes, it can. This example demonstrates this. The real effect of changing x one unit is 5. The first model, controlling for x and w (thereby blocking all back doors from y to x), shows an unbiased estimate of 5. The model controlling for the collider produces an estimate of x's effect on y which is systematically lower than 5. The "why" of colliders is still a bit of a mystery to me. In the readings I've done, authors just say "the flow of information is blocked by a collider, but conditioning on the collider opens the back door", or something in that spirit. If you find a satisfactory explanation for why the collider bias happens, let me know.

4) "I don't think your model is well specified. In the population y is a function of x and u. Yet you are only controlling for x."

What if u is some expensive measurement, or one we forgot to collect? We can't collect data on everything that affects the outcome. That being said, you're right to be suspicious of this. There are more formal ways of checking that the model you've written down is consistent with the data, which involve checking conditional independence. You can find ways to test these implications here under "Testable Implications". A sketch of that idea follows below.
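The "Testable Implications" idea in point 4 can be sketched with the dagitty R package (a hedged sketch, not the answer's own code; the DAG below is my encoding of the simulation's data-generating process):

    library(dagitty)

    # DAG for the simulation: w -> x -> y, w -> z, u -> z, u -> y
    g <- dagitty("dag { w -> x ; x -> y ; w -> z ; u -> z ; u -> y }")

    # Conditional independencies the model implies (testable against data)
    impliedConditionalIndependencies(g)

    # Which covariate sets identify the effect of x on y
    adjustmentSets(g, exposure = "x", outcome = "y")

If the data violate the implied conditional independencies, the assumed DAG is inconsistent with the data.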
17,005
Which OLS assumptions are colliders violating?
The problem here is that "collider" is a causal concept, while OLS regression does not necessarily deal with causality. About "regression and causality", read here: Under which assumptions a regression can be interpreted causally? If we view OLS regression as an estimator of the linear CEF, colliders and other causal problems do not matter. Read here: Regression and the CEF. Moreover, unfortunately, several books are ambiguous if not erroneous about the meaning of regression, especially about its possible causal use (read here: How would econometricians answer the objections and recommendations raised by Chen and Pearl (2013)?)

EDIT: following the discussion with Richard Hardy, I add here the same example revised from my perspective. The structural causal model (SCM) is
\begin{aligned} Y&=\beta_1X+u_Y, \\ Z&=\beta_2X+\beta_3Y+u_Z, \\ X&=u_X \end{aligned}
so that $Z$ is a collider: $X\rightarrow Z\leftarrow Y$. The structural errors can be considered the exogenous variables in the system, and we assume they are zero-mean and independent of each other. Note that one implication of this is: $E[u_Y|X]=0$, $E[u_Z|X,Y]=0$. Note that, in general, the SCM encodes (explicitly) all the causal assumptions made by the researcher.

Now, we are interested in the causal effect of $X$ on $Y$, so we look for the regression equation that permits us to identify $\beta_1$; note that this is the direct causal effect of interest, and in this particular case it is also the total effect (by assumption). The reply is quite simple, because in the regression
$Y=\theta_1X+r_1$
$\theta_1$ identifies $\beta_1$. Now, in general, the regression above is not what we need for identification of the causal effect of interest ($\theta_1$ need not identify it); we would have to add some control variables. The original question is (more or less): why is controlling for a collider not a good idea? In our example we can try to add the collider as a control and compute the regression
$Y=\theta_2X+\theta_3Z+r_2$
but $\theta_2$ does NOT identify $\beta_1$. This is because the admissible control sets have to comply with the backdoor criterion; $\{Z\}$ is not among them, while the empty set is. So, including $Z$ (the collider) is a bad idea. Worse, this regression does not identify any causal effect implied by the SCM. Indeed, not all regressions can help in causal inference. For other examples in the same fashion you can see: Infer one link of a causal structure, from observations; Endogenous controls in linear regression - Alternative approach?

That said, I don't know if this example is what the asker is looking for. The problem is deeper: do the so-called "OLS assumptions" play some role above? This can be a matter of debate. I have written a lot about that on this site: see the links above, and the links therein. However, my short answer is: NO, because the "OLS assumptions", wherever they are presented, do not include any clear causal assumptions.
17,006
P-values equal to 0 in permutation test
Discussion

A permutation test generates all relevant permutations of a dataset, computes a designated test statistic for each such permutation, and assesses the actual test statistic in the context of the resulting permutation distribution of the statistics. A common way to assess it is to report the proportion of statistics which are (in some sense) "as or more extreme" than the actual statistic. This is often called a "p-value." Because the actual dataset is one of those permutations, its statistic will necessarily be among those found within the permutation distribution. Therefore, the p-value can never be zero.

Unless the dataset is very small (less than about 20-30 total numbers, typically) or the test statistic has a particularly nice mathematical form, it is not practicable to generate all the permutations. (An example where all permutations are generated appears at Permutation Test in R.) Therefore, computer implementations of permutation tests typically sample from the permutation distribution. They do so by generating some independent random permutations and hoping that the results are a representative sample of all the permutations. Therefore, any numbers (such as a "p-value") derived from such a sample are only estimators of the properties of the permutation distribution. It is quite possible--and often happens when effects are large--that the estimated p-value is zero. There is nothing wrong with that, but it immediately raises the heretofore neglected issue: how much could the estimated p-value differ from the correct one? Because the sampling distribution of a proportion (such as an estimated p-value) is Binomial, this uncertainty can be addressed with a Binomial confidence interval.

Architecture

A well-constructed implementation will follow the discussion closely in all respects. It would begin with a routine to compute the test statistic, such as this one to compare the means of two groups:

    diff.means <- function(control, treatment) mean(treatment) - mean(control)

Write another routine to generate a random permutation of the dataset and apply the test statistic. The interface to this one allows the caller to supply the test statistic as an argument. It will compare the first m elements of an array (presumed to be a reference group) to the remaining elements (the "treatment" group):

    f <- function(..., sample, m, statistic) {
      s <- sample(sample)   # base::sample permutes the pooled data
      statistic(s[1:m], s[-(1:m)])
    }

The permutation test is carried out first by finding the statistic for the actual data (assumed here to be stored in two arrays control and treatment) and then finding statistics for many independent random permutations thereof:

    z <- diff.means(control, treatment)   # test statistic for the observed data
    sim <- sapply(1:1e4, f, sample = c(control, treatment),
                  m = length(control), statistic = diff.means)

Now compute the Binomial estimate of the p-value and a confidence interval for it. One method uses the binconf procedure in the Hmisc package:

    require(Hmisc)                 # exports `binconf`
    k <- sum(abs(sim) >= abs(z))   # two-tailed test
    zapsmall(binconf(k, length(sim), method = 'exact'))  # 95% CI by default

It's not a bad idea to compare the result to another test, even if that is known not to be quite applicable: at least you might get an order-of-magnitude sense of where the result ought to lie. In this example (of comparing means), a Student t-test usually gives a good result anyway:

    t.test(treatment, control)

This architecture is illustrated in a more complex situation, with working R code, at Test Whether Variables Follow the Same Distribution.

Example

As a test, I generated $10$ normally distributed "control" values from a distribution with mean $0$ and $20$ normally distributed "treatment" values from a distribution with mean $1.5$:

    set.seed(17)
    control <- rnorm(10)
    treatment <- rnorm(20, 1.5)

After using the preceding code to run a permutation test, I plotted the sample of the permutation distribution along with a vertical red line to mark the actual statistic:

    h <- hist(c(z, sim), plot = FALSE)
    hist(sim, breaks = h$breaks)
    abline(v = z, col = "Red")   # the observed statistic computed earlier

The Binomial confidence limit calculation resulted in

    PointEst Lower        Upper
           0     0 0.0003688199

In other words, the estimated p-value was exactly zero, with a (default 95%) confidence interval from $0$ to $0.00037$. The Student t-test reports a p-value of 3.16e-05, which is consistent with this. This supports our more nuanced understanding: an estimated p-value of zero in this case corresponds to a very small p-value that we can legitimately take to be less than $0.00037$. That information, although uncertain, usually suffices to make a definite conclusion about the hypothesis test (because $0.00037$ is far below common thresholds of $0.05$, $0.01$, or $0.001$).

Comments

When $k$ out of $N$ values in the sample of the permutation distribution are considered "extreme," then both $k/N$ and $(k+1)/(N+1)$ are reasonable estimates of the true p-value. (Other estimates are reasonable, too.) Normally there is little reason to prefer one to the other. If they lead to different decisions, that means $N$ is too small. Take a larger sample of the permutation distribution instead of fudging the way in which the p-value is estimated.

If greater precision in the estimate is needed, just run the permutation test longer. Because confidence interval widths typically scale inversely with the square root of the sample size, to improve the confidence interval by a factor of $10$ I ran $10^2=100$ times as many permutations. This time the estimated p-value was $0.000005$ (five of the permutation results were at least as far from zero as the actual statistic), with a confidence interval from $1.6$ through $11.7$ parts per million: a little smaller than the Student t-test reported. Although the data were generated with normal random number generators, which would justify using the Student t-test, the permutation test results differ from the Student t-test results because the distributions within each group of observations are not perfectly normal.
17,007
P-values equal to 0 in permutation test
Since estimated p-values are used to decide whether to reject the null hypothesis, it is important to consider how the choice of estimator affects the probability of a false rejection. The cited paper by Smyth & Phipson points out that the unbiased estimator $\frac{B}{M}$ fails to control the type-I error rate correctly. In contrast, $\frac{B+1}{M+1}$ is a valid (but conservative) p-value estimator - it does not lead to excess rejection of the null. (Here $B$ is the number of random permutations in which a statistic greater than or equal to the observed one is obtained, and $M$ is the total number of random permutations sampled.)

Smyth & Phipson also demonstrate that the invalidity of $\frac{B}{M}$ becomes critical in multiple-comparisons settings, where very small p-value estimates are derived and then corrected by multiplication with a factor. An estimated zero p-value under the null is especially disastrous in these settings, since it stays zero regardless of the corrections applied.
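A one-line illustration of the two estimators in R (a sketch; B and M below are placeholder counts standing in for the results of an actual permutation run):

    M <- 10000   # number of random permutations sampled
    B <- 0       # permutations with a statistic >= the observed one
    B / M               # unbiased estimator: can be exactly zero
    (B + 1) / (M + 1)   # valid, conservative estimator: never zero

Even when no sampled permutation reaches the observed statistic, the second estimator reports 1/(M+1) rather than zero, so a multiplicity correction still has something to multiply.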
17,008
Dropping outliers based on "2.5 times the RMSE"
The reason for dropping this data is stated right there in the quote: namely, to "eliminate outliers and implausible income reports". The fact that they refer to both of these things in conjunction means that they are conceding that at least some of their outliers are not implausible values, and in any case, they give no argument for why values with a high residual should be considered "implausible" income values. By doing this, they are effectively removing data points because the residuals are higher than what is expected in their regression model. As I have stated in another answer here, this is tantamount to requiring reality to conform to your model assumptions, and ignoring parts of reality that are non-compliant with those assumptions.

Whether or not this is a common practice, it is a terrible practice. It occurs because the outlying data points are hard to deal with, and the analyst is unwilling to model them properly (e.g., by using a model that allows higher kurtosis in the error terms), so they just remove the parts of reality that don't conform to their ability to undertake statistical modelling. This practice is statistically undesirable, and it leads to inferences that systematically underestimate variance and kurtosis in the error terms. The authors of this paper report that they dropped 3.22% of their data due to the removal of these outliers (p. 16490). Since most of these data points would have been very high incomes, this casts substantial doubt on their ability to make robust conclusions about the effect of high incomes (which is the goal of their paper).
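To see the claimed underestimation of variance concretely, here is a minimal R sketch (not from the paper; it mimics a "drop residuals beyond 2.5 x RMSE" rule on heavy-tailed errors):

    # Heavy-tailed errors (t with 3 df), true residual SD = sqrt(3) ~ 1.73
    set.seed(7)
    x <- rnorm(5000)
    y <- 1 + 2 * x + rt(5000, df = 3)

    fit  <- lm(y ~ x)
    rmse <- sqrt(mean(residuals(fit)^2))
    keep <- abs(residuals(fit)) <= 2.5 * rmse   # the trimming rule
    fit2 <- lm(y[keep] ~ x[keep])

    sigma(fit)    # near the true error SD
    sigma(fit2)   # systematically smaller: the error variance is understated

The refit on trimmed data reports a residual standard error well below the truth, which is exactly the kind of overconfident inference described above.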
17,009
What is the mathematical relationship between R2 and MSE? [duplicate]
Yes, allow me to elaborate. Recall that for some outcome $y_i \in \mathbb{R}$, $\forall i=1,2,\ldots,n$, we define MSE and $\textrm{R}^2$ as
\begin{equation} \textrm{MSE}(y, \hat{y} ) = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y_i})^{2} \end{equation}
\begin{equation} \textrm{R}^2(y, \hat{y} ) = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y_i})^{2}}{ \sum_{i=1}^{n} (y_i - \bar{y})^{2} } \end{equation}

So, as you noted, $\textrm{R}^2$ is a normalized version of MSE. We use MSE for reporting because I think it's a simple metric, and it is technically the loss function we are minimizing when we solve the normal equations. $\textrm{R}^2$ is useful because it is often easier to interpret, since it doesn't depend on the scale of the data. As a concrete example, consider two models: one predicting income and the other predicting age. $\textrm{R}^2$ will make it easier to state which model is performing better.*

*In general, this isn't a great idea, and you shouldn't compare metrics like $\textrm{R}^2$ across different models to make these sorts of claims, because some things are just fundamentally harder to predict than others (e.g., stock markets vs. who survived the Titanic).
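Numerically, the two definitions above reduce to $\textrm{R}^2 = 1 - \textrm{MSE}(y,\hat y)/\textrm{MSE}(y,\bar y)$, i.e. the model's MSE normalized by the MSE of always predicting the mean. A quick R check (made-up data):

    set.seed(1)
    x <- rnorm(100)
    y <- 3 * x + rnorm(100)
    fit  <- lm(y ~ x)
    yhat <- fitted(fit)

    mse      <- mean((y - yhat)^2)      # MSE of the model
    mse_null <- mean((y - mean(y))^2)   # MSE of predicting the mean
    1 - mse / mse_null                  # equals R^2
    summary(fit)$r.squared              # matches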
17,010
Why is the softmax used to represent a probability distribution?
From an optimization perspective it has some nice properties in terms of differentiability. For a lot of machine learning problems it's a good fit for 1-of-N classification. From a deep learning perspective, one could also argue that, in theory, using a deep network with a softmax classifier on top can represent any N-class probability function over the feature space, since MLPs have the universal approximation property.
17,011
Why is the softmax used to represent a probability distribution?
See section 4.2 of Bishop's Pattern Recognition and Machine Learning on probabilistic generative models. He shows on page 197 that in the two-class case, given classes $ \mathcal{C}_1, \mathcal{C}_2 $, the posterior $ p(\mathcal{C}_1 \mid \mathbf{x}) $ for class $ \mathcal{C}_1 $ (i.e. conditional on input example $ \mathbf{x} \in \mathbb{R}^d $) is such that (by Bayes' rule)
$$ p(\mathcal{C}_1 \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid \mathcal{C}_1)p(\mathcal{C}_1)}{p(\mathbf{x} \mid \mathcal{C}_1)p(\mathcal{C}_1) + p(\mathbf{x} \mid \mathcal{C}_2)p(\mathcal{C}_2)} = \frac{1}{1 + e^{-a(\mathbf{x})}} = \sigma \circ a(\mathbf{x}) $$
Here $ \sigma : \mathbb{R} \rightarrow (0, 1) $ is the sigmoid function and $ a : \mathbb{R}^d \rightarrow \mathbb{R} $ is such that
$$ a(\mathbf{x}) \triangleq \log\frac{p(\mathbf{x} \mid\mathcal{C}_1)p(\mathcal{C}_1)}{p(\mathbf{x} \mid \mathcal{C}_2)p(\mathcal{C}_2)} = \log\frac{p(\mathcal{C}_1, \mathbf{x})}{p(\mathcal{C}_2, \mathbf{x})} $$

In the multiclass case, i.e. with classes $ \mathcal{C}_1, \ldots, \mathcal{C}_K $, we naturally have for $ k \in \{1, \ldots, K\} $
$$ p(\mathcal{C}_k \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid \mathcal{C}_k)p(\mathcal{C}_k)}{\sum_{j = 1}^K p(\mathbf{x} \mid \mathcal{C}_j)p(\mathcal{C}_j)} = \frac{e^{a_k(\mathbf{x})}}{\sum_{j = 1}^K e^{a_j(\mathbf{x})}} $$
Here for $ k \in \{1, \ldots, K\} $, $ a_k : \mathbb{R}^d \rightarrow \mathbb{R} $ is such that
$$ a_k(\mathbf{x}) \triangleq \log\left(p(\mathbf{x} \mid \mathcal{C}_k)p(\mathcal{C}_k)\right) = \log p(\mathcal{C}_k, \mathbf{x}) $$

The $ a, a_k $ functions can be given parametric forms--for example, in multiclass logistic regression, $ a_k(\mathbf{x}) \triangleq \mathbf{w}_k^\top\mathbf{x} + b_k $ for $ \mathbf{w}_k \in \mathbb{R}^d, b_k \in \mathbb{R} $. In fact, page 203 states that for class-conditional distributions, i.e. $ X \mid \mathcal{C}_k $, that are members of the exponential family of distributions, the $ a_k $ functions are affine functions of $ \mathbf{x} $. An example is linear discriminant analysis, which assumes Gaussian class-conditional distributions with a shared covariance matrix; equation (4.68) on page 199 shows that the $ a_k $ function is then affine.

The softmax function itself, probabilistic interpretations aside, is a smooth, differentiable approximation to the max function, which of course the other answers have mentioned is helpful when using gradient-based methods to minimize an objective function. For example, the binary (multiclass) logistic regression objective is convex and differentiable, with the differentiability partly because of its inclusion of the sigmoid (softmax) function.
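As a quick sanity check of the two-class reduction above, here is a hedged R sketch (the activations $a_1, a_2$ are arbitrary numbers standing in for log joint probabilities): for $K=2$, the first coordinate of the softmax of $(a_1, a_2)$ equals the sigmoid of $a = a_1 - a_2$.

    softmax <- function(a) exp(a) / sum(exp(a))
    sigmoid <- function(a) 1 / (1 + exp(-a))

    a1 <- 0.7; a2 <- -1.3   # arbitrary activations
    softmax(c(a1, a2))[1]   # posterior for class 1
    sigmoid(a1 - a2)        # identical value

This is just the algebraic identity $e^{a_1}/(e^{a_1}+e^{a_2}) = 1/(1+e^{-(a_1-a_2)})$ from the derivation above.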
17,012
Why is the softmax used to represent a probability distribution?
The softmax function has a number of desirable properties for optimisation and other mathematical methods dealing with probability vectors. Its most important property is that it gives a mapping that allows you to represent any probability vector as a point in unconstrained Euclidean space, and it does this in a way that has some nice smoothness properties and other properties that are useful in various types of problems.

Given any scale parameter $\lambda>0$ the probability vector $\mathbf{p} = (p_0,p_1,...,p_n)$ can be mapped to or from a point $\boldsymbol{\eta} \in \mathbb{R}^n$ using the softmax function and inverse-softmax function:
$$\begin{align} \text{soft}(\boldsymbol{\eta}) &= \Bigg( \frac{1}{1 + \sum_{i=1}^n \exp(\lambda \eta_i)}, \frac{\exp(\lambda \eta_1)}{1 + \sum_{i=1}^n \exp(\lambda \eta_i)}, ..., \frac{\exp(\lambda \eta_n)}{1 + \sum_{i=1}^n \exp(\lambda \eta_i)} \Bigg), \\[12pt] \text{invsoft}(\mathbf{p}) &= \Bigg( \frac{\log(p_1) - \log(p_0)}{\lambda}, ..., \frac{\log(p_n) - \log(p_0)}{\lambda} \Bigg). \\[12pt] \end{align}$$
This mapping covers all probability vectors with non-zero elements, which also gets you arbitrarily close to any probability vector with one or more zero elements. The mapping has several useful properties:

- The domain of the softmax function is unconstrained Euclidean space, which makes it useful in optimisation problems that have a probability vector as an input. Specifically, if you have an objective function $H$ that maps an input probability vector to a real number, you can form the function composition $G = H \circ \text{soft} : \mathbb{R}^n \rightarrow \mathbb{R}$ to convert the problem to an unconstrained optimisation.
- The softmax function is an analytic function (i.e., it is infinitely differentiable and has a convergent Taylor series), which means it is nice and smooth and can be represented closely by a polynomial in a neighbourhood of any point.
- The first two derivatives of the softmax function and inverse-softmax function have simple forms and can be computed explicitly. This is also useful for optimisation problems and other mathematical problems involving probability vectors.

Of course, it is possible to form other mappings from unconstrained Euclidean space to the space of probability vectors, and other forms might also be similarly useful for some purposes. The softmax function is a form that has simple derivatives, and so it is useful in a range of optimisation problems and other mathematical methods involving probability vectors.
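As a quick sketch (my own code, directly transcribing the definitions above, with $ p_0 $ as the reference element), both mappings are a few lines of numpy, and the round trip recovers $ \boldsymbol{\eta} $ exactly:

    import numpy as np

    def soft(eta, lam=1.0):
        # maps eta in R^n to the probability (n+1)-vector (p_0, p_1, ..., p_n)
        e = np.exp(lam * eta)
        denom = 1.0 + e.sum()
        return np.concatenate(([1.0 / denom], e / denom))

    def invsoft(p, lam=1.0):
        # inverse map, valid for probability vectors with non-zero elements
        return (np.log(p[1:]) - np.log(p[0])) / lam

    eta = np.array([0.5, -1.2, 2.0])
    p = soft(eta)
    print(p.sum())      # 1.0
    print(invsoft(p))   # recovers eta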
Why is the softmax used to represent a probability distribution?
Softmax is also a generalization of the logistic sigmoid function and therefore it carries the properties of the sigmoid, such as ease of differentiation and being in the range 0-1. The output of a logistic sigmoid function is also between 0 and 1 and is therefore naturally a suitable choice for representing a probability. Its derivative is also expressed in terms of its own output. However, if your function has a vector output, you need to use the softmax function to get the probability distribution over the output vector. There are some other advantages of using softmax, which Indie AI has mentioned, although it does not necessarily have anything to do with the Universal Approximation theorem, since softmax is not a function used only for neural networks.

References
- Logistic function
- Softmax function
- Ease of Differentiation on Softmax
- Ease of Differentiation of Sigmoid
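The "derivative expressed in terms of its own output" property is easy to verify numerically. A small sketch (my own illustration) checking $\sigma'(x) = \sigma(x)(1-\sigma(x))$ against a central finite difference:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    x = np.linspace(-3, 3, 7)
    s = sigmoid(x)

    # derivative written entirely through the output itself
    analytic = s * (1 - s)

    # central finite-difference check
    h = 1e-6
    numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
    print(np.allclose(analytic, numeric))  # True

The same pattern holds for softmax, whose Jacobian can likewise be written entirely in terms of its own outputs.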
How to use auto.arima to impute missing values
First, be aware that forecast computes out-of-sample predictions, but you are interested in in-sample observations.

The Kalman filter handles missing values. Thus you can take the state space form of the ARIMA model from the output returned by forecast::auto.arima or stats::arima and pass it to KalmanRun.

Edit (fix in the code based on answer by stats0007)

In a previous version I took the column of the filtered states related to the observed series; however, I should use the entire matrix and do the corresponding matrix operation of the observation equation, $y_t = Z \alpha_t$. (Thanks to @stats0007 for the comments.) Below I update the code and plot accordingly. I use a ts object as a sample series instead of zoo, but it should be the same:

    require(forecast)
    # sample series
    x0 <- x <- log(AirPassengers)
    y <- x
    # set some missing values
    x[c(10,60:71,100,130)] <- NA
    # fit model
    fit <- auto.arima(x)
    # Kalman filter
    kr <- KalmanRun(x, fit$model)
    # impute missing values, Z %*% alpha at each missing observation
    id.na <- which(is.na(x))
    for (i in id.na)
      y[i] <- fit$model$Z %*% kr$states[i,]
    # alternative to the explicit loop above
    sapply(id.na, FUN = function(x, Z, alpha) Z %*% alpha[x,],
      Z = fit$model$Z, alpha = kr$states)
    y[id.na]
    # [1] 4.767653 5.348100 5.364654 5.397167 5.523751 5.478211 5.482107 5.593442
    # [9] 5.666549 5.701984 5.569021 5.463723 5.339286 5.855145 6.005067

You can plot the result (for the whole series and for the entire year with missing observations in the middle of the sample):

    par(mfrow = c(2, 1), mar = c(2.2,2.2,2,2))
    plot(x0, col = "gray")
    lines(x)
    points(time(x0)[id.na], x0[id.na], col = "blue", pch = 19)
    points(time(y)[id.na], y[id.na], col = "red", pch = 17)
    legend("topleft", legend = c("true values", "imputed values"),
      col = c("blue", "red"), pch = c(19, 17))
    plot(time(x0)[60:71], x0[60:71], type = "b", col = "blue", pch = 19,
      ylim = range(x0[60:71]))
    points(time(y)[60:71], y[60:71], col = "red", pch = 17)
    lines(time(y)[60:71], y[60:71], col = "red")
    legend("topleft", legend = c("true values", "imputed values"),
      col = c("blue", "red"), pch = c(19, 17), lty = c(1, 1))

You can repeat the same example using the Kalman smoother instead of the Kalman filter. All you need to change are these lines (note that the smoothed states must also be mapped through the observation equation):

    kr <- KalmanSmooth(x, fit$model)
    y[i] <- fit$model$Z %*% kr$smooth[i,]

Dealing with missing observations by means of the Kalman filter is sometimes interpreted as extrapolation of the series; when the Kalman smoother is used, missing observations are said to be filled in by interpolation in the observed series.
How to use auto.arima to impute missing values
Here would be my solution:

    # Take AirPassengers as example
    data <- AirPassengers
    # Set missing values
    data[c(44,45,88,90,111,122,129,130,135,136)] <- NA
    missindx <- is.na(data)

    arimaModel <- auto.arima(data)
    model <- arimaModel$model

    # Kalman smoothing
    kal <- KalmanSmooth(data, model)
    erg <- kal$smooth

    # weight each state column by the corresponding entry of Z
    for (i in 1:length(model$Z)) {
      erg[,i] = erg[,i] * model$Z[i]
    }
    karima <- rowSums(erg)

    for (i in 1:length(data)) {
      if (is.na(data[i])) {
        data[i] <- karima[i]
      }
    }
    # Original time series with imputed values
    print(data)

@Javlacalle: Thanks for your post, very interesting! I have two questions about your solution; I hope you can help me:

Why do you use KalmanRun instead of KalmanSmooth? I read that KalmanRun is considered extrapolation, while smoothing would be estimation.

I also do not get your id part. Why don't you use all components in model$Z? I mean, for example, model$Z gives 1, 0, 0, 0, 0, 1, -1 -> 7 values. This would mean $smooth (in your case, for KalmanRun, $states) gives me 7 columns. As I understand it, all columns for which Z is 1 or -1 go into the model. Let's say row number 5 is missing in AirPassengers. Then I would take the sum of row 5 like this: I would add the value from column 1 (because Z gave 1), I wouldn't add columns 2-5 (because Z says 0), I would add column 6, and I would add the negative value of column 7 (because Z says -1). Is my solution wrong? Or are they both OK? Can you perhaps explain further?
Expected rolls to roll every number on a dice an odd number of times
You can think about your problem as a Markov chain, i.e., a set of states with certain transition probabilities between states. You start in one state (all cards face up) and end up in an absorbing state (all cards face down). Your question is about the expected number of steps until you reach that absorbing state, either for a single chain, or for the expected minimum number of steps across $n$ independent Markov chains running simultaneously. And there are actually two slightly different ways to look at this.

The first one, as whuber comments, is to consider the six cards as six different bits $\{0,1\}$ and consider the state as a six-vector in $\{0,1\}^6$, i.e., the six-dimensional discrete hypercube. We start out at the vertex $(0,0,0,0,0,0)$, and the absorbing state is $(1,1,1,1,1,1)$. A step can take us to an adjacent vertex, in which exactly one bit is flipped with respect to the original state. That is, transitions take us from one vertex to any neighboring one with Hamming distance exactly one, and each such neighbor has an equal probability of being the next state. There is some literature on random walks and Markov chains on discrete cubes with Hamming distances, but nothing I could locate at short notice. We have a very nice thread on Random walk on the edges of a cube, which might be interesting.

The second way to look at this is to use the fact that all cards are interchangeable (assuming a fair die). Then we can use just seven different states, corresponding to the number of cards face down. We start in the state $i=0$, and the absorbing state is $i=6$. The transition probabilities depend on the state we are in:

- From $i=0$ (all cards face up), we will flip one card down and end up with one card face down with certainty: we have the transition probability $p_{01}=1$ (and $p_{0j}=0$ for $j\neq 1$).
- From $i=1$, we can reach $j=0$ with probability $p_{10}=\frac{1}{6}$ and $j=2$ with probability $p_{12}=\frac{5}{6}$.

Overall, we get the following transition matrix:
$$ T=\begin{pmatrix} 0 & \frac{6}{6} & 0 & 0 & 0 & 0 & 0 \\ \frac{1}{6} & 0 & \frac{5}{6} & 0 & 0 & 0 & 0 \\ 0 & \frac{2}{6} & 0 & \frac{4}{6} & 0 & 0 & 0 \\ 0 & 0 & \frac{3}{6} & 0 & \frac{3}{6} & 0 & 0 \\ 0 & 0 & 0 & \frac{4}{6} & 0 & \frac{2}{6} & 0 \\ 0 & 0 & 0 & 0 & \frac{5}{6} & 0 & \frac{1}{6} \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} $$
We start with certainty in the state $i=0$. We can encode the probabilities for each state at a certain point with a vector $v\in[0,1]^7$, and our starting state corresponds to $v_0=(1,0,0,0,0,0,0)$.

Here is a fundamental fact about Markov chains (which is easy to see and to prove via induction): the probabilities for the state after $k$ transitions are given by $v_k=(T')^kv_0$. (That is $T$ transposed. You can also work with row vectors $v$, then you don't need to transpose, but "$v_0T^k$" takes a little getting used to.)

Thus, the probability that we have ended up in the absorbing state $i=6$ after $k$ steps is precisely the last entry in that vector, or $v_k[6]=((T')^kv_0)[6]$. Of course, we could already have been in the absorbing state after $k-1$ steps. So the probability that our Markov chain ends up in the absorbing state for the first time after $k$ steps is
$$ p_k := ((T')^kv_0)[6]-((T')^{k-1}v_0)[6]. $$
We can numerically calculate $p_k$ for a large enough number of $k\leq K$ such that $\sum_{k=0}^Kp_k\approx 1$, and there may even be a closed form solution. Then, given $p_k$, we can calculate the expectation as
$$ \sum_{k=0}^\infty kp_k \approx \sum_{k=0}^K kp_k. $$
Next, assume we have $n$ players, and we want to know after how many steps the game will end, i.e., when the first player has all their cards face down. We can easily calculate the probability $q_k^n$ that at least one player has all cards face down after $k$ or fewer steps by noting that
$$ \begin{align*} q_k^n &= P(\text{at least one player has all cards face down after $k$ or fewer steps}) \\ &= 1-P(\text{all $n$ players need at least $k+1$ steps}) \\ &= 1-P(\text{ONE player needs at least $k+1$ steps})^n \\ &= 1-\bigg(\sum_{j=k+1}^\infty p_j\bigg)^n \\ &= 1-\bigg(1-\sum_{j=0}^k p_j\bigg)^n. \end{align*} $$
From this, we can derive the probability $p^n_k$ that a game of $n$ players ends after exactly $k$ steps:
$$ p^n_k = q_k^n-q_{k-1}^n = \bigg(1-\sum_{j=0}^{k-1} p_j\bigg)^n-\bigg(1-\sum_{j=0}^k p_j\bigg)^n. $$
And from this, in turn, we can again calculate the expected length of a game with $n$ players:
$$ \sum_{k=0}^\infty kp^n_k \approx \sum_{k=0}^K kp^n_k. $$
As I wrote above, there may be a closed form solution for the $p_k$, but for now, we can numerically evaluate them using R. I'm using $K=10,000$, so that $\sum_{k=0}^K p_k=1$ up to machine accuracy. (Note that the columns correspond to the states $0, \ldots, 6$ in that order, matching $v_0$.)

    max_steps <- 10000
    state_probabilities <- matrix(NA, nrow=max_steps+1, ncol=7,
      dimnames=list(0:max_steps, 0:6))
    state_probabilities[1,] <- c(1,0,0,0,0,0,0)
    transition_matrix <- rbind(
      c(0,6,0,0,0,0,0),
      c(1,0,5,0,0,0,0),
      c(0,2,0,4,0,0,0),
      c(0,0,3,0,3,0,0),
      c(0,0,0,4,0,2,0),
      c(0,0,0,0,5,0,1),
      c(0,0,0,0,0,0,6))/6
    for ( kk in 1:max_steps ) {
      state_probabilities[kk+1,] <- t(transition_matrix) %*% state_probabilities[kk,]
    }
    probs <- diff(state_probabilities[,7])
    sum(probs)                     # yields 1
    sum(probs*seq_along(probs))    # yields 83.2
    plot(probs[1:400], type="h", xlab="Number of steps", ylab="Probability", las=1)

Next, this is how we get the probabilities $p^4_k$ for $n=4$ players:

    n_players <- 4
    probs_minimum <- sapply(1:max_steps, function(kk)
      (1-sum(probs[1:(kk-1)]))^n_players - (1-sum(probs[1:kk]))^n_players)
    head(probs_minimum)
    plot(probs_minimum[1:400], type="h", xlab="Number of steps", ylab="Probability",
      las=1, main=paste(n_players,"players"))

Of course, four persons finish more quickly than a single person. For $n=4$, we get an expected value of

    sum(probs_minimum*seq_along(probs_minimum))
    [1] 25.44876

Finally, I like to confirm calculations like this using simulation.
    n_sims <- 1e5
    steps_minimum <- rep(NA, n_sims)
    pb <- winProgressBar(max=n_sims)
    for ( ii in 1:n_sims ) {
      setWinProgressBar(pb, ii, paste(ii,"of",n_sims))
      set.seed(ii)   # for reproducibility
      states <- matrix(FALSE, nrow=6, ncol=n_players)
      n_steps <- 0
      while ( TRUE ) {
        n_steps <- n_steps+1
        for ( jj in 1:n_players ) {
          roll <- sample(1:6, 1)
          states[roll,jj] <- !states[roll,jj]
        }
        if ( any(colSums(states) == 6) ) {
          steps_minimum[ii] <- n_steps
          break
        }
      }
    }
    close(pb)

The distribution of the numbers of steps needed in our $10^5$ simulated games matches the calculated $p^4_k$ rather well:

    result <- structure(rep(0, length(probs_minimum)),
      .Names=seq_along(probs_minimum))
    result[names(table(steps_minimum))] <- as.vector(table(steps_minimum))/n_sims
    cbind(result, probs_minimum)[1:30,]
        result probs_minimum
    1  0.00000    0.00000000
    2  0.00000    0.00000000
    3  0.00000    0.00000000
    4  0.00000    0.00000000
    5  0.00000    0.00000000
    6  0.06063    0.06031414
    7  0.00000    0.00000000
    8  0.08072    0.07919228
    9  0.00000    0.00000000
    10 0.08037    0.08026479
    11 0.00000    0.00000000
    12 0.07382    0.07543464
    13 0.00000    0.00000000
    14 0.06826    0.06905406
    15 0.00000    0.00000000
    16 0.06409    0.06260212
    17 0.00000    0.00000000
    18 0.05668    0.05654555
    19 0.00000    0.00000000
    20 0.05180    0.05100393
    21 0.00000    0.00000000
    22 0.04570    0.04598101
    23 0.00000    0.00000000
    24 0.04078    0.04144437
    25 0.00000    0.00000000
    26 0.03749    0.03735245
    27 0.00000    0.00000000
    28 0.03241    0.03366354
    29 0.00000    0.00000000
    30 0.03026    0.03033861

Finally, the mean of the steps needed in the simulated games also matches the calculated expectation quite well:

    mean(steps_minimum)
    [1] 25.43862
Expected rolls to roll every number on a dice an odd number of times
I think I've found the answer for the single-player case. If we write $e_{i}$ for the expected remaining length of the game when $i$ cards are face down, then we can work out that:

(i). $e_{5} = \frac{1}{6}(1) + \frac{5}{6}(e_{4} + 1)$
(ii). $e_{4} = \frac{2}{6}(e_{5} + 1) + \frac{4}{6}(e_{3} + 1)$
(iii). $e_{3} = \frac{3}{6}(e_{4} + 1) + \frac{3}{6}(e_{2} + 1)$
(iv). $e_{2} = \frac{4}{6}(e_{3} + 1) + \frac{2}{6}(e_{1} + 1)$
(v). $e_{1} = \frac{5}{6}(e_{2} + 1) + \frac{1}{6}(e_{0} + 1)$
(vi). $e_{0} = \frac{6}{6}(e_{1} + 1)$

(vi) and (v) then give us (vii). $e_{1} = e_{2} + \frac{7}{5}$; (vii) and (iv) then give us (viii). $e_{2} = e_{3} + \frac{11}{5}$; (viii) and (iii) then give us (ix). $e_{3} = e_{4} + \frac{21}{5}$; (ix) and (ii) then give us (x). $e_{4} = e_{5} + \frac{57}{5}$; (x) and (i) then give us $e_{5} = 63$.

We can then add up to get $e_{0} = 63 + \frac{57}{5} + \frac{21}{5} + \frac{11}{5} + \frac{7}{5} + 1 = 83.2$.

Now, how would one generalize this to find the expected length of a game with $n$ players?
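Since (i)-(vi) form a linear system, one quick sanity check (my own sketch, in Python) is to solve it numerically. The general pattern with $i$ cards face down is $ e_i = \frac{i}{6}(e_{i-1}+1) + \frac{6-i}{6}(e_{i+1}+1) $, with $e_6 = 0$ for the absorbing state:

    import numpy as np

    # Rows encode e_i - (i/6) e_{i-1} - ((6-i)/6) e_{i+1} = 1 for i = 0..5;
    # the absorbing state e_6 = 0 contributes nothing.
    A = np.zeros((6, 6))
    b = np.ones(6)
    for i in range(6):
        A[i, i] = 1.0
        if i > 0:
            A[i, i - 1] = -i / 6
        if i < 5:
            A[i, i + 1] = -(6 - i) / 6

    e = np.linalg.solve(A, b)
    print(e[0])   # 83.2, matching the hand calculation

For $n$ players, the expected minimum of $n$ such absorption times no longer satisfies a simple recursion of this kind, which is why the other answer works with the full first-passage distribution $p_k$ instead.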
Random Forest mtry Question
No, that's not how this works. Consider a single tree being added to a Random Forest (RF) model. The standard recursive partitioning algorithm would start with all the data and do an exhaustive search over all variables and possible split points to find the one that best "explained" the entire data - reduced the node impurity the most. The data are split according to the best split point and the process repeated in the left and right leaves in turn, recursively, until some stopping rules are met. The key thing here is that each time the recursive partitioning algorithm looks for a split all the variables are included in the search. Where RF models differ is that when forming each split in a tree, the algorithm randomly selects mtry variables from the set of predictors available. Hence when forming each split a different random set of variables is selected within which the best split point is chosen. Hence for large trees, which is what RFs use, it is at least conceivable that all variables might be used at some point when searching for split points whilst growing the tree.
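To make the per-split sampling concrete, here is a rough sketch (hypothetical code, not taken from any particular RF implementation) of how a single split might be chosen; the key point is that the candidate features are re-drawn at every call, i.e., at every split:

    import random

    def gini(labels):
        # Gini impurity of a list of class labels
        n = len(labels)
        return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

    def choose_split(X, y, mtry):
        # A fresh random subset of mtry features is drawn at EVERY split,
        # not once per tree; the best split is sought only within it.
        n_features = len(X[0])
        candidates = random.sample(range(n_features), mtry)
        best = None
        for j in candidates:
            for t in sorted(set(row[j] for row in X)):
                left = [yi for row, yi in zip(X, y) if row[j] <= t]
                right = [yi for row, yi in zip(X, y) if row[j] > t]
                if not left or not right:
                    continue
                # total child impurity, weighted by child size
                score = gini(left) * len(left) + gini(right) * len(right)
                if best is None or score < best[0]:
                    best = (score, j, t)
        return best   # (impurity, feature index, threshold)

With mtry equal to the total number of features, this reduces to the standard exhaustive search described above.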
Unequal sample sizes: When to call it quits
Statistical tests do not make assumptions about sample size. There are, of course, differing assumptions with various tests (e.g., normality), but the equality of sample sizes is not one of them. Unless the test used is inappropriate in some other way (I can't think of an issue right now), the type I error rate will not be affected by drastically unequal group sizes. Moreover, their phrasing implies (to my mind) that they believe it will. Thus, they are confused about these issues.

On the other hand, type II error rates very much will be affected by highly unequal $n$s. This will be true no matter what the test (e.g., the $t$-test, Mann-Whitney $U$-test, or $z$-test for equality of proportions will all be affected in this way). For an example of this, see my answer here: How should one interpret the comparison of means from different sample sizes? Thus, they may well be "justified in throwing in the towel" with respect to this issue. (Specifically, if you expect to get a non-significant result whether the effect is real or not, what is the point of the test?) As the sample sizes diverge, statistical power will converge to $\alpha$.

This fact actually leads to a different suggestion, which I suspect few people have ever heard of and would probably have trouble getting past reviewers (no offense intended): a compromise power analysis. The idea is relatively straightforward: In any power analysis, $\alpha$, $\beta$, $n_1$, $n_2$, and the effect size $d$ exist in relationship to each other. Having specified all but one, you can solve for the last. Typically, people do what is called an a-priori power analysis, in which you solve for $N$ (generally you are assuming $n_1=n_2$). On the other hand, you can fix $n_1$, $n_2$, and $d$, and solve for $\alpha$ (or equivalently $\beta$), if you specify the ratio of type I to type II error rates that you are willing to live with. Conventionally, $\alpha=.05$ and $\beta=.20$, so you are saying that type I errors are four times worse than type II errors. Of course, a given researcher might disagree with that, but having specified a given ratio, you can solve for what $\alpha$ you should be using in order to possibly maintain some adequate power. This approach is a logically valid option for the researchers in this situation, although I acknowledge the exoticness of this approach may make it a tough sell in the larger research community that probably has never heard of such a thing.
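A compromise power analysis is straightforward to sketch numerically. Here is a rough illustration (my own code, using a normal-approximation two-sample test; software such as G*Power implements exact versions, and the example values for $n_1$, $n_2$, $d$ are arbitrary): fix $n_1$, $n_2$, $d$, and the error ratio $q = \beta/\alpha$ you are willing to live with, then solve $\beta(\alpha) = q\,\alpha$ for $\alpha$:

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    n1, n2, d, q = 100, 10, 0.5, 4.0   # arbitrary example values

    def beta_given_alpha(alpha):
        # two-sided two-sample z-test; noncentrality for unequal group sizes
        delta = d * np.sqrt(n1 * n2 / (n1 + n2))
        z_crit = norm.ppf(1 - alpha / 2)
        power = 1 - norm.cdf(z_crit - delta) + norm.cdf(-z_crit - delta)
        return 1 - power

    # solve beta(alpha) = q * alpha for alpha
    alpha = brentq(lambda a: beta_given_alpha(a) - q * a, 1e-6, 0.5)
    print(alpha, beta_given_alpha(alpha))

The resulting $\alpha$ is typically much larger than the conventional .05 when the design is badly underpowered, which is exactly the point: it makes the implied trade-off between the two error rates explicit.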
Unequal sample sizes: When to call it quits
While the answer from @gung is excellent, I think there is one important issue that should be considered when looking at wildly different group sizes. Generally, as long as all the requirements of the test are fulfilled, the difference in group sizes is not important. However, in some cases the different group sizes will have a dramatic effect on the robustness of the test against violations of these assumptions.

The classical two-sample unpaired t-test, for example, assumes variance homogeneity and is robust against violations only if both groups are similarly sized (in order of magnitude). Otherwise, higher variance in the smaller group will lead to type I errors. Now, with the t-test this is not much of a problem, since commonly the Welch t-test is used instead and it does not assume variance homogeneity. However, similar effects can arise in linear models.

In summary, I would say that this is in no way a hindrance to a statistical analysis, but it has to be kept in mind when deciding how to proceed.
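The inflation is easy to demonstrate by simulation. A quick sketch (my own illustration; the group sizes and variances are arbitrary): both groups share the same mean, so every rejection is a type I error, and the pooled-variance t-test rejects far more often than the nominal 5% while Welch's version stays close to it:

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    n_sims = 5000
    rejections = {"pooled": 0, "welch": 0}
    for _ in range(n_sims):
        small = rng.normal(0, 3, size=10)    # small group, large variance
        large = rng.normal(0, 1, size=100)   # large group, small variance
        rejections["pooled"] += ttest_ind(small, large, equal_var=True).pvalue < 0.05
        rejections["welch"] += ttest_ind(small, large, equal_var=False).pvalue < 0.05
    print({k: v / n_sims for k, v in rejections.items()})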
How to test uniformity in several dimensions?
The standard method uses Ripley's K function or something derived from it such as an L function. This is a plot that summarizes the average number of neighbors of the points as a function of maximum distance apart ($\rho$). For a uniform distribution in $n$ dimensions, that average ought to behave like $\rho^n$: and it always will for small $\rho$. It departs from such behavior due to clustering, other forms of spatial non-independence, and edge effects (whence it is crucial to specify the region sampled by the points). Because of this complication--which gets worse as $n$ increases--in most applications a confidence band is erected for the null K function via simulation, and the observed K function is overplotted to detect excursions. With some thought and experience, the excursions can be interpreted in terms of tendencies to cluster or not at certain distances.

[Figure: Examples of a K function and its associated L-function, from Dixon (2001), ibid.]

The L function is constructed so that $L(\rho)-\rho$ for a uniform distribution is the horizontal line at zero: a good visual reference. The dashed lines are confidence bands for this particular study area, computed via simulation. The solid gray trace is the L function for the data. The positive excursion at distances 0-20 m indicates some clustering at these distances.

I posted a worked example in response to a related question at https://stats.stackexchange.com/a/7984, where a plot derived from the K-function for a uniform distribution on a two-dimensional manifold embedded in $\mathbb{R}^3$ is estimated by simulation.

In R, the spatstat functions Kest and K3est compute the K-function for $n=2$ and $n=3$, respectively. In more than 3 dimensions you are probably on your own, but the algorithms would be exactly the same. You could do the computations from a distance matrix as computed (with moderate efficiency) by stats::dist.
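For intuition, a naive K-function estimate in $n$ dimensions can be computed directly from the pairwise distances. This sketch (my own code) ignores edge corrections entirely, which is exactly the complication the proper estimators handle, so it is only indicative at small $\rho$:

    import numpy as np
    from scipy.spatial.distance import pdist

    def k_function(points, radii, volume=1.0):
        # naive estimate: average neighbour count within rho, divided by
        # the intensity; no edge correction (rough sketch only)
        n = len(points)
        d = pdist(points)            # condensed pairwise distance vector
        intensity = n / volume
        return np.array([2 * np.sum(d <= r) / n / intensity for r in radii])

    rng = np.random.default_rng(0)
    pts = rng.uniform(size=(500, 3))        # uniform sample in [0,1]^3
    radii = np.linspace(0.01, 0.2, 10)
    khat = k_function(pts, radii)
    csr = 4 / 3 * np.pi * radii ** 3        # theoretical K for uniformity in 3D
    print(np.round(khat / csr, 2))          # close to 1 at small radii

The ratio drifts below 1 as $\rho$ grows because points near the boundary of the cube have part of their neighbourhood outside the sampled region, which is why real implementations apply edge corrections or simulation bands.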
How to test uniformity in several dimensions?
It turns out that the question is more difficult than I thought. Still, I did my homework, and after looking around I found two methods in addition to Ripley's functions to test uniformity in several dimensions.

I made an R package called unf that implements both tests. You can download it from github at https://github.com/gui11aume/unf. A large part of it is in C, so you will need to compile it on your machine with R CMD INSTALL unf. The articles on which the implementation is based are in pdf format in the package.

The first method comes from a reference mentioned by @Procrastinator (Testing multivariate uniformity and its applications, Liang et al., 2000) and allows one to test uniformity on the unit hypercube only. The idea is to design discrepancy statistics that are asymptotically Gaussian by the Central Limit Theorem. This allows one to compute a $\chi^2$ statistic, which is the basis of the test.

    library(unf)
    set.seed(123)
    # Put 20 points uniformly in the 5D hypercube.
    x <- matrix(runif(100), ncol=20)
    liang(x) # Outputs the p-value of the test.
    [1] 0.9470392

The second approach is less conventional and uses minimum spanning trees. The initial work was performed by Friedman & Rafsky in 1979 (reference in the package) to test whether two multivariate samples come from the same distribution. The image below illustrates the principle. Points from two bivariate samples are plotted in red or blue, depending on their original sample (left panel). The minimum spanning tree of the pooled sample in two dimensions is computed (middle panel). This is the tree with minimum sum of edge lengths. The tree is decomposed into subtrees in which all the points have the same labels (right panel). In the figure below, I show a case where the blue dots are aggregated, which reduces the number of trees at the end of the process, as you can see in the right panel.

Friedman and Rafsky have computed the asymptotic distribution of the number of trees that one obtains in this process, which allows one to perform a test. This idea was developed into a general test for the uniformity of a multivariate sample by Smith and Jain in 1984, and implemented by Ben Pfaff in C (reference in the package). A second sample is generated uniformly in the approximate convex hull of the first sample, and the test of Friedman and Rafsky is performed on the two-sample pool. The advantage of the method is that it tests uniformity on every convex multivariate shape, and not only on the hypercube. The strong disadvantage is that the test has a random component, because the second sample is generated at random. Of course, one can repeat the test and average the results to get a reproducible answer, but this is not handy.

Continuing the previous R session, here is how it goes.

    pfaff(x) # Outputs the p-value of the test.
    pfaff(x) # Most likely another p-value.

Feel free to copy/fork the code from github.
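The Friedman-Rafsky building block is easy to prototype. Here is a rough sketch (my own code, not the unf package): build the minimum spanning tree of the pooled sample and count the edges that join points from different samples; the number of single-label subtrees is that count plus one. Under the null the labels are well mixed, giving many cross edges; when the samples differ, the count drops:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree

    def cross_edges(sample1, sample2):
        # MST of the pooled sample, then count edges with mixed labels
        pooled = np.vstack([sample1, sample2])
        labels = np.r_[np.zeros(len(sample1)), np.ones(len(sample2))]
        mst = minimum_spanning_tree(squareform(pdist(pooled))).tocoo()
        return int(np.sum(labels[mst.row] != labels[mst.col]))

    rng = np.random.default_rng(2)
    same = cross_edges(rng.uniform(size=(50, 2)), rng.uniform(size=(50, 2)))
    diff = cross_edges(rng.uniform(size=(50, 2)), 0.2 * rng.uniform(size=(50, 2)))
    print(same, diff)   # fewer cross edges when the samples differ

The actual test then compares the observed count with the asymptotic null distribution derived by Friedman and Rafsky.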
How to test uniformity in several dimensions?
Would the pair $(U,Z)$ be dependent uniforms, where $U \sim {\rm Uniform}(0,1)$ and $Z=U$ with probability $0<p<1$ and $Z=W$ with probability $1-p$, where $W$ is also ${\rm Uniform}(0,1)$ and independent of $U$? For independent random variables in $n$ dimensions, divide the $n$-dimensional unit cube into a set of smaller disjoint cubes with the same side length. Then do a $\chi^2$ test for uniformity. This will only work well if $n$ is small, say 3-5.
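A minimal R sketch of the subcube-counting idea above, assuming points are the rows of a matrix on the unit cube: bin each coordinate, count points per cell, and compare against the uniform expectation with chisq.test.

    # Chi-squared test of uniformity on the unit cube via equal-sized subcubes.
    set.seed(1)
    n <- 1000; d <- 3; k <- 4            # sample size, dimension, bins per axis
    x <- matrix(runif(n * d), ncol = d)  # n points in the d-dimensional cube
    # Map each point to a cell index in {1, ..., k^d}.
    bins <- apply(x, 2, function(col) pmin(floor(col * k), k - 1))
    cell <- as.vector(bins %*% k^(0:(d - 1))) + 1
    counts <- tabulate(cell, nbins = k^d)
    chisq.test(counts)                   # uniform null: equal expected counts

With $k^d$ cells, the expected count per cell shrinks quickly as the dimension grows, which is why the answer warns the approach only works well in low dimensions.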
17,024
How to plot data output of clustering?
Usually you'd plot the original values in a scatterplot (or a matrix of scatterplots if you have many of them) and use colour to show your groups. You asked for an answer in python, and you can actually do all the clustering and plotting with scipy, numpy and matplotlib:

Start by making some data

    import numpy as np
    from scipy import cluster
    from matplotlib import pyplot

    np.random.seed(123)
    tests = np.reshape(np.random.uniform(0, 100, 60), (30, 2))
    #tests[1:4]
    #array([[ 22.68514536,  55.13147691],
    #       [ 71.94689698,  42.31064601],
    #       [ 98.07641984,  68.48297386]])

How many clusters? This is the hard thing about k-means, and there are lots of methods. Let's use the elbow method

    # plot the distortion for each value of 'k' between 1 and 9
    initial = [cluster.vq.kmeans(tests, i) for i in range(1, 10)]
    pyplot.plot([var for (cent, var) in initial])
    pyplot.show()

Assign your observations to classes, and plot them

I reckon index 3 (i.e. 4 clusters) is as good as any so

    cent, var = initial[3]
    # use vq() to get an assignment for each obs.
    assignment, cdist = cluster.vq.vq(tests, cent)
    pyplot.scatter(tests[:, 0], tests[:, 1], c=assignment)
    pyplot.show()

Just work out where you can stick whatever you've already done into that workflow (and I hope your clusters are a bit nicer than the random ones!)
17,025
How to plot data output of clustering?
Perhaps try something like Fastmap to plot your set of marks using their relative distances. The blog (still) nothing clever has written up Fastmap in python to plot strings, and it could easily be updated to handle lists of attributes if you wrote your own distance metric. Below is a standard Euclidean-style distance I use that takes two lists of attributes as parameters (mismatching non-numeric attributes add 1 to the squared distance). If your lists have a class value, don't use it in the distance calculation.

    import math

    def distance(vecone, vectwo):
        # Accumulate squared differences for numeric attributes and a
        # unit penalty for mismatching non-numeric attributes.
        d = 0.0
        for i in range(len(vecone)):
            if isnumeric(vecone[i]):
                d += (float(vecone[i]) - float(vectwo[i])) ** 2
            elif vecone[i] != vectwo[i]:
                d += 1.0
        return math.sqrt(d)

    def isnumeric(s):
        try:
            float(s)
            return True
        except ValueError:
            return False
17,026
How to plot data output of clustering?
I'm not a python expert, but it is extremely helpful to plot the first two principal components against each other on the x and y axes. Not sure which packages you are using, but here is a sample link: http://pyrorobotics.org/?page=PyroModuleAnalysis
17,027
Why would they pick a gamma distribution here?
When you're considering simple parametric models for the conditional distribution of data (i.e. the distribution of each group, or the expected distribution for each combination of predictor variables), and you are dealing with a positive continuous distribution, the two common choices are Gamma and log-Normal. Besides satisfying the specification of the domain of the distribution (real numbers greater than zero), these distributions are computationally convenient and often make mechanistic sense.

The log-Normal distribution is easily derived by exponentiating a Normal distribution (conversely, log-transforming log-Normal deviates gives Normal deviates). From a mechanistic point of view, the log-Normal arises via the Central Limit Theorem when each observation reflects the product of a large number of iid random variables. Once you've log-transformed the data, you have access to a huge variety of computational and analytical tools (e.g., anything assuming Normality or using least-squares methods).

As your question points out, one way that a Gamma distribution arises is as the distribution of the waiting time until $n$ independent events occur, each at a constant rate $\lambda$. I can't easily find a reference for a mechanistic model of Gamma distributions of insurance claims, but it also makes sense to use a Gamma distribution from a phenomenological (i.e., data description/computational convenience) point of view. The Gamma distribution is part of the exponential family (which includes the Normal but not the log-Normal), which means that all of the machinery of generalized linear models is available; it also has a particularly convenient form for analysis.

There are other reasons one might pick one or the other - for example, the "heaviness" of the tail of the distribution, which might be important in predicting the frequency of extreme events. There are plenty of other positive, continuous distributions (e.g. see this list), but they tend to be used in more specialized applications.

Very few of these distributions will capture the multi-modality you see in the marginal distributions above, but multi-modality may be explained by the data being grouped into categories described by observed categorical predictors. If there are no observable predictors that explain the multimodality, one might choose to fit a finite mixture model based on a mixture of a (small, discrete) number of positive continuous distributions.
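A minimal R sketch of the two phenomenological options discussed above, on simulated claim-like data; the variable names are made up, and neither model is claimed to be the "right" one for any real insurance data.

    # Simulated positive response with one predictor.
    set.seed(42)
    n <- 500
    x <- runif(n)
    mu <- exp(1 + 2 * x)                           # conditional mean
    claims <- rgamma(n, shape = 2, rate = 2 / mu)  # Gamma with mean mu

    # Gamma GLM: exploits the exponential-family / GLM machinery.
    fit_gamma <- glm(claims ~ x, family = Gamma(link = "log"))

    # Log-Normal alternative: ordinary least squares on the log scale.
    fit_lnorm <- lm(log(claims) ~ x)

    summary(fit_gamma)$coefficients
    summary(fit_lnorm)$coefficients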
17,028
In survival analysis, when should we use fully parametric models over semi-parametric ones?
When you know the actual functional form of the hazard function, the fully parametric survival model is far more efficient than the Cox model. Statistical efficiency is like power. A good way to think of it is the width of the confidence interval for your final estimate of the log-hazard ratios: a tight CI is the result of an efficient analysis (assuming you have an unbiased estimator).

Exponential and Weibull survival models are indeed popular examples of "known" hazard functions (constant in time and proportional to a power of time, respectively). But you could have any old baseline hazard function $\lambda(t)$, and calculate the expected survival at any time for any combination of covariates, given a parameter estimate $\theta$, as: $$S(\theta, t) = \exp(-\Lambda(t)\exp(\theta \mathbf{X}))$$ where $\Lambda(t)$ is the cumulative baseline hazard. An iterative EM-type solver would lead to maximum likelihood estimates of $\theta$.

It is a neat fact that, assuming a constant hazard, the relative efficiency of the Cox model to the Weibull model to the Exponential fully parametric survival model is 3:2:1. That is, when the data are actually exponential, it will take 9 times as many observations under a Cox model to produce a confidence interval for the effect estimate $\theta$ with the same expected half-width as that of the exponential survival model. You must use what you know when you know it, but never assume wrongly.
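A quick R simulation of the efficiency point, using the survival package; this is a sketch on data with no censoring, and the exact efficiency gap in practice depends on censoring and sample size.

    library(survival)
    set.seed(7)
    n <- 2000
    x <- rbinom(n, 1, 0.5)
    time <- rexp(n, rate = exp(0.5 * x))   # true constant hazard, log-HR = 0.5
    status <- rep(1, n)                    # no censoring, for simplicity

    cox  <- coxph(Surv(time, status) ~ x)
    expo <- survreg(Surv(time, status) ~ x, dist = "exponential")

    # Compare standard errors of the effect estimates. Note survreg uses the
    # AFT parameterization: for the exponential model its coefficient is
    # minus the log hazard ratio, so the SEs are directly comparable.
    sqrt(diag(vcov(cox)))   # Cox partial likelihood
    sqrt(diag(vcov(expo)))  # fully parametric exponential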
17,029
In survival analysis, when should we use fully parametric models over semi-parametric ones?
This has been studied in detail for many years and there is a large literature. I really like spline hazard models. The simplest answer to your question is this:

- If you want to estimate covariate effects, especially in the absence of time-dependent covariates, then semiparametric models such as the Cox proportional hazards model are usually preferred because they are fast, robust, and Y-transformation invariant
- Flexible parametric models are a little more efficient for estimating absolute quantities such as survival curves
- Parametric models provide a formula that makes prediction easier
- If you can integrate the hazard function analytically when time-dependent covariates are present, parametric models provide faster prediction and more intuition
- Parametric models can extrapolate (but beware) to yield survival estimates beyond the last follow-up time, and to estimate expected (mean) survival time

In summary, I'd say the main reason to like parametric survival models is not efficiency, but rather ease of interpretation and of obtaining predictions for future observations. See this paper for example.
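Since the answer mentions liking spline hazard models, here is a hedged sketch using the flexsurv package's Royston-Parmar flexible parametric models on the survival package's lung data; check the flexsurv documentation for the exact interface and knot-selection advice.

    library(survival)
    library(flexsurv)

    # Royston-Parmar model: a natural cubic spline (2 internal knots) for the
    # baseline log cumulative hazard, with a proportional-hazards covariate.
    fit <- flexsurvspline(Surv(time, status) ~ ph.ecog, data = lung,
                          k = 2, scale = "hazard")
    fit                                                # coefficients, knots
    summary(fit, type = "survival", t = c(100, 365))   # predicted survival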
17,030
In survival analysis, when should we use fully parametric models over semi-parametric ones?
I've spent a lot of time working with the general case of interval censoring, i.e., when the event time may be known exactly, right or left censored, or only known up to an interval. For example, suppose a part is inspected and passed at $T_1$ and then inspected again at $T_2$ and failed. Then all we know is that it failed in the interval $(T_1, T_2]$.

In the interval-censored case, while we can use bootstrap + asymptotic normality to make inference about the regression coefficients, this is not the case for the baseline survival curve itself. Thus, if one wants to make inference about actual survival times and not just hazard ratios, one needs to use the fully parametric model. As such, the semi-parametric model is often used more to check model fit than for full inference about survival times. Of course, this is not the case for right-censored data. I would guess that the confidence intervals for the survival estimates are a bit tighter for a fully parametric model, although I have not tested that. In fact, see @AdamO's answer for more on that.

As another point, the AFT model does not have a semi-parametric version (in the sense of a Kaplan-Meier-like baseline distribution), even for right-censored or uncensored data. Or, more precisely, the model is very difficult to optimize. The reason is that you can think of the AFT model as rescaling the times, whereas the proportional hazards or odds models rescale the survival probabilities. The issue is that in a semi-parametric model, the only way in which event or censoring times affect the likelihood is through their relative ranks. Small enough movements of the event times will not change the ranks at all (assuming no ties in the data), meaning the derivatives are all zero without ties. And when there are ties, the derivatives are unbounded! Not a very fun optimization problem. Given that the AFT model is more resilient to missing covariates and more interpretable, there's a strong argument to use AFT, even though there is no semi-parametric model.

One more reason to favor parametric models over semi-parametric ones is that they can be easier to generalize. For example, if one wants to perform a Bayesian analysis, it's much easier with a parametric model. Or if one wants to build a cure-rate model, this is non-identifiable for a semi-parametric model, but identifiable for a parametric model.
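A minimal R sketch of a fully parametric fit to interval-censored data, using survreg from the survival package; the inspection-grid data below are simulated purely for illustration.

    library(survival)
    set.seed(11)
    n <- 300
    x <- rnorm(n)
    true_t <- rweibull(n, shape = 1.5, scale = exp(0.5 * x))

    # Each unit is only inspected on a grid, so we observe an interval.
    grid <- seq(0, 10, by = 0.5)
    left  <- sapply(true_t, function(t) max(grid[grid < t]))
    right <- sapply(true_t, function(t) min(grid[grid >= t]))
    left[left == 0] <- NA            # failed before the first inspection: left-censored
    right[!is.finite(right)] <- NA   # never failed by the last inspection: right-censored

    # Parametric (Weibull AFT) model for interval-censored times.
    fit <- survreg(Surv(left, right, type = "interval2") ~ x,
                   dist = "weibull")
    summary(fit)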
17,031
What is principal subspace in probabilistic PCA?
This is an excellent question.

Probabilistic PCA (PPCA) is the following latent variable model \begin{align} \mathbf z &\sim \mathcal N(\mathbf 0, \mathbf I) \\ \mathbf x &\sim \mathcal N(\mathbf W \mathbf z + \boldsymbol \mu, \sigma^2 \mathbf I), \end{align} where $\mathbf x\in\mathbb R^p$ is one observation and $\mathbf z\in\mathbb R^q$ is a latent variable vector; usually $q\ll p$. Note that this differs from factor analysis in only one little detail: the error covariance structure in PPCA is $\sigma^2 \mathbf I$ and in FA it is an arbitrary diagonal matrix $\boldsymbol \Psi$.

Tipping & Bishop, 1999, Probabilistic Principal Component Analysis prove the following theorem: the maximum likelihood solution for PPCA can be obtained analytically and is given by (Eq. 7): $$\mathbf W_\mathrm{ML} = \mathbf U_q (\boldsymbol \Lambda_q - \sigma_\mathrm{ML}^2 \mathbf I)^{1/2} \mathbf R,$$ where $\mathbf U_q$ is a matrix of $q$ leading principal directions (eigenvectors of the covariance matrix), $\boldsymbol \Lambda_q$ is the diagonal matrix of corresponding eigenvalues, $\sigma_\mathrm{ML}^2$ is also given by an explicit formula, and $\mathbf R$ is an arbitrary $q\times q$ rotation matrix (corresponding to rotations in the latent space).

The ppca() function implements the expectation-maximization algorithm to fit the model, but we know that it must converge to the $\mathbf W_\mathrm{ML}$ given above.

Your question is: how do you get $\mathbf U_q$ if you know $\mathbf W_\mathrm{ML}$? The answer is that you can simply use the singular value decomposition of $\mathbf W_\mathrm{ML}$. The formula above is already of the form orthogonal matrix times diagonal matrix times orthogonal matrix, so it gives the SVD, and as the SVD is unique, you will get $\mathbf U_q$ as the left singular vectors of $\mathbf W_\mathrm{ML}$. That is exactly what Matlab's ppca() function is doing in line 305:

    % Orthogonalize W to the standard PCA subspace
    [coeff,~] = svd(W,'econ');

Can I assume the principal subspace is spanned only by a unique set of orthonormal vectors?

No! There is an infinite number of orthogonal bases spanning the same principal subspace. If you apply some arbitrary orthogonalization process to $\mathbf W_\mathrm{ML}$ you are not guaranteed to obtain $\mathbf U_q$. But if you use SVD or something equivalent, then it will work.
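A small numerical check of the theorem in R, using pure linear algebra rather than an actual PPCA fit; the orthogonal matrix R below plays the role of the arbitrary rotation in the theorem.

    set.seed(1)
    p <- 6; q <- 2; sigma2 <- 0.5

    # Build a covariance matrix and its leading eigenvectors/eigenvalues.
    A <- matrix(rnorm(p * p), p); S <- crossprod(A) + diag(p)
    e <- eigen(S)
    Uq <- e$vectors[, 1:q]; Lq <- diag(e$values[1:q])

    # Construct W_ML = Uq (Lq - sigma2 I)^{1/2} R for a random orthogonal R.
    R <- qr.Q(qr(matrix(rnorm(q * q), q)))
    W <- Uq %*% sqrt(Lq - sigma2 * diag(q)) %*% R

    # Recover Uq (up to column signs) as the left singular vectors of W.
    U_hat <- svd(W)$u
    max(abs(abs(U_hat) - abs(Uq)))   # ~ 0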
17,032
Why doesn't measurement error in the dependent variable bias the results?
When you want to estimate a simple model like $$Y_i = \alpha + \beta X_i + \epsilon_i$$ and instead of the true $Y_i$ you only observe it with some error $\widetilde{Y}_i = Y_i + \nu_i$ which is such that it is uncorrelated with $X$ and $\epsilon$, if you regress $$\widetilde{Y}_i = \alpha + \beta X_i + \epsilon_i$$ your estimated $\beta$ is $$ \begin{align} \widehat{\beta} &= \frac{Cov(\widetilde{Y}_i,X_i)}{Var(X_i)} \newline &= \frac{Cov(Y_i + \nu_i,X_i)}{Var(X_i)} \newline &= \frac{Cov(\alpha + \beta X_i + \epsilon_i + \nu_i,X_i)}{Var(X_i)} \newline &= \frac{Cov(\alpha ,X_i)}{Var(X_i)} + \beta\frac{Cov(X_i,X_i)}{Var(X_i)} + \frac{Cov(\epsilon_i,X_i)}{Var(X_i)} + \frac{Cov(\nu_i,X_i)}{Var(X_i)} \newline &= \beta \frac{Var(X_i)}{Var(X_i)} \newline &= \beta \end{align} $$ because the covariance between a random variable and a constant ($\alpha$) is zero as well as the covariances between $X_i$ and $\epsilon_i, \nu_i$ since we assumed that they are uncorrelated. So you see that your coefficient is consistently estimated. The only worry is that $\widetilde{Y}_i = Y_i + \nu_i = \alpha + \beta X_i + \epsilon_i + \nu_i$ gives you an additional term in the error which reduces the power of your statistical tests. In very bad cases of such measurement error in the dependent variable you may not find a significant effect even though it might be there in reality. Generally, instrumental variables will not help you in this case because they tend to be even more imprecise than OLS and they can only help with measurement error in the explanatory variable.
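A quick R simulation of this result (a sketch with made-up parameters): the point estimate stays centered on the true $\beta$ while its standard error inflates.

    set.seed(3)
    n <- 10000
    x <- rnorm(n)
    y <- 1 + 2 * x + rnorm(n)          # true model, beta = 2
    y_tilde <- y + rnorm(n, sd = 3)    # measurement error in the outcome

    coef(summary(lm(y ~ x)))["x", ]        # clean Y: small SE
    coef(summary(lm(y_tilde ~ x)))["x", ]  # noisy Y: same beta, larger SE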
17,033
Why doesn't measurement error in the dependent variable bias the results?
Regression analysis answers the question, "What is the AVERAGE Y value for those who have given X values?" or, equivalently, "How much is Y predicted to change ON AVERAGE if we change X by one unit?" Random measurement error doesn't change the average values of a variable, or the average values for subsets of individuals, so random error in the dependent variable will not bias regression estimates.

Let's say you have height data on a sample of individuals. These heights are very precisely measured, accurately reflecting everyone's true stature. Within the sample, the average for men is 175 cm and the average for women is 162 cm. If you use regression to calculate how well gender predicts height, you estimate the model $\mathit{HEIGHT = CONSTANT + β * GENDER + RESIDUAL}$

If women are coded as 0 and men as 1, $\mathit{CONSTANT}$ is the female average, or 162 cm. The regression coefficient $\mathit{β}$ shows how much height changes ON AVERAGE when you change $\mathit{GENDER}$ by one unit (from 0 to 1). $\mathit{β}$ equals 13 because people whose value for $\mathit{GENDER}$ is 0 (women) have a mean height of 162 cm while people whose value for $\mathit{GENDER}$ is 1 (men) have a mean height of 175 cm; $\mathit{β}$ estimates the average difference between men's and women's heights, which is 13 cm. ($\mathit{RESIDUAL}$ reflects the within-gender variance in height.)

Now, if you randomly add -1 cm or +1 cm to everyone's true height, what will happen? Individuals whose actual height is, say, 170 cm will now be reported as being 169 or 171 cm. However, the average of the sample, or of any subsample, will not change. Those whose actual height is 170 cm will average 170 cm in the new, erroneous dataset, women will average 162 cm, etc. If you rerun the regression model specified above using this new dataset, the (expected) value of $\mathit{β}$ will not change, because the average difference between men and women is still 13 cm, regardless of the measurement error. (The standard error of $\mathit{β}$ will be larger than before because the variance of the dependent variable is now larger.)

If there's measurement error in the independent variable rather than the dependent variable, $\mathit{β}$ will be a biased estimate. This is easy to understand when you consider the height example. If there's random measurement error in the $\mathit{GENDER}$ variable, some men will be erroneously coded as female and vice versa. The effect of this is to reduce apparent gender differences in height, because moving males to the female group makes the female mean larger, while moving females to the male group makes the male mean smaller. With measurement error in the independent variable, $\mathit{β}$ will be lower than the unbiased value of 13 cm.

While I used a categorical independent variable ($\mathit{GENDER}$) for simplicity here, the same logic applies to continuous variables. For example, if you used a continuous variable like birth height to predict adult height, the expected value of $\mathit{β}$ would be the same regardless of the amount of random error in adult height measurements.
17,034
Why doesn't measurement error in the dependent variable bias the results?
Another way to see this is an M-estimation argument. In general, OLS is consistent and asymptotically normal for data $(Y_i, X_i)$ coming from a model that satisfies $\mathbb E(Y_i|X_i) = X \beta$ and some mild regularity conditions, with $\hat \beta_\text{ols} \to \beta$. See Chapter 7 of [1], for example. Using Andy's notation from above, since $\mathbb E(\tilde Y_i|X_i) = X \beta$, we're in luck and the standard results apply. [1] Boos, Dennis D, and L. A Stefanski. Essential Statistical Inference. Vol. 120. Springer Texts in Statistics. New York, NY: Springer New York, 2013. https://doi.org/10.1007/978-1-4614-4818-1.
17,035
What are the differences between AUC and F1-score?
F1 score is applicable for any particular point of the ROC curve. This point may represent, for example, a particular threshold value in a binary classifier, and thus corresponds to a particular value of precision and recall. Remember, F score is a smart way to represent both recall and precision: for the F score to be high, both precision and recall should be high. Thus, the ROC curve spans many different threshold levels, and each point on it has its own F score value.
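A short R sketch of this point, on simulated classifier scores: each threshold (i.e., each point along the ROC curve) yields one precision, one recall, and therefore one F1.

    set.seed(5)
    n <- 1000
    y <- rbinom(n, 1, 0.4)        # true labels
    score <- y + rnorm(n)         # classifier score

    thresholds <- seq(-1, 2, by = 0.1)
    f1 <- sapply(thresholds, function(th) {
      pred <- as.integer(score > th)
      tp <- sum(pred == 1 & y == 1)
      precision <- tp / sum(pred == 1)
      recall    <- tp / sum(y == 1)
      2 * precision * recall / (precision + recall)
    })
    plot(thresholds, f1, type = "b")   # one F1 value per threshold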
17,036
What are the differences between AUC and F1-score?
AUC is of dimension [PRECISION]*[RECALL] and it is the area under ROC curve. F1 is for a fixed pair of precision and recall. So they are different. But there are some connections. See this: http://pages.cs.wisc.edu/~jdavis/davisgoadrichcamera2.pdf
17,037
What are the differences between AUC and F1-score?
The axes of an ROC curve are the true positive rate (recall, AKA sensitivity) and false positive rate (false alarm rate), not precision, AKA PPV, positive predictive value.
17,038
Variance-Covariance matrix interpretation
This matrix displays estimates of the variance and covariance between the regression coefficients. In particular, for your design matrix $\mathbf{X}$, and an estimate of the variance, $\widehat{\sigma}^2$, your displayed matrix is $\widehat{\sigma}^2(\mathbf{X}'\mathbf{X})^{-1}$. The diagonal entries are the variances of the regression coefficients and the off-diagonals are the covariances between the corresponding regression coefficients.

As far as assumptions go, apply the cov2cor() function to your variance-covariance matrix. This function will convert the given matrix to a correlation matrix, giving you estimates of the correlations between the regression coefficients. Hint: for this matrix, each of the correlations has a large magnitude.

To say something about this model in particular, we need point estimates of the regression coefficients.
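A short R illustration of the suggestion above, on a made-up model (the data are simulated, so only the mechanics matter here).

    set.seed(9)
    x1 <- rnorm(50); x2 <- x1 + rnorm(50, sd = 0.1)   # nearly collinear
    y <- 1 + x1 + x2 + rnorm(50)
    fit <- lm(y ~ x1 + x2)

    V <- vcov(fit)   # variance-covariance matrix of the coefficients
    cov2cor(V)       # correlations between the coefficient estimates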
17,039
Variance-Covariance matrix interpretation
@Donnie has provided a good answer (+1). Let me add a couple of points.

Running down the main diagonal of the variance-covariance matrix are the variances of the sampling distributions of your parameter estimates (i.e., the $\hat\beta_j$'s). Thus, taking the square roots of those values yields the standard errors that are reported with statistical output:

    SEs = sqrt(diag(vcov(Model1)))
    SEs
    # [1] 5.37569530 4.43883431 6.51701235 0.09634532

These are used to form confidence intervals and test hypotheses about your betas.

The off-diagonal elements would be $0$ if all variables were orthogonal, but your values are far from $0$. Using the cov2cor() function, or standardizing the covariances by the square roots of the constituent variables' variances, reveals that all variables are highly correlated ($|r| > .97$), so you have substantial multicollinearity. This makes your standard errors much larger than they would otherwise be. Likewise, it means that there is a great deal of information about the sampling distributions of the betas that is being left out of standard hypothesis tests ($\hat\beta_j/SE(\hat\beta_j)$), so you may want to use a sequential testing strategy based on type I sums of squares.
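Continuing the code above, a minimal sketch of the sequential (type I) strategy mentioned at the end; in base R this is what anova() on a fitted lm gives.

    anova(Model1)   # sequential (type I) sums of squares: each term is
                    # tested after adjusting only for the terms before it

Because type I tests depend on the order of the terms in the model formula, you may want to refit with different term orderings and compare.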
17,040
Mixing continuous and binary data with linear SVM?
SVMs will handle both binary and continuous variables as long as you do some preprocessing: all features should be scaled or normalised. After that step, from the algorithm's perspective it doesn't matter whether features are continuous or binary: for binary features, it sees samples that are either "far" away or very similar; for continuous features, there are also the in-between values. The choice of kernel doesn't matter with respect to the type of variables.
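A minimal R sketch of the preprocessing step, using the e1071 package's svm() (the data and variable names are invented; note that e1071::svm scales inputs by default, so the manual scaling is shown here only to make the step explicit).

    library(e1071)
    set.seed(2)
    n <- 200
    X <- data.frame(
      income    = rlnorm(n, 10, 1),      # continuous, wildly scaled
      owns_home = rbinom(n, 1, 0.5)      # binary
    )
    y <- factor(ifelse(log(X$income) + X$owns_home + rnorm(n) > 10.5,
                       "yes", "no"))

    # Standardize the continuous feature; map the binary one to {-1, +1}.
    X_scaled <- data.frame(
      income    = as.numeric(scale(X$income)),
      owns_home = 2 * X$owns_home - 1
    )
    fit <- svm(X_scaled, y, kernel = "linear", scale = FALSE)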
17,041
Mixing continuous and binary data with linear SVM?
Replicating my answer from http://www.quora.com/Machine-Learning/What-are-good-ways-to-handle-discrete-and-continuous-inputs-together/answer/Arun-Iyer-1

Rescale bounded continuous features: For all continuous inputs that are bounded, rescale them to $[-1, 1]$ through $x = \frac{2x - \max - \min}{\max - \min}$.

Standardize all continuous features: All continuous inputs should be standardized; by this I mean, for every continuous feature, compute its mean ($\mu$) and standard deviation ($\sigma$) and do $x = \frac{x - \mu}{\sigma}$.

Binarize categorical/discrete features: For all categorical features, represent them as multiple boolean features. For example, instead of having one feature called marriage_status, have 3 boolean features - married_status_single, married_status_married, married_status_divorced - and appropriately set these features to 1 or -1. As you can see, for every categorical feature, you are adding $k$ binary features, where $k$ is the number of values that the categorical feature takes.

Now you can represent all the features in a single vector, which we can assume to be embedded in $\mathbb{R}^n$, and start using off-the-shelf packages for classification/regression etc.

Addendum: If you use kernel-based methods, you can avoid this explicit embedding into $\mathbb{R}^n$ and focus on designing custom kernels for your feature vectors. You can even split your kernel into multiple kernels and use MKL models to learn weights over them. However, you may want to ensure positive semi-definiteness of your kernel so that the solver doesn't have any problems. If you are unsure whether you can design custom kernels, you can just follow the earlier embedding approach.
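A compact R sketch of the three steps above, on a toy data frame (the column names are made up for illustration).

    df <- data.frame(
      age      = c(23, 45, 31, 52),              # bounded continuous
      income   = c(30000, 90000, 52000, 71000),  # continuous
      marriage = factor(c("single", "married", "divorced", "married"))
    )

    # 1. Rescale a bounded feature to [-1, 1].
    rng <- range(df$age)
    age_rescaled <- (2 * df$age - rng[2] - rng[1]) / (rng[2] - rng[1])

    # 2. Standardize a continuous feature.
    income_std <- (df$income - mean(df$income)) / sd(df$income)

    # 3. Binarize the categorical feature (one boolean column per level),
    #    coded as +1 / -1 as in the text.
    dummies <- 2 * model.matrix(~ marriage - 1, data = df) - 1

    X <- cbind(age_rescaled, income_std, dummies)  # single numeric matrix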
17,042
Can Random Forest Methodology be Applied to Linear Regressions?
I partially disagree with the present answers, because the methodology random forests are built upon deliberately introduces randomness (CARTs built on bootstrapped samples + the random subspace method) in order to decorrelate the individual trees. Once you have orthogonal trees, the average of their predictions tends (in many cases) to be better than the prediction of the average tree (because of Jensen's inequality). Although CARTs have noticeable perks when subject to this treatment, the methodology definitely applies to any model, and linear models are no exception. Here is an R package which is exactly what you are looking for. It presents a nice tutorial on how to tune and interpret such models, and a bibliography on the subject: Random Generalized Linear Models.
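The linked package implements this properly; the following is only a hand-rolled sketch of the mechanics (bootstrap rows + a random feature subset per model, then average), with all names my own:

set.seed(1)
n <- 300; p <- 10
X <- matrix(rnorm(n * p), n, p)
y <- drop(X %*% rnorm(p)) + rnorm(n)
bagged_lm_predict <- function(X, y, Xnew, B = 200, mtry = 3) {
  preds <- replicate(B, {
    rows <- sample(nrow(X), replace = TRUE)   # bootstrap sample
    feat <- sample(ncol(X), mtry)             # random subspace
    fit  <- lm(y[rows] ~ X[rows, feat])
    drop(cbind(1, Xnew[, feat]) %*% coef(fit))
  })
  rowMeans(preds)                             # average the decorrelated models
}
yhat <- bagged_lm_predict(X, y, X)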
17,043
Can Random Forest Methodology be Applied to Linear Regressions?
To put @ziggystar's response in terms of machine learning jargon: the idea behind bootstrap aggregation techniques (e.g. random forests) is to fit many low-bias, high-variance models to data with some element of "randomness" or "instability". In the case of random forests, instability is added through bootstrapping and by picking a random set of features to split each node of the tree. Averaging across these noisy, but low-bias, trees alleviates the high variance of any individual tree. While regression/classification trees are "low-bias, high-variance" models, linear regression models are typically the opposite: "high-bias, low-variance". Thus, the problem one often faces with linear models is reducing bias, not reducing variance, and bootstrap aggregation is simply not made to do this. An additional problem is that bootstrapping may not provide enough "randomness" or "instability" in a typical linear model. I would expect a regression tree to be more sensitive to the randomness of bootstrap samples, since each leaf typically only holds a handful of data points. Additionally, regression trees can be stochastically grown by splitting the tree on a random subset of variables at each node. See this previous question for why this is important: Why are Random Forests splitted based on m random features? All that being said, you can certainly use bootstrapping on linear models [LINK], and this can be very helpful in certain contexts. However, the motivation is much different from bootstrap aggregation techniques.
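A rough sketch of the instability gap mentioned above (my own illustration, using the rpart package that ships with R): compare how much a tree's prediction at one fixed point wobbles across bootstrap samples versus an OLS fit's prediction.

library(rpart)
set.seed(1)
n <- 200
x <- runif(n); y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
newx <- data.frame(x = 0.5)
boot_preds <- function(fit_and_predict) replicate(500, {
  d <- data.frame(x, y)[sample(n, replace = TRUE), ]  # bootstrap sample
  fit_and_predict(d)
})
tree_var <- var(boot_preds(function(d) predict(rpart(y ~ x, d), newx)))
ols_var  <- var(boot_preds(function(d) predict(lm(y ~ x, d), newx)))
c(tree = tree_var, ols = ols_var)  # the tree's variance is usually noticeably larger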
17,044
Can Random Forest Methodology be Applied to Linear Regressions?
I suppose that, if you were to do this with $k$ going to infinity, you would obtain the same linear regression model you get by doing ordinary linear regression on the full sample. Just notice that the average of $k$ structurally equal linear models is again a structurally equal linear model, simply with the parameters averaged (use the distributive law). But I didn't do the math and I'm not completely sure. And here is why it isn't as attractive to do the "random" thing with linear models as it is with decision trees: a large decision tree created from a large sample is very likely to overfit the data, and the random forest method fights this effect by relying on a vote of many small trees. Linear regression, on the other hand, is a model that is not very prone to overfitting and thus isn't hurt by being trained on the complete sample from the beginning. And even if you have many regressor variables, you can apply other techniques, such as regularization, to combat overfitting.
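A quick simulation supports this intuition (a sketch of my own; the bagged coefficients come out almost identical to the full-sample OLS fit):

set.seed(1)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
ols <- coef(lm(y ~ x))
bagged <- rowMeans(replicate(5000, {
  idx <- sample(n, replace = TRUE)   # one bootstrap sample
  coef(lm(y[idx] ~ x[idx]))
}))
rbind(ols, bagged)  # nearly identical intercept and slope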
17,045
Can Random Forest Methodology be Applied to Linear Regressions?
I agree with @ziggystar. As the number of bootstrap samples $k$ goes to infinity, the bagged estimate of the linear model converges to the OLS (Ordinary Least Squares) estimate of the linear model run on the whole sample. The way to prove this is to see that the bootstrap "pretends" that the population distribution is the same as the empirical distribution. As you sample more and more data sets from this empirical distribution, the average of the estimated hyperplanes will converge to the "true hyperplane" (which is the OLS estimate on the whole data) by the asymptotic properties of Ordinary Least Squares. Also, bagging is not always a good thing. Not only does it not fight bias, it may increase the bias in some peculiar cases. Example: $$ X_1, X_2, \dots, X_n \sim Be(p) $$ (Bernoulli trials which take value 1 with probability $p$ and value 0 with probability $1-p$). Further, let us define the parameter $$ \theta = 1_{\{p > 0\}} $$ and try to estimate it. Naturally, it suffices to see a single data point $X_i = 1$ to know that $\theta = 1$. The whole sample may contain such a data point and allow us to estimate $\theta$ without any error. On the other hand, a bootstrap sample may not contain such a data point, leading us to wrongly estimate $\theta$ as 0 (we adopt no Bayesian framework here, just the good old method of maximum likelihood). In other words, $$ {\rm Bias}_{\rm\ bagging} = {\rm Prob(in\ a\ bootstrap\ sample\ X_{(1)} = \dots = X_{(n)} = 0)} > 0, $$ conditional on $\theta = 1$.
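A tiny numerical illustration of this bias (my own sketch; the sample is fixed by hand so that it contains exactly one success):

x <- c(1, rep(0, 29))   # n = 30, one success: the full-sample MLE of theta is 1
(29 / 30)^30            # ≈ 0.362: the chance a bootstrap sample contains no success at all
set.seed(1)
mean(replicate(1e5, sum(sample(x, replace = TRUE)) == 0))  # Monte Carlo check, ≈ 0.36

So, conditional on $\theta = 1$, each bagged component wrongly estimates $\theta$ as 0 with probability about 0.36 in this example.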
17,046
Size of a test and level of significance
Suppose you have a random sample $X_1,\dots,X_n$ from a distribution that involves a parameter $\theta$ which assumes values in a parameter space $\Theta$. You partition the parameter space as $\Theta=\Theta_0\cup\Theta_1$, and you want to test the hypotheses $$ H_0 : \theta \in \Theta_0 \, , $$ $$ H_1 : \theta \in \Theta_1 \, , $$ which are called the null and alternative hypotheses, respectively.

Let $\mathscr{X}$ denote the sample space of all possible values of the random vector $X=(X_1,\dots,X_n)$. Your goal in building a test procedure is to partition this sample space $\mathscr{X}$ into two pieces: the critical region $\mathscr{C}$, containing the values of $X$ for which you will reject the null hypothesis $H_0$ (and, so, accept the alternative $H_1$), and the acceptance region $\mathscr{A}$, containing the values of $X$ for which you will not reject the null hypothesis $H_0$ (and, therefore, reject the alternative $H_1$).

Formally, a test procedure can be described as a measurable function $\varphi:\mathscr{X}\to\{0,1\}$, with the obvious interpretation in terms of the decisions made in favor of each of the hypotheses. The critical region is $\mathscr{C}=\varphi^{-1}(\{1\})$, and the acceptance region is $\mathscr{A}=\varphi^{-1}(\{0\})$.

For each test procedure $\varphi$, we define its power function $\pi_\varphi:\Theta\to[0,1]$ by $$ \pi_\varphi(\theta) = \Pr(\varphi(X)=1\mid\theta) = \Pr(X\in\mathscr{C}\mid\theta) \, . $$ In words, $\pi_\varphi(\theta)$ gives you the probability of rejecting $H_0$ when the parameter value is $\theta$.

The decision to reject $H_0$ when $\theta\in\Theta_0$ is wrong. So, for a given problem, you may want to consider only those test procedures $\varphi$ for which $\pi_\varphi(\theta)\leq\alpha$, for every $\theta\in\Theta_0$, in which $\alpha$ is some significance level ($0<\alpha<1$). Note that the significance level is a property of a class of test procedures. We can describe this class precisely as $$ \mathscr{T}_{\alpha} = \left\{ \varphi\in\{0,1\}^\mathscr{X} : \pi_\varphi(\theta)\leq\alpha, \textrm{for every}\; \theta\in\Theta_0\right\} \, . $$

For each individual test procedure $\varphi$, the maximum probability $\alpha_\varphi=\sup_{\theta\in\Theta_0}\pi_\varphi(\theta)$ of wrongly rejecting $H_0$ is called the size of the test procedure $\varphi$. It follows directly from these definitions that, once we have established a significance level $\alpha$, and therefore determined the class $\mathscr{T}_{\alpha}$ of acceptable test procedures, each test procedure $\varphi$ within this class will have size $\alpha_\varphi\leq\alpha$, and conversely. Concisely, $\varphi\in\mathscr{T}_{\alpha}$ if and only if $\alpha_\varphi\leq\alpha$.
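As a concrete illustration (my own example, not from the text above): let $X\sim\textrm{Binom}(20,\theta)$ with $\Theta_0=[0,0.5]$ and $\Theta_1=(0.5,1]$, and take the test that rejects when $X\geq 15$. Its size can be computed directly in R:

pow <- function(theta) 1 - pbinom(14, 20, theta)  # power function
theta0 <- seq(0, 0.5, by = 0.001)                 # a grid over Theta_0
max(pow(theta0))   # size ≈ 0.0207, attained at theta = 0.5

Since the power is increasing in $\theta$, the supremum over $\Theta_0$ sits at $\theta=0.5$, so this test belongs to $\mathscr{T}_\alpha$ for every significance level $\alpha\geq 0.0207$.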
17,047
Why is the Mann–Whitney U test significant when the medians are equal?
FAQ: Why is the Mann-Whitney significant when the medians are equal?
17,048
Why is the Mann–Whitney U test significant when the medians are equal?
Here is a graph that shows the same point the FAQ Bernd linked to explains in detail. The two groups have equal medians but very different distributions. The P value from the Mann-Whitney test is tiny (0.0288), demonstrating that it doesn't really compare medians.
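The data behind that figure aren't reproduced here, but one way to construct such an example (my own sketch, not the original data) is to take an exponential sample and its mirror image around the common median $\log 2$: the two population medians are identical, but the shapes differ, and with these sample sizes the test will usually (though not for every seed) reject.

set.seed(1)
g1 <- rexp(100)               # right-skewed, population median log(2)
g2 <- 2 * log(2) - rexp(100)  # left-skewed mirror image, same population median
wilcox.test(g1, g2)           # typically a small p-value despite equal medians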
17,049
How can the square of an asymptotically normal variable also be asympotically normal?
There seems to be some confusion about what the delta method really says. It is fundamentally a statement about the asymptotic distribution of a function of an asymptotically normal estimator. In your examples, the functions are defined on $X$, which as you note could follow any distribution! The classic Delta method concerns functions of an estimator that is asymptotically normal (which, in the case of the sample mean, is ensured by the CLT for any $X$ that satisfies the assumptions of the CLT). So one example could be $f(X_n) = X_n^2 = \bigg(\frac{1}{n}\sum_i X_i\bigg)^2$. The Delta method says that if $X_n$ is asymptotically normal with mean $\theta$, then $f(X_n)$ is also asymptotically normal, with mean $f(\theta)$ and variance scaled by $f'(\theta)^2$.

To explicitly answer your scenario where $g(X_n) = X_n^2$: the point is that $g(X_n)$ is not chi square. Suppose we draw $X_i$ iid from some distribution, and suppose that $Var(X_i) = 1$. Let's consider the sequence $\{g(X_n)\}_n$, where $g(X_n) = X_n^2 = \bigg(\frac{1}{n}\sum_i X_i\bigg)^2$. By the CLT, we have that $\sqrt{n}(X_n - \mu) \xrightarrow{d} N(0,1)$ (or, in your post, you just automatically get that distribution without needing to appeal to the CLT). But $X_n^2$ is not chi square, because $X_n$ is not standard normal. Instead, $\sqrt{n}(X_n - \mu)$ is standard normal (either by assumption on the distribution of $X_n$ or by the CLT), and accordingly we have $$\big(\sqrt{n}(X_n - \mu)\big)^2 \xrightarrow{d} \chi^2_1$$ But you're not interested in the distribution of that quantity. You're interested in the distribution of $X_n^2$.

For the sake of exploring, we can work out the distribution of $X_n^2$. If $Z\sim N(\mu,\sigma^2)$, then $\frac{Z^2}{\sigma^2}$ has a non-central chi square distribution with one degree of freedom and non-centrality parameter $\lambda = (\frac{\mu}{\sigma})^2$. In your case (either by your assumption or by the CLT), we have $\sigma^2 = 1/n$, so $nX_n^2$ follows a non-central chi square distribution with $\lambda = \mu^2 n$, and $\lambda \to \infty$ as $n\to\infty$. I won't go through the proof, but if you check the wiki page I linked on non-central chi square distributions, under Related Distributions, you'll note that for $Z$ non-central chi square with $k$ degrees of freedom and non-centrality parameter $\lambda$, as $\lambda \to \infty$ we have $$\frac{Z - (k+\lambda)}{\sqrt{2(k+2\lambda)}} \xrightarrow{d} N(0,1) $$ In our case, $Z = nX_n^2$, $\lambda = \mu^2n$, $k = 1$, and so as $n$ goes to infinity $$\frac{nX_n^2 - (1+\mu^2n)}{\sqrt{2(1+2\mu^2n)}} = \frac{n(X_n^2 - \mu^2) - 1}{\sqrt{2+4\mu^2n}} \xrightarrow{d} N(0,1)$$ I won't be formal, but since $n$ is getting arbitrarily large, it's clear that $$\frac{n(X_n^2 - \mu^2) - 1}{\sqrt{2+4\mu^2n}} \approx \frac{n(X_n^2 - \mu^2)}{2\mu\sqrt{n}} = \frac{1}{2\mu}\sqrt{n}(X_n^2 - \mu^2)\xrightarrow{d} N(0,1) $$ and using normal properties, we thus have $$\sqrt{n}(X_n^2 - \mu^2)\xrightarrow{d} N(0,4\mu^2) $$ Seems pretty nice!

And what does Delta tell us again? Well, by Delta, we should have that for $g(\mu) = \mu^2$, $$\sqrt{n}(X_n^2 - \mu^2)\xrightarrow{d} N(0,\sigma^2 g'(\mu)^2) = N(0,(2\mu)^2) = N(0,4\mu^2)$$ Sweet! But all those steps were kind of a pain to do... luckily, the univariate proof of the delta method just approximates all this using a first-order Taylor expansion, as on the wiki page for Delta, and it's just a few steps after that.
From that proof, you can see that all you really need is for the estimator of $\theta$ to be asymptotically normal and for $f'(\theta)$ to be well-defined and non-zero. In the case where it is zero, you can try taking higher-order Taylor expansions, so you may still be able to recover an asymptotic distribution.
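A quick Monte Carlo check of the limit $\sqrt{n}(X_n^2-\mu^2)\xrightarrow{d} N(0,4\mu^2)$ (my own sketch, with $\mu=2$ and $\sigma=1$, so the limiting standard deviation is $2\mu=4$):

set.seed(1)
mu <- 2; n <- 1000
z <- replicate(5000, {
  x <- rnorm(n, mu, 1)
  sqrt(n) * (mean(x)^2 - mu^2)
})
c(mean(z), sd(z))  # mean ≈ 0, sd ≈ 2 * mu = 4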
17,050
How can the square of an asymptotically normal variable also be asympotically normal?
The Delta method says $$\sqrt{n}(g(X_n)-g(\mu))\stackrel{d}{\to} N(0, g'(\mu)^2)$$ In your $g(x)=x^2$ example, there are two cases.

First, the degenerate case, when $\mu=0$ and thus $g'(\mu)=0$. The Delta method is correct if you interpret $N(0,0)$ as a point mass at zero: $$\sqrt{n}(g(X_n)-g(\mu))\stackrel{p}{\to} 0$$ So, while $X_n^2$ is asymptotically $\chi^2_1$, it is so after scaling by $n$ rather than $\sqrt{n}$: $$nX_n^2\stackrel{d}{\to}\chi^2_1$$ and $$\sqrt{n}X_n^2\stackrel{d}{\to} 0.$$

Second, the non-degenerate case really does give a Normal. Suppose you had $X_n\sim N(1,1/\sqrt{n})$, giving $\mu=1$. Write $Z_n=X_n-1$. Then $$X_n^2 = Z_n^2+2Z_n+1.$$ The $2Z_n$ term is Normal, and the $Z_n^2$ term is of order $1/n$, so it disappears when multiplied by $\sqrt{n}$. You have $$\sqrt{n}\left(X_n^2-1\right)= \sqrt{n}\left(Z_n^2+2Z_n\right)=\sqrt{n}Z_n^2+2\sqrt{n}Z_n$$ Now, just as in the first case, $\sqrt{n}Z_n^2\stackrel{d}{\to} 0$, and $2\sqrt{n}Z_n\stackrel{d}{\to} N(0,2^2)$. Combining those, $$\sqrt{n}\left(g(X_n)-g(\mu)\right)\stackrel{d}{\to}N(0, g'(\mu)^2)$$ as required. That's basically what happens in all the non-degenerate cases: the term of highest order is Normal, and the non-Normal terms are asymptotically negligible.

Third, trying to do this with $1/X_n$ for $X_n\sim N(0,1/\sqrt{n})$ fails because $g(x)=1/x$ does not have a continuous derivative at $\mu=0$ (which is the other key assumption of the delta method). For $X_n\sim N(\mu,1/\sqrt{n})$ with $\mu\neq 0$ you end up with the same sort of argument as the one for $g(x)=x^2$. By Taylor's theorem $$1/X_n=1/\mu - \frac{1}{\mu^2}(X_n-\mu) + r_n$$ so $$\sqrt{n}(1/X_n -1/\mu)=-\sqrt{n}\frac{1}{\mu^2}(X_n-\mu)+\sqrt{n}r_n$$ Now $r_n$ involves $(X_n-\mu)^2$, so $\sqrt{n}r_n\stackrel{d}{\to} 0$, in the same way as in the first example, and $$-\sqrt{n}\frac{1}{\mu^2}(X_n-\mu)\sim N(0, 1/\mu^4)$$ So, $$\sqrt{n}(1/X_n -1/\mu)\stackrel{d}{\to}N(0, g'(\mu)^2)$$ as required.
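The degenerate case is easy to check by simulation as well (my own sketch): with $\mu=0$, $nX_n^2$ behaves like $\chi^2_1$ while $\sqrt{n}X_n^2$ collapses to zero.

set.seed(1)
n <- 1000
z <- replicate(5000, n * mean(rnorm(n))^2)  # n * X_n^2 with mu = 0
c(mean(z), var(z))   # ≈ 1 and ≈ 2, the chi-squared(1) moments
summary(z / sqrt(n)) # sqrt(n) * X_n^2 is already essentially 0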
17,051
How can the square of an asymptotically normal variable also be asympotically normal?
A similar issue occurred in this question: Implicit hypothesis testing: mean greater than variance and Delta Method. The idea behind the delta method is that it is a linear approximation which becomes more and more accurate as the sample size increases. But this is only true when you are actually on a slope of the function $g(X)$. In your counterexample $g(X)=X^2$, the slope is zero around the mean when $\mu_X=0$, so this is indeed not the case. The images in the answer linked below illustrate this (note that the distribution of the sample mean $X_n$ becomes more narrow as $n$ increases and the function $g(X)$ is effectively more linear or 'flat', a bit in the same way as the earth seems flat when you get closer to the surface and look at a smaller scale). See more about those images in the answer to the before-mentioned question: https://stats.stackexchange.com/a/441688
17,052
How can the square of an asymptotically normal variable also be asympotically normal?
Your $X_n^2$ does not have a chi-squared distribution because $X_n$ does not have a mean of $0$. $X_n^2$ instead has a scaled noncentral chi-squared distribution with mean $1+\frac1n$ and variance $\frac4n +\frac2{n^2}$ and so $Z_n =\sqrt{n}(X_n^2-1)$ has a relocated and scaled noncentral chi-squared distribution with mean $\frac1{\sqrt{n}}$ and variance $4 +\frac2{n}$ and standard deviation $\sqrt{4+\frac2n}$. As $n$ grows, these clearly converge on $0$ and $4$ and $2$, as predicted by the Delta method: if $g(x)=x^2$ then $g'(1)=2$. $Z_n$ does converge in distribution to the relevant normal distribution and you can prove this using characteristic functions. It may be more convincing to show the densities for $Z_n$ as $n$ increases, illustrated here when $n$ is $1$ (red), $5$ (blue), $25$ (green) and $125$ (pink), and compare it with the predicted limiting normal distribution in black. For small $n$ the approximation is poor, especially since $Z_n \ge -\sqrt{n}$ with probability $1$, but for large $n$ you can see the convergence in distribution.
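The stated mean and variance of $X_n^2$ are easy to verify numerically (my own sketch, taking $X_n$ to be the mean of $n$ draws from $N(1,1)$):

set.seed(1)
n <- 50
xbar <- replicate(1e5, mean(rnorm(n, 1, 1)))
c(mean(xbar^2), 1 + 1/n)     # both ≈ 1.02
c(var(xbar^2), 4/n + 2/n^2)  # both ≈ 0.0808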
17,053
Using glm() as substitute for simple chi square test
You can use an offset: glm with family="binomial" estimates parameters on the log-odds or logit scale, so $\beta_0=0$ corresponds to log-odds of 0 or a probability of 0.5. If you want to compare against a probability of $p$, you want the baseline value to be $q = \textrm{logit}(p)=\log(p/(1-p))$. The statistical model is now $$\begin{split} Y & \sim \textrm{Binom}(\mu) \\ \mu & =1/(1+\exp(-\eta)) \\ \eta & = \beta_0 + q \end{split}$$ where only the last line has changed from the standard setup. In R code:

- use offset(q) in the formula
- the logit/log-odds function is qlogis(p)
- slightly annoyingly, you have to provide an offset value for each element in the response variable - R won't automatically replicate a constant value for you. This is done below by setting up a data frame, but you could just use rep(q, 100).

x <- rbinom(100, 1, .7)
dd <- data.frame(x, q = qlogis(0.7))
summary(glm(x ~ 1 + offset(q), data = dd, family = "binomial"))
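As a sanity check (my addition, not part of the original answer), the exact binomial test of the same null hypothesis can be run on the same data:

binom.test(sum(x), length(x), p = 0.7)  # exact test of H0: p = 0.7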
17,054
Using glm() as substitute for simple chi square test
Look at the confidence interval for the parameters of your GLM:

> set.seed(1)
> x = rbinom(100, 1, .7)
> model <- glm(x ~ 1, family = "binomial")
> confint(model)
Waiting for profiling to be done...
    2.5 %    97.5 %
0.3426412 1.1862042

This is a confidence interval for the log-odds. For $p=0.5$ we have $\log(\text{odds}) = \log \frac{p}{1-p} = \log 1 = 0$. So testing the hypothesis that $p=0.5$ is equivalent to checking whether the confidence interval contains 0. This one does not, so the hypothesis is rejected. Now, for any arbitrary $p$, you can compute the log-odds and check whether it lies inside the confidence interval.
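For example, to test $H_0: p = 0.7$ with the interval above (a quick follow-up of my own):

qlogis(0.7)  # 0.8472979, inside (0.343, 1.186), so H0: p = 0.7 is not rejected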
17,055
Using glm() as substitute for simple chi square test
It is not (entirely) correct/accurate to use the p-values based on the z-/t-values from the glm summary function as a hypothesis test. The language is confusing: the reported values are named z-values, but they use the estimated standard error in place of the true deviation, so in reality they are closer to t-values. Compare the following three outputs: 1) summary.glm, 2) a manual t-test, 3) a manual z-test.

> set.seed(1)
> x = rbinom(100, 1, .7)
> coef1 <- summary(glm(x ~ 1, offset = rep(qlogis(0.7), length(x)), family = "binomial"))$coefficients
> coef2 <- summary(glm(x ~ 1, family = "binomial"))$coefficients
> coef1[4]  # output from summary.glm
[1] 0.6626359
> 2*pt(-abs((qlogis(0.7) - coef2[1]) / coef2[2]), 99, ncp = 0)  # manual t-test
[1] 0.6635858
> 2*pnorm(-abs((qlogis(0.7) - coef2[1]) / coef2[2]), 0, 1)  # manual z-test
[1] 0.6626359

They are not exact p-values. An exact computation of the p-value using the binomial distribution would work better (with the computing power nowadays, this is not a problem). The t-distribution, assuming a Gaussian distribution of the error, is not exact: it overestimates p, so exceeding the alpha level occurs less often in "reality". See the following comparison:

# trying all 101 possible outcomes if the true value is p=0.7
px <- dbinom(0:100, 100, 0.7)
p_model = rep(0, 101)
for (i in 0:100) {
  xi = c(rep(1, i), rep(0, 100 - i))
  model = glm(xi ~ 1, offset = rep(qlogis(0.7), 100), family = "binomial")
  p_model[i + 1] = 1 - summary(model)$coefficients[4]
}

# plotting cumulative distribution of outcomes
outcomes <- p_model[order(p_model)]
cdf <- cumsum(px[order(p_model)])
plot(1 - outcomes, 1 - cdf,
     ylab = "cumulative probability", xlab = "calculated glm p-value",
     xlim = c(10^-4, 1), ylim = c(10^-4, 1), col = 2, cex = 0.5, log = "xy")
lines(c(0.00001, 1), c(0.00001, 1))
for (i in 1:100) {
  lines(1 - c(outcomes[i], outcomes[i + 1]), 1 - c(cdf[i + 1], cdf[i + 1]), col = 2)
  # lines(1 - c(outcomes[i], outcomes[i]), 1 - c(cdf[i], cdf[i + 1]), col = 2)
}
title("probability for rejection as function of set alpha level")

The black curve represents equality. The red curve lies below it: for a given p-value calculated by the glm summary function, we find this situation (or a larger difference) less often in reality than the p-value indicates.
17,056
Prediction interval for binomial random variable
Ok, let's try this. I'll give two answers - the Bayesian one, which is in my opinion simple and natural, and one of the possible frequentist ones.

Bayesian solution

We assume a Beta prior on $p$, i.e., $p \sim Beta(\alpha,\beta)$, because the Beta-Binomial model is conjugate, which means that the posterior distribution is also a Beta distribution with parameters $\hat{\alpha}=\alpha+k,\hat{\beta}=\beta+n-k$ (I'm using $k$ to denote the number of successes in $n$ trials, instead of $y$). Thus, inference is greatly simplified. Now, if you have some prior knowledge on the likely values of $p$, you could use it to set the values of $\alpha$ and $\beta$, i.e., to define your Beta prior; otherwise you could assume a uniform (noninformative) prior, with $\alpha=\beta=1$, or other noninformative priors (see for example here). In any case, your posterior is

$Pr(p|n,k)=Beta(\alpha+k,\beta+n-k)$

In Bayesian inference, all that matters is the posterior probability, meaning that once you know that, you can make inferences for all other quantities in your model. You want to make inference on the observables $y$: in particular, on a vector of new results $\mathbf{y}=y_1,\dots,y_m$, where $m$ is not necessarily equal to $n$. Specifically, for each $j=0,\dots,m$, we want to compute the probability of having exactly $j$ successes in the next $m$ trials, given that we got $k$ successes in the preceding $n$ trials; this is the posterior predictive mass function:

$Pr(j|m,y)=Pr(j|m,n,k)=\int_0^1 Pr(j,p|m,n,k)dp = \int_0^1 Pr(j|p,m,n,k)Pr(p|n,k)dp$

However, our Binomial model for $Y$ means that, conditionally on $p$ having a certain value, the probability of having $j$ successes in $m$ trials doesn't depend on past results: it's simply

$f(j|m,p)=\binom{m}{j} p^j(1-p)^{m-j}$

Thus the expression becomes

$Pr(j|m,n,k)=\int_0^1 \binom{m}{j} p^j(1-p)^{m-j} Pr(p|n,k)dp=\int_0^1 \binom{m}{j} p^j(1-p)^{m-j} Beta(\alpha+k,\beta+n-k)dp$

The result of this integral is a well-known distribution called the Beta-Binomial distribution: skipping the passages, we get the horrible expression

$Pr(j|m,n,k)=\frac{m!}{j!(m-j)!}\frac{\Gamma(\alpha+\beta+n)}{\Gamma(\alpha+k)\Gamma(\beta+n-k)}\frac{\Gamma(\alpha+k+j)\Gamma(\beta+n+m-k-j)}{\Gamma(\alpha+\beta+n+m)}$

Our point estimate for $j$, given quadratic loss, is of course the mean of this distribution, i.e.,

$\mu=\frac{m(\alpha+k)}{(\alpha+\beta+n)}$

Now, let's look for a prediction interval. Since this is a discrete distribution, we don't have a closed form expression for $[j_1,j_2]$ such that $Pr(j_1\leq j \leq j_2)= 0.95$. The reason is that, depending on how you define a quantile, for a discrete distribution the quantile function is either not a function or is a discontinuous function. But this is not a big problem: for small $m$, you can just write down the $m$ probabilities $Pr(j=0|m,n,k),Pr(j\leq 1|m,n,k),\dots,Pr(j \leq m-1|m,n,k)$ and from here find $j_1,j_2$ such that

$Pr(j_1\leq j \leq j_2)=Pr(j\leq j_2|m,n,k)-Pr(j < j_1|m,n,k)\geq 0.95$

Of course you would find more than one such pair, so ideally you would look for the smallest $[j_1,j_2]$ such that the above is satisfied. Note that $Pr(j=0|m,n,k)=p_0,Pr(j\leq 1|m,n,k)=p_1,\dots,Pr(j \leq m-1|m,n,k)=p_{m-1}$ are just the values of the CMF (Cumulative Mass Function) of the Beta-Binomial distribution, and as such there is a closed form expression, but it is in terms of the generalized hypergeometric function and thus quite complicated.
I'd rather just install the R package extraDistr and call pbbinom to compute the CMF of the Beta-Binomial distribution. Specifically, if you want to compute all the probabilities $p_0,\dots,p_{m-1}$ in one go, just write:

library(extraDistr)
jvec <- seq(0, m - 1, by = 1)
probs <- pbbinom(jvec, m, alpha = alpha + k, beta = beta + n - k)

where alpha and beta are the values of the parameters of your Beta prior, i.e., $\alpha$ and $\beta$ (thus 1 if you're using a uniform prior over $p$). Of course it would all be much simpler if R provided a quantile function for the Beta-Binomial distribution, but unfortunately it doesn't.

Practical example with the Bayesian solution

Let $n=100$, $k=70$ (thus we initially observed 70 successes in 100 trials). We want a point estimate and a 95%-prediction interval for the number of successes $j$ in the next $m=20$ trials. Then

n <- 100
k <- 70
m <- 20
alpha <- 1
beta <- 1

where I assumed a uniform prior on $p$: depending on the prior knowledge for your specific application, this may or may not be a good prior. Thus

bayesian_point_estimate <- m * (alpha + k) / (alpha + beta + n)  # 13.92157

Clearly a non-integer estimate for $j$ doesn't make sense, so we could just round to the nearest integer (14). Then, for the prediction interval:

jvec <- seq(0, m - 1, by = 1)
library(extraDistr)
probabilities <- pbbinom(jvec, m, alpha = alpha + k, beta = beta + n - k)

The probabilities are

> probabilities
 [1] 1.335244e-09 3.925617e-08 5.686014e-07 5.398876e-06
 [5] 3.772061e-05 2.063557e-04 9.183707e-04 3.410423e-03
 [9] 1.075618e-02 2.917888e-02 6.872028e-02 1.415124e-01
[13] 2.563000e-01 4.105894e-01 5.857286e-01 7.511380e-01
[17] 8.781487e-01 9.546188e-01 9.886056e-01 9.985556e-01

For an equal-tail probabilities interval, we want the smallest $j_2$ such that $Pr(j\leq j_2|m,n,k)\ge 0.975$ and the largest $j_1$ such that $Pr(j < j_1|m,n,k)=Pr(j \le j_1-1|m,n,k)\le 0.025$. This way, we will have

$Pr(j_1\leq j \leq j_2|m,n,k)=Pr(j\leq j_2|m,n,k)-Pr(j < j_1|m,n,k)\ge 0.975-0.025=0.95$

Thus, looking at the above probabilities, we see that $j_2=18$ and $j_1=9$. The probability of this Bayesian prediction interval is 0.9778494, which is larger than 0.95. We could find shorter intervals such that $Pr(j_1\leq j \leq j_2|m,n,k)\ge 0.95$, but in that case at least one of the two inequalities on the tail probabilities wouldn't be satisfied.

Frequentist solution

I'll follow the treatment of Krishnamoorthy and Peng, 2011. Let $Y\sim Binom(m,p)$ and $X\sim Binom(n,p)$ be independently Binomially distributed. We want a $1-2\alpha$ prediction interval for $Y$, based on an observation of $X$. In other words, we look for $I=[L(X;n,m,\alpha),U(X;n,m,\alpha)]$ such that:

$Pr_{X,Y}(Y\in I)=Pr_{X,Y}(L(X;n,m,\alpha)\leq Y\leq U(X;n,m,\alpha))\geq 1-2\alpha$

The "$\geq 1-2\alpha$" is due to the fact that we are dealing with a discrete random variable, and thus we cannot expect to get exact coverage... but we can look for an interval which always has at least the nominal coverage, thus a conservative interval. Now, it can be proved that the conditional distribution of $X$ given $X+Y=k+j=s$ is hypergeometric with sample size $s$, number of successes in the population $n$ and population size $n+m$.
Thus the conditional pmf is

$Pr(X=k|X+Y=s,n,n+m)=\frac{\binom{n}{k}\binom{m}{s-k}}{\binom{m+n}{s}}$

The conditional CDF of $X$ given $X+Y=s$ is thus

$Pr(X\leq k|s,n,n+m)=H(k;s,n,n+m)=\sum_{i=0}^k\frac{\binom{n}{i}\binom{m}{s-i}}{\binom{m+n}{s}}$

The first great thing about this CDF is that it doesn't depend on $p$, which we don't know. The second great thing is that it allows us to easily find our PI: as a matter of fact, if we observed a value $k$ of $X$, then the $1-\alpha$ lower prediction limit is the smallest integer $L$ such that

$Pr(X\geq k|k+L,n,n+m)=1-H(k-1;k+L,n,n+m)>\alpha$

Correspondingly, the $1-\alpha$ upper prediction limit is the largest integer $U$ such that

$Pr(X\leq k|k+U,n,n+m)=H(k;k+U,n,n+m)>\alpha$

Thus, $[L,U]$ is a prediction interval for $Y$ of coverage at least $1-2\alpha$. Note that when $p$ is close to 0 or 1, this interval is conservative even for large $n$, $m$, i.e., its coverage can be quite a bit larger than $1-2\alpha$.

Practical example with the Frequentist solution

Same setting as before, but we don't need to specify $\alpha$ and $\beta$ (there are no priors in the Frequentist framework):

n <- 100
k <- 70
m <- 20

The point estimate is now obtained using the MLE for the probability of success, $\hat{p}=\frac{k}{n}$, which in turn leads to the following estimate for the number of successes in $m$ trials:

frequentist_point_estimate <- m * k / n  # 14

For the prediction interval, the procedure is a bit different. We look for the largest $U$ such that $Pr(X\leq k|k+U,n,n+m)=H(k;k+U,n,n+m)>\alpha$, thus let's compute the above expression for all $U$ in $[0,m]$:

jvec <- seq(0, m, by = 1)
probabilities <- phyper(k, n, m, k + jvec)

We can see that the largest $U$ such that the probability is still larger than 0.025 is

jvec[which.min(probabilities > 0.025) - 1]  # 18

Same as for the Bayesian approach. The lower prediction bound $L$ is the smallest integer such that $Pr(X\geq k|k+L,n,n+m)=1-H(k-1;k+L,n,n+m)>\alpha$, thus

probabilities <- 1 - phyper(k - 1, n, m, k + jvec)
jvec[which.max(probabilities > 0.025) - 1]  # 8

Thus our frequentist "exact" prediction interval is $[L,U]=[8,18]$.
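For convenience, the frequentist search can be wrapped in a small helper (a hypothetical wrapper of my own, mirroring exactly the indexing used above):

binom_pred_int <- function(k, n, m, alpha = 0.025) {
  jvec <- seq(0, m, by = 1)
  pU <- phyper(k, n, m, k + jvec)          # Pr(X <= k | s = k + U)
  pL <- 1 - phyper(k - 1, n, m, k + jvec)  # Pr(X >= k | s = k + L)
  c(L = jvec[which.max(pL > alpha) - 1],
    U = jvec[which.min(pU > alpha) - 1])
}
binom_pred_int(70, 100, 20)  # c(L = 8, U = 18), as in the worked example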
17,057
Likelihood Ratio vs Wald test
It's important to note that although the likelihood ratio test and the Wald test are used by researchers to accomplish the same empirical goal(s), they are testing different hypotheses. The likelihood ratio test evaluates whether the data were likely to have come from a more complex model vs. a more simple model. Put another way: does the addition of a particular effect allow the model to account for more information? The Wald test, conversely, evaluates whether it is likely that the estimated effect could be zero. It's a nuanced difference, to be sure, but an important conceptual difference nonetheless.

Agresti (2007) contrasts likelihood ratio testing, Wald testing, and a third method called the "score test" (he hardly elaborates on this test further). From his book (p. 13):

When the sample size is small to moderate, the Wald test is the least reliable of the three tests. We should not trust it for such a small n as in this example (n = 10). Likelihood-ratio inference and score-test based inference are better in terms of actual error probabilities coming close to matching nominal levels. A marked divergence in the values of the three statistics indicates that the distribution of the ML estimator may be far from normality. In that case, small-sample methods are more appropriate than large-sample methods.

Looking at your data and output, it seems that you do indeed have a relatively small sample, and therefore may want to place greater stock in the likelihood ratio test results vs. the Wald test results.

References

Agresti, A. (2007). An introduction to categorical data analysis (2nd edition). Hoboken, NJ: John Wiley & Sons.
17,058
Likelihood Ratio vs Wald test
First, I disagree somewhat with jsakaluk's answer that the two tests are testing different things - they are both testing whether the coefficient in the larger model is zero. They are just testing this hypothesis by making different approximations (see the article linked to below). Regarding the differences between their results, as jsakaluk said, this is likely due to the small sample size, i.e., to the log-likelihood being far from quadratic. I wrote a blog post back in 2014 which goes through this for a simple binomial model, which may help further: http://thestatsgeek.com/2014/02/08/wald-vs-likelihood-ratio-test/
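To make the "different approximations" point concrete, here is a small illustrative sketch (simulated data of my own choosing, not from the original question) that computes both p-values for one coefficient in a small-sample logistic regression; note that with samples this small, glm may warn about fitted probabilities of 0 or 1:

    set.seed(42)
    n <- 25                                 # deliberately small sample
    x <- rnorm(n)
    y <- rbinom(n, 1, plogis(0.5 + 1.5 * x))
    fit0 <- glm(y ~ 1, family = binomial)   # null model
    fit1 <- glm(y ~ x, family = binomial)   # model with the effect

    # Wald test: quadratic approximation to the log-likelihood at the MLE
    summary(fit1)$coefficients["x", ]

    # Likelihood ratio test: compares the two maximised log-likelihoods
    anova(fit0, fit1, test = "LRT")

In small samples the two p-values can disagree noticeably; as the sample grows they converge.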
17,059
Likelihood Ratio vs Wald test
The two tests are asymptotically equivalent. Of course, their performance (size and power) in finite samples can differ. The best you can do to understand the difference is to run a Monte Carlo study for a setting similar to yours.
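For instance, a minimal sketch of such a Monte Carlo study (an assumed setting of my own: logistic regression under the null with $n = 30$) could compare the empirical size of the two tests:

    set.seed(1)
    n <- 30; nsim <- 2000
    reject <- replicate(nsim, {
      x <- rnorm(n)
      y <- rbinom(n, 1, 0.5)               # null is true: x has no effect
      fit0 <- glm(y ~ 1, family = binomial)
      fit1 <- glm(y ~ x, family = binomial)
      p_wald <- summary(fit1)$coefficients["x", "Pr(>|z|)"]
      p_lrt  <- anova(fit0, fit1, test = "LRT")[2, "Pr(>Chi)"]
      c(wald = p_wald < 0.05, lrt = p_lrt < 0.05)
    })
    rowMeans(reject)   # empirical size of each test; nominal level is 0.05

Power comparisons work the same way, generating $y$ under an alternative instead.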
17,060
What is the connection between Markov chain and Markov chain Monte Carlo
The draws from MCMC form a Markov chain. From Gelman, Bayesian Data Analysis (3rd ed), p. 265:

Markov chain simulation (also called Markov chain Monte Carlo or MCMC) is a general method based on drawing values of $\theta$ from appropriate distributions and then correcting those draws to better approximate the target posterior distribution, $p(\theta|y)$. The sampling is done sequentially, with the distribution of the sampled draws depending on the last value drawn; hence, the draws form a Markov chain.
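A minimal random-walk Metropolis sketch (my own illustration, targeting a standard normal) makes the quoted point explicit: each new draw is generated from, and depends only on, the last value drawn.

    set.seed(123)
    T <- 1e4
    theta <- numeric(T)                 # the chain of draws
    for (t in 2:T) {
      prop <- theta[t - 1] + rnorm(1)   # proposal depends only on last draw
      # accept with probability min(1, target(prop) / target(current))
      if (runif(1) < dnorm(prop) / dnorm(theta[t - 1])) {
        theta[t] <- prop
      } else {
        theta[t] <- theta[t - 1]        # otherwise stay put
      }
    }
    c(mean(theta), sd(theta))           # roughly 0 and 1

The sequence theta is exactly the Markov chain the quote describes, and its draws approximate the target distribution.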
17,061
What is the connection between Markov chain and Markov chain Monte Carlo
The connection between both concepts is that Markov chain Monte Carlo (aka MCMC) methods rely on Markov chain theory to produce simulations and Monte Carlo approximations from a complex target distribution $\pi$.

In practice, these simulation methods output a sequence $X_1,\ldots,X_N$ that is a Markov chain, i.e., such that the distribution of $X_i$ given the whole past $\{X_{i-1},\ldots,X_1\}$ only depends on $X_{i-1}$. In other words, $$X_i=f(X_{i-1},\epsilon_i)$$ where $f$ is a function specified by the algorithm and the target distribution $\pi$, and the $\epsilon_i$'s are iid. The (ergodic) theory guarantees that $X_i$ converges (in distribution) to $\pi$ as $i$ goes to $\infty$.

The easiest example of an MCMC algorithm is the slice sampler: at iteration $i$ of this algorithm, do

1. simulate $\epsilon^1_i\sim\mathrm{U}(0,1)$
2. simulate $X_{i}\sim\mathrm{U}(\{x;\pi(x)\ge\epsilon^1_i\pi(X_{i-1})\})$ (which amounts to generating a second independent $\epsilon^2_i$)

For instance, if the target distribution is a normal $\mathrm{N}(0,1)$ [for which you obviously would not need MCMC in practice; this is a toy example!] the above translates as

1. simulate $\epsilon^1_i\sim\mathrm{U}(0,1)$
2. simulate $X_{i}\sim\mathrm{U}(\{x;x^2\le-2\log(\sqrt{2\pi}\epsilon^1_i\varphi(X_{i-1}))\})$, i.e., $X_i=\pm \epsilon_i^2\{-2\log(\sqrt{2\pi}\epsilon^1_i\varphi(X_{i-1}))\}^{1/2}$ with $\epsilon_i^2\sim\mathrm{U}(0,1)$

or in R

    T=1e4
    x=y=runif(T)                                   # random initial value
    for (t in 2:T){
      epsilon=runif(2)                             # uniform white noise
      y[t]=epsilon[1]*dnorm(x[t-1])                # vertical move
      x[t]=sample(c(-1,1),1)*epsilon[2]*sqrt(-2*   # Markov move from
        log(sqrt(2*pi)*y[t]))}                     # x[t-1] to x[t]

Here is a representation of the output, showing the right fit to the $\mathrm{N}(0,1)$ target and the evolution of the Markov chain $(X_i)$. And here is a zoom on the evolution of the Markov chain $(X_i,\epsilon^1_i\pi(X_i))$ over the last 100 iterations, obtained by

    curve(dnorm,-3,3,lwd=2,col="sienna",ylab="")
    for (t in (T-100):T){
      lines(rep(x[t-1],2),c(y[t-1],y[t]),col="steelblue")
      lines(x[(t-1):t],rep(y[t],2),col="steelblue")}

which follows the vertical and horizontal moves of the Markov chain under the target density curve.
17,062
Transforming Data: All variables or just the non-normal ones?
You quote several pieces of advice, all of which is no doubt intended helpfully, but it is difficult to find much merit in any of it. In each case I rely totally on what you cite as a summary. In the authors' defence I would like to believe that they add appropriate qualifications in surrounding or other material. (Full bibliographic references in usual name(s), date, title, (publisher, place) or (journal title, volume, pages) format would enhance the question.)

Field

This advice is intended helpfully, but is at best vastly oversimplified. Field's advice seems to be intended generally; for example, the reference to Levene's test implies some temporary focus on analysis of variance. For example, suppose I have one predictor that on various grounds should be logged and another, indicator, variable that is $(1,0)$. The latter (a) cannot be logged, (b) should not be logged. (Indeed any transformation of an indicator variable to any two distinct values has no important effect.) More generally, it is common -- in many fields the usual situation -- that some predictors should be transformed and the rest left as is.

It's true that encountering in a paper or dissertation a mixture of transformations applied differently to different predictors (including, as a special case, identity transformation, or leaving as is) is often a matter of concern for a reader. Is the mix a well thought out set of choices, or was it arbitrary and capricious? Furthermore, in a series of studies consistency of approach (always applying logarithms to a response, or never doing it) aids enormously in comparing results, and a differing approach makes comparison more difficult. But that's not to say there could never be reasons for a mix of transformations.

I don't see that most of the section you cite has much bearing on the key advice you highlight in yellow. This in itself is a matter of concern: it's a strange business to announce an absolute rule and then not really explain it. Conversely, the injunction "Remember" suggests that Field's grounds were supplied earlier in the book.

Anonymous paper

The context here is regression models. As often, talking of OLS strangely emphasises estimation method rather than model, but we can understand what is intended. GWR I construe as geographically weighted regression. The argument here is that you should transform non-normal predictors and leave the others as is. Again, this raises a question about what you can and should do with indicator variables, which cannot be normally distributed (which, as above, can be answered by pointing out that non-normality in that case is not a problem). But the injunction has it backwards in implying that it's non-normality of predictors that is the problem. Not so; it's no part of regression modelling to assume anything about marginal distributions of the predictors.

In practice, if you make predictors more nearly normal, then you will often be applying transformations that make the functional form $X\beta$ more nearly right for the data, which I would assert to be the major reason for transformation, despite the enormous emphasis on error structure in many texts. In other words, logging predictors to get them closer to normality can be doing the right thing for the wrong reason if you get closer to linearity in the transformed space.

There is so much extraordinarily good advice on transformations in this forum that I have focused on discussing what you cite.

P.S. You add a statement starting "For instance, in a comparison of means, comparing logs to raw data would obviously yield a significant difference." I am not clear what you have in mind, but comparing values for one group with logarithms of values for another group would just be nonsensical. I don't understand the rest of your statement at all.
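As an illustrative sketch of the linearity point above (a toy example of my own, not taken from what you cite): when the truth is linear in $\log x$, logging a skewed predictor improves the functional form, which is the real payoff, with near-normality of the predictor only a side effect.

    set.seed(7)
    x <- rlnorm(200, 0, 1)                   # strongly skewed predictor
    y <- 2 + 3 * log(x) + rnorm(200)         # truth is linear in log(x)

    fit_raw <- lm(y ~ x)
    fit_log <- lm(y ~ log(x))
    c(raw = summary(fit_raw)$r.squared,
      log = summary(fit_log)$r.squared)      # logged fit captures the form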
17,063
Transforming Data: All variables or just the non-normal ones?
First of all, both quotes are misleading insofar as any transformation applied to data intended for use in a regression model is not done to make the variable PDFs more normally distributed; it's done to make the model residuals more symmetric, since one assumption in classic regression is that the errors are Gaussian. This implies a deeper level of rigor and stringency than merely symmetrizing a PDF. Moreover, both quotes are weak in that neither one delves into the motivations for their prescriptions (at least based on the information provided). As it happens, I disagree with both.

In the passage you've highlighted, the SPSS book claims that mixtures of transformations (e.g., natural log for one variable, square root for another) are not permitted. Why is this illegal? Mixtures of transformations violate no regression assumptions that I'm aware of. Please check any regression text on regression assumptions to confirm that this is the case. Transformation mixtures might present a substantive descriptive problem in terms of their interpretation, but that's not a question of whether or not mixtures are illegal. The SPSS guy is wrong.

As far as the second text goes, again, transformations are totally a matter of analyst choice -- whether one does them at all, transforms all inputs, or transforms some variables and not others. None of this violates any assumptions. Where I think the second quote goes off the rails is in the assertion that, "...to avoid the potential multicollinearity...only one land use indicator (was used)..." This is blatantly bad advice and sounds like the kind of thing some analysts will do as a dimension reduction technique, where they will factor analyze a bunch of variables and pick the highest-loading variable on each factor. This heuristic has been around for years and is not one I either use or recommend. Again, this is a matter of analyst preference and training. But this point is not targeted to answering your specific questions.

At the end of the day, both quotes come off as assertions of the authors' opinions in the absence of any supporting evidence, based on the information provided.
17,064
Meaning of "design" in design matrix?
To give an example in line with @neverKnowsBest's response, consider that in a $2^3$ factorial experiment there are 3 factors, each treated as a categorical variable with 2 levels, and each possible combination of the factor levels is tested within each replication. If the experiment were only administered once (no replication) this design would require $2^3=8$ runs. The runs can be described by the following $8\times 3$ matrix:
$$ \left[\begin{array}{rrr} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \\ \end{array} \right] $$
where the rows represent the runs and the columns represent the levels of the factors:
$$ \left[\begin{array}{rrr} A & B & C \\ \end{array} \right]. $$
(The first column represents the level of factor A, the second column B, and the third column C.) This is referred to as the Design Matrix because it describes the design of the experiment. The first run is collected at the 'low' level of all of the factors, the second run is collected at the 'high' level of factor A and the 'low' levels of factors B and C, and so on.

This is contrasted with the model matrix, which, if you were evaluating main effects and all possible interactions for the experiment discussed in this post, would look like:
$$ \left[\begin{array}{rrrrrrrr} 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \end{array} \right] $$
where the columns represent independent variables:
$$ \left[\begin{array}{rrrrrrrr} I & A & B & C & AB & AC & BC & ABC \\ \end{array} \right]. $$
Although the two matrices are related, the design matrix describes how data is collected, while the model matrix is used in analyzing the results of the experiment.

Citations

Montgomery, D. (2009). Design and Analysis of Experiments, 7th Edition. John Wiley & Sons Inc.
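A quick way to build both matrices in R is sketched below; one assumption worth flagging is that model.matrix forms interaction columns as plain products of the 0/1 columns, so those columns differ from the $\pm 1$-style coding displayed above, even though the two representations carry the same information.

    # design matrix: all 2^3 factor-level combinations, one run each
    design <- expand.grid(A = 0:1, B = 0:1, C = 0:1)
    design

    # model matrix: intercept, main effects, and all interactions
    X <- model.matrix(~ A * B * C, data = design)
    X   # columns: (Intercept) A B C A:B A:C B:C A:B:C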
17,065
Meaning of "design" in design matrix?
In designed experiments we often fuss about the design matrix $\mathbf{X}$, containing the levels of the factors at which we perform the experiment, and the model matrix (also written as $\mathbf{X}$, but really a function of the design matrix), containing things like a column of all 1's (representing the intercept term) and products and powers of the columns of the design matrix (representing things like interaction and polynomial model terms). I'd call $\mathbf{X}$ in $\mathbf{y} = \mathbf{X}\boldsymbol{\beta}$ the model matrix.

Design of experiments focuses on how to construct the design matrix and model matrix, since it happens before data is collected. If the data is already collected then the design is set in stone, but you can still change the model matrix. Sometimes a designed experiment will have in the design matrix certain fixed columns, called covariates, that you can't control but can observe.

There are some things that can happen depending on your choice of model and design... certain parameters can become hard to estimate (larger variances of the estimator) or you may not be able to estimate certain parameters at all. I'd say deciding on an appropriate model has some elements of art to it, and there's certainly an art to designing experiments.
17,066
Meaning of "design" in design matrix?
$X$ is just your data (minus the response variable). I believe it's referred to as the design matrix because it defines the "design" of your model (via training).

Can X be designed or constructed arbitrarily to some degree as in art? Basically this question boils down to "can you build a model trained on manufactured data", to which the answer is obviously yes. For example, here's one way to construct an arbitrary design matrix (design vector, really) that will give a model with a predefined slope and intercept:

    design_mat = function(b, a){
      X = runif(100)
      Y = a*X + b
      data.frame(X, Y)
    }

    df = design_mat(-5, 12.3)
    (lm(Y ~ X, data = df))

    Call:
    lm(formula = Y ~ X, data = df)

    Coefficients:
    (Intercept)            X
           -5.0         12.3

In my example I "constructed" the response from random design data for illustrative purposes, but you could just as easily have constructed the design matrix from a random response using $X = \frac{Y-b}{a}$.
17,067
Meaning of "design" in design matrix?
It is called a design matrix because the columns of the matrix $X$ are based on the design of the model. I don't believe $X$ can be created arbitrarily in the sense that as soon as the model has been decided upon so has the design matrix (basically one column in $X$ for every $\beta$ you are trying to estimate). However, since model building can be considered an art, I suppose then so can building the design matrix.
17,068
What is the difference between logit-transformed linear regression, logistic regression, and a logistic mixed model?
Models 1 and 2 are different because the first transforms the response & the 2nd transforms its expected value.

For Model 1 the logit of each response is Normally distributed $$\newcommand{\logit}{\operatorname{logit}}\logit Y_i\sim\mathrm{N}\left(\mu_i,\sigma^2\right)$$ with its mean being a linear function of the predictor & coefficient vectors, $$\mu_i=x_i'\beta$$ & therefore $$ Y_i=\logit^{-1}\left(x_i'\beta+\varepsilon_i\right)$$

For Model 2 the response itself is Normally distributed $$ Y_i\sim\mathrm{N}\left(\mu_i,\sigma^2\right)$$ with the logit of its mean being a linear function of the predictor & coefficient vectors, $$\logit\mu_i=x_i'\beta$$ & therefore $$ Y_i=\logit^{-1}\left(x_i'\beta\right)+\varepsilon_i$$

So the variance structure will be different. Imagine simulating from Model 2: the variance will be independent of the expected value; & though the expected values of the responses will be between 0 & 1, the responses will not all be.

Generalized linear mixed models like your Model 4 are different again because they contain random effects: see here & here.
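A small simulation sketch of Model 2 (my own illustration, with made-up coefficients) shows both features at once: the noise has constant variance, and some responses escape $[0,1]$ even though their expected values cannot.

    set.seed(101)
    n <- 500
    x <- rnorm(n)
    mu <- plogis(0.2 + 1.0 * x)    # inverse logit of the linear predictor
    y <- mu + rnorm(n, 0, 0.1)     # Model 2: additive Gaussian noise

    range(mu)                      # expected values lie strictly in (0, 1)
    mean(y < 0 | y > 1)            # but a few responses fall outside [0, 1]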
17,069
What is the difference between logit-transformed linear regression, logistic regression, and a logistic mixed model?
+1 to @Scortchi, who has provided a very clear and concise answer. I want to make a couple of complementary points.

First, for your second model, you are specifying that your response distribution is Gaussian (a.k.a. normal). This must be false, because each answer is scored as correct or incorrect. That is, each answer is a Bernoulli trial. Thus, your response distribution is a Binomial. This idea is accurately reflected in your code as well.

Next, the probability that governs the response distribution is normally distributed, so the link ought to be probit, not logit.

Lastly, if this were a real situation, you would need to account for random effects for both subjects and questions, as they are extremely unlikely to be identical. The way you generated these data, the only relevant aspect of each person is their IQ, which you have accounted for explicitly. Thus, there is nothing left over that needs to be accounted for by a random effect in the model. This is also true for the questions, because random variations in question difficulty are not part of the data generating process in your code.

I don't mean to be nitpicking here. I recognize that your setup is simply designed to facilitate your question, and it has served that purpose; @Scortchi was able to address your questions very directly, with minimal fuss. However, I point these things out because they offer additional opportunities to understand the situation you are grappling with, and because you may not have realized that your code matches some parts of your storyline but not others.
17,070
R square in mixed model with random effects
The R package MuMIn also now has a function for calculating Nakagawa and Schielzeth's r-squared for mixed models. That is the function r.squaredGLMM(), and you simply feed it an lmer object (from package lme4) to obtain the values. MuMIn has excellent documentation, so you should be able to learn any details there.
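For example, a minimal sketch using the sleepstudy data that ships with lme4 (the model itself is just an illustration):

    library(lme4)
    library(MuMIn)

    fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

    # returns the marginal R2 (fixed effects only) and
    # the conditional R2 (fixed plus random effects)
    r.squaredGLMM(fit)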
17,071
R square in mixed model with random effects
The following paper was just published and may give you the answer to your question: Nakagawa, S. and Schielzeth, H. (2012). A general and simple method for obtaining $R^2$ from generalized linear mixed-effects models. Methods in Ecology and Evolution, in press. DOI: 10.1111/j.2041-210x.2012.00261.x
17,072
How can I calculate a critical t value using R?
Welcome to the Stats Stackexchange. You can compute the $t_{\alpha/2, n-p}$ critical value in R by doing qt(1 - alpha/2, n - p). In the following example, I ask R to give me the $95\%$ critical value for $df=1, 2, \dots, 10$. The result is a list of the first ten critical values for the t-distribution at the given confidence level:

    > qt(.975, 1:10)
    [1] 12.706205  4.302653  3.182446  2.776445  2.570582  2.446912  2.364624  2.306004  2.262157  2.228139
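As a concrete sketch (the sample size and parameter count here are assumed for illustration), for a regression with $n = 50$ observations and $p = 3$ estimated parameters:

    alpha <- 0.05; n <- 50; p <- 3
    tcrit <- qt(1 - alpha/2, df = n - p)
    tcrit                    # two-sided 5% critical value with 47 df
    pt(tcrit, df = n - p)    # sanity check: recovers 1 - alpha/2 = 0.975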
17,073
What is the difference/relationship between method of moments and GMM?
Both MOM and GMM are very general methods for estimating parameters of statistical models. GMM is - as the name suggests - a generalisation of MOM. It was developed by Lars Peter Hansen and first published in Econometrica [1]. As there are numerous textbooks on the subject (e.g. [2]), I presume you want a non-technical answer here.

Traditional or Classical Method of Moments Estimator

The MOM estimator is a consistent but often inefficient estimator. Assume a vector of data $y$ which was generated by a probability distribution indexed by a parameter vector $\theta$ with $k$ elements. In the method of moments, $\theta$ is estimated by computing $k$ sample moments of $y$, setting them equal to the population moments derived from the assumed probability distribution, and solving for $\theta$. For example, the first population moment is the expectation of $y$, whereas the corresponding sample moment is the sample mean of $y$. You would repeat this for each of the $k$ elements of $\theta$. As sample moments are generally consistent estimators of population moments, $\hat\theta$ will be consistent for $\theta$.

Generalised Method of Moments

In the example above, we had the same number of moment conditions as unknown parameters, so all we would have done is solve the $k$ equations in $k$ unknowns to obtain the parameter estimates. Hansen asked: what happens when we have more moment conditions than parameters, as usually occurs in econometric models? How can we combine them optimally? That is the purpose of the GMM estimator. In GMM we estimate the parameter vector by minimising a quadratic form in the differences between the sample moments and the population moments, using the inverse of the variance of the moments as the weighting matrix. This is the minimum variance estimator in the class of estimators that use these moment conditions.

[1] Hansen, L. P. (1982). Large Sample Properties of Generalized Method of Moments Estimators. Econometrica, 50, 1029-1054.
[2] Hall, A. R. (2005). Generalized Method of Moments (Advanced Texts in Econometrics). Oxford University Press.
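As a concrete sketch of the classical MOM (the Gamma distribution here is just an illustrative choice): for a Gamma with shape $k$ and scale $s$, the mean is $ks$ and the variance is $ks^2$, so equating these to the sample mean $m$ and sample variance $v$ and solving gives $\hat s = v/m$ and $\hat k = m^2/v$.

# Method of moments for a Gamma(shape, scale) on simulated data with known truth
set.seed(1)
y <- rgamma(10000, shape = 2, scale = 3)
m <- mean(y)
v <- var(y)          # sample second central moment
scale.hat <- v / m   # solves mean = k*s, variance = k*s^2
shape.hat <- m^2 / v
c(shape.hat, scale.hat)  # should be close to (2, 3)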
17,074
How do I calculate the standard deviation of the log-odds?
Essentially, the Delta Method is a way of "linearizing" a non-linear function using a Taylor series expansion so that you can find the variance and hence the standard error. For example, let's say you have a function $Y = f(X)$ that has first and second order derivatives. Then a first order Taylor series expansion centered around $\mu$ is given by: \begin{eqnarray*} \newcommand{\Var}{{\rm Var}} Y=f(X) \approx f(\mu)+f^{\prime}\left(\mu\right)(X-\mu) \end{eqnarray*} And a second order approximation is given by: \begin{eqnarray*} f(X) \approx f(\mu) + f^\prime({\mu})(X-\mu)+ {1 \over{2}}f^{\prime\prime}(\mu)(X-\mu)^2 \end{eqnarray*} So, assuming $E(X)=\mu$ and $\Var(X)=\sigma^2$, to find the approximate expected value of the nonlinear function $Y$ we have: \begin{eqnarray*} E(Y)\approx E[f(X)] & = & E[f(\mu)]+E\big[f^{\prime}\left(\mu\right)(X-\mu)\big]+\frac{1}{2}E\big[f^{\prime\prime}\left(\mu\right)(X-\mu)^{2}\big]\\ & = & f(\mu)+f^{\prime}\left(\mu\right)(\mu-\mu)+\frac{1}{2}f^{\prime\prime}\left(\mu\right)E[(X-\mu)^{2}]\\ & = & f(\mu)+\frac{1}{2}f^{\prime\prime}\left(\mu\right)\sigma^{2} \end{eqnarray*} Its corresponding variance can be estimated by substituting the first order polynomial (and using the first order approximation $E[f(X)] \approx f(\mu)$): \begin{eqnarray*} \Var(Y)=\Var\left[f(X)\right]=E\left\{ [f(X)-E(f(X))]^{2}\right\} & \approx & E\left[\big(f(\mu)+f^{\prime}(\mu)(X-\mu)-f(\mu)\big)^{2}\right] \\ & = & \left[f^{\prime}(\mu)\right]^{2}E\left[(X-\mu)^{2}\right]\\ & = & \left[f^{\prime}(\mu)\right]^{2}\Var(X) \end{eqnarray*} So, in the case of the log odds ratio $\log(\hat{OR})$, let $Y = \log(\hat{OR})$. Then, because the two groups of sizes $n_1$ and $n_2$ are independent, we have: \begin{eqnarray*} \Var\left[\log(\hat{OR})\right] & = & \Var\left[\log\left(\frac{\frac{\hat{p}_1}{1-\hat{p}_1}}{\frac{\hat{p}_2}{1-\hat{p}_2}}\right)\right]\\[5pt] & = & \Var\left[\log\left(\frac{\hat{p}_1}{1-\hat{p}_1}\right)\right]+ \Var\left[\log\left(\frac{\hat{p}_2}{1-\hat{p}_2}\right)\right]\\[5pt] & = & \left(\frac{1}{\hat{p}_{1}\left(1-\hat{p}_{1}\right)}\right)^{2}\frac{\hat{p}_{1}(1-\hat{p}_{1})}{n_{1}}+\left(\frac{1}{\hat{p}_{2}\left(1-\hat{p}_{2}\right)}\right)^{2}\frac{\hat{p}_{2}(1-\hat{p}_{2})}{n_{2}}\\[5pt] & = & \frac{1}{n_{1}\hat{p}_{1}(1-\hat{p}_{1})}+\frac{1}{n_{2}\hat{p}_{2}(1-\hat{p}_{2})}\\[5pt] & = & \frac{1}{n_{1}\hat{p}_{1}}+\frac{1}{n_{1}(1-\hat{p}_{1})}+\frac{1}{n_{2}\hat{p}_{2}}+\frac{1}{n_{2}(1-\hat{p}_{2})}\\[5pt] & = & \frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d} \end{eqnarray*} To obtain the standard error, we simply take the square root, and you obtain the result from your notes: \begin{eqnarray*} SE[\log(\hat{OR})] = \sqrt{\Var\left[\log(\hat{OR})\right]}= \sqrt{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}} \end{eqnarray*}
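A quick numerical check of the final formula on a hypothetical 2x2 table (the counts below are made up; n.a, n.b, n.c, n.d play the roles of a, b, c, d):

n.a <- 20; n.b <- 80; n.c <- 10; n.d <- 90
log.or <- log((n.a/n.b) / (n.c/n.d))
se.log.or <- sqrt(1/n.a + 1/n.b + 1/n.c + 1/n.d)
# Approximate 95% Wald interval for the log odds ratio
log.or + c(-1, 1) * qnorm(0.975) * se.log.or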
17,075
How do I calculate the standard deviation of the log-odds?
I discovered a slightly easier way of coming to the same conclusion: \begin{align}\text{OR} &= \frac{ad}{bc}\\ \log(\text{OR}) &= \log(a) + \log(d) - \log(b) - \log(c) \\ & =\log(a) - \log(b) - \log(c) + \log(d)\end{align} We treat the four counts as independent Poisson, so $$\operatorname{var}(a) = a, ~\operatorname{var}(b) = b, ~\operatorname{var}(c) = c, ~\operatorname{var}(d) = d. $$ By the Delta Method: $$ \operatorname{var} (f(X)) = (f^\prime(X))^2 \cdot \operatorname{var}(X)$$ $$\operatorname{var}(\log(a)) = (1/a)^2 \cdot \operatorname{var}(a) = 1/a$$ And by independence $$\operatorname{var}(\log(\text{OR})) = 1/a + 1/b + 1/c + 1/d.$$ QED
17,076
Why is OLS estimator of AR(1) coefficient biased?
As essentially discussed in the comments, unbiasedness is a finite sample property, and if it held it would be expressed as $$E (\hat \beta ) = \beta$$ (where the expected value is the first moment of the finite-sample distribution) while consistency is an asymptotic property expressed as $$\text{plim} \hat \beta = \beta$$ The OP shows that even though OLS in this context is biased, it is still consistent. $$E (\hat \beta ) \neq \beta\;\;\; \text{but}\;\;\; \text{plim} \hat \beta = \beta$$ No contradiction here.
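A small simulation sketch of this point (the sample sizes and the value beta = 0.9 are arbitrary choices): the OLS estimate is biased downward in short samples, but the bias vanishes as $T$ grows, illustrating bias together with consistency.

set.seed(123)
ols.ar1 <- function(T, beta = 0.9) {
  y <- numeric(T)
  for (t in 2:T) y[t] <- beta * y[t - 1] + rnorm(1)
  coef(lm(y[-1] ~ y[-T] - 1))  # regress y_t on y_{t-1}, no intercept
}
mean(replicate(2000, ols.ar1(25)))   # noticeably below 0.9
mean(replicate(2000, ols.ar1(500)))  # much closer to 0.9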
17,077
Why is OLS estimator of AR(1) coefficient biased?
@Alecos nicely explains why a correct plim and unbiasedness are not the same. As for the underlying reason why the estimator is not unbiased, recall that unbiasedness of an estimator requires that all error terms are mean independent of all regressor values, $E(\epsilon|X)=0$. In the present case, the regressor matrix consists of the values $y_1,\ldots,y_{T-1}$, so that - see mpiktas' comment - the condition translates into $E(\epsilon_s|y_1,\ldots,y_{T-1})=0$ for all $s=2,\ldots,T$. Here, we have \begin{equation*} y_{t}=\beta y_{t-1}+\epsilon _{t}. \end{equation*} Even under the assumption $E(\epsilon_{t}y_{t-1})=0$, we have that \begin{equation*} E(\epsilon_ty_{t})=E(\epsilon_t(\beta y_{t-1}+\epsilon _{t}))=E(\epsilon _{t}^{2})\neq 0. \end{equation*} But $y_t$ is also a regressor for future values in an AR model, as $y_{t+1}=\beta y_{t}+\epsilon_{t+1}$, so the mean independence condition fails.
17,078
Why is OLS estimator of AR(1) coefficient biased?
Expanding on the two good answers. Write down the OLS estimator: $$\hat\beta =\beta + \frac{\sum_{t=2}^Ty_{t-1}\varepsilon_t}{\sum_{t=2}^Ty_{t-1}^2}$$ For unbiasedness we need $$E\left[\frac{\sum_{t=2}^Ty_{t-1}\varepsilon_t}{\sum_{t=2}^Ty_{t-1}^2}\right]=0,$$ but for that we need $E(\varepsilon_t|y_{1},...,y_{T-1})=0$ for each $t$. For the AR(1) model this clearly fails, since $\varepsilon_t$ is related to the future values $y_{t},y_{t+1},...,y_{T}$.
17,079
Why does a confidence interval including 0 mean the difference is not significant?
If the confidence interval (with your chosen level of confidence) includes $0$, that implies you think $0$ is a reasonable possibility for the true value of the difference. In general, by 'significant' people usually mean that they no longer believe the null hypothesis ($0$) is a reasonable possibility. Note that if a $95\%$ CI doesn't include $0$, the $p$-value would be $<.05$, which is the conventional cutoff for 'significance'.
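A quick illustration with simulated data (the effect size 0.2 and n = 30 are arbitrary choices): the $95\%$ interval from t.test contains $0$ exactly when the two-sided $p$-value exceeds $.05$.

set.seed(42)
x <- rnorm(30, mean = 0.2)
tt <- t.test(x)
tt$conf.int  # does the interval include 0?
tt$p.value   # is the p-value above .05?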
17,080
Why does a confidence interval including 0 mean the difference is not significant?
"Having zero in one's confidence interval implies that a treatment effect has no effect." : This is often how such confidence intervals are interpreted, but this is a mistake. A confidence interval that contains zero is not certainty that there is no treatment effect, but that it is uncertain whether there is a treatment effect. Having zero in one's confidence interval implies that a treatment effect could have a positive or negative effect on the outcome of interest. (DataCamp).
17,081
Why does a confidence interval including 0 mean the difference is not significant?
In the simple scenarios you're probably considering, it's a logical equivalence: the point defining the null hypothesis is inside a confidence interval with confidence level $\gamma=1-\alpha$ if and only if the observed value of the test statistic is outside the critical region of a test with size $\alpha$. Consider the case of the usual $Z$ test. You have a random sample $X_1,\dots,X_n$ from a normal distribution with unknown mean $\mu$ and known variance $\sigma_0^2$. Suppose you want to perform a two tailed hypothesis test with null hypothesis $H_0:\mu=\mu_0$ and alternative $H_A:\mu\ne\mu_0$. The test statistic is $Z=(\bar{X}-\mu_0)/(\sigma_0/\sqrt{n})$ and, under the null hypothesis $H_0$, $Z$ has a standard normal distribution. The critical region is $$ \mathscr{C}_\alpha = ( -\infty, -z_{\alpha/2}] \;\cup\; [z_{\alpha/2} ,\infty). $$ On the other hand, a confidence interval for $\mu$, with confidence level $\gamma=1-\alpha$, is given by $$ \left( \bar{x} - z_{\alpha/2} \frac{\sigma_0}{\sqrt{n}} , \bar{x} + z_{\alpha/2} \frac{\sigma_0}{\sqrt{n}} \right). $$ It follows by simple algebra that $$ \mu_0\in \left( \bar{x} - z_{\alpha/2} \frac{\sigma_0}{\sqrt{n}} , \bar{x} + z_{\alpha/2} \frac{\sigma_0}{\sqrt{n}} \right) \;\;\Leftrightarrow\;\; z_{\text{obs}}=(\bar{x}-\mu_0)/(\sigma_0/\sqrt{n})\notin\mathscr{C}_\alpha. $$ The connection with the usual definition of a $p$-value for this problem is also immediate.
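A numerical check of this equivalence, under hypothetical values ($\mu_0 = 0$, $\sigma_0 = 1$, $n = 25$, $\alpha = 0.05$):

set.seed(7)
mu0 <- 0; sigma0 <- 1; n <- 25; alpha <- 0.05
x <- rnorm(n, mean = 0.3, sd = sigma0)
z.obs <- (mean(x) - mu0) / (sigma0 / sqrt(n))
ci <- mean(x) + c(-1, 1) * qnorm(1 - alpha/2) * sigma0 / sqrt(n)
inside.ci <- ci[1] < mu0 & mu0 < ci[2]
inside.crit <- abs(z.obs) >= qnorm(1 - alpha/2)
inside.ci == !inside.crit  # TRUE: mu0 is in the CI iff z.obs is outside the critical region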
17,082
Why does a confidence interval including 0 mean the difference is not significant?
This has to be explained in the context of interpreting the significance of parameters when building, say, a linear regression. The null hypothesis in linear regression says that a predictor's coefficient is 0; the alternative hypothesis says the coefficient is non-zero. The confidence interval tells you the range within which the actual coefficient value can lie. If that interval includes 0, the actual coefficient value could be zero, which means the predictor may have no relationship with the response variable, i.e. it is insignificant in terms of its influence on the response variable. Hope that explains.
17,083
Using the caret package is it possible to obtain confusion matrices for specific threshold values?
Most classification models in R produce both a class prediction and the probabilities for each class. For binary data, in almost every case, the class prediction is based on a 50% probability cutoff. glm is the same. With caret, predict(object, newdata) gives you the predicted class and predict(object, newdata, type = "prob") will give you class-specific probabilities (when object is generated by train). You can do things differently by defining your own model and applying whatever cutoff you want. The caret website also has an example that uses resampling to optimize the probability cutoff.

tl;dr confusionMatrix uses the predicted classes and thus a 50% probability cutoff.

Max
17,084
Using the caret package is it possible to obtain confusion matrices for specific threshold values?
There is a pretty easy way, assuming tune <- train(...):

probsTest <- predict(tune, test, type = "prob")
threshold <- 0.5
pred <- factor(ifelse(probsTest[, "yes"] > threshold, "yes", "no"))
pred <- relevel(pred, "yes")   # you may or may not need this; I did
confusionMatrix(pred, test$response)

Obviously, you can set threshold to whatever you want to try or pick the "best" one, where best means highest combined specificity and sensitivity:

library(pROC)
probsTrain <- predict(tune, train, type = "prob")
rocCurve <- roc(response = train$response,
                predictor = probsTrain[, "yes"],
                levels = rev(levels(train$response)))
plot(rocCurve, print.thres = "best")

After looking at the example Max posted, I'm not sure if there are some statistical nuances making my approach less desired.
17,085
MCMC methods - burning samples?
Burn-in is intended to give the Markov chain time to reach its equilibrium distribution, particularly if it has started from a lousy starting point. To "burn in" a chain, you just discard the first $n$ samples before you start collecting points. The idea is that a "bad" starting point may over-sample regions that are actually very low probability under the equilibrium distribution before it settles into the equilibrium distribution. If you throw those points away, then the points which should be unlikely will be suitably rare. This page gives a nice example, but it also points out that burn-in is more of a hack/art form than a principled technique. In theory, you could just sample for a really long time or find some way to choose a decent starting point instead. Edit: Mixing time refers to how long it takes the chain to approach its steady state, but it's often difficult to calculate directly. If you knew the mixing time, you'd just discard that many samples, but in many cases you don't. Thus, you choose a burn-in time that is hopefully large enough instead. As far as stability goes - it depends. If your chain has converged, then... it's converged. However, there are also situations where the chain appears to have converged but actually is just "hanging out" in one part of the state space. For example, imagine that there are several modes, but each mode is poorly connected to the others. It might take a very long time for the sampler to make it across that gap, and it will look like the chain converged right until it makes that jump. There are diagnostics for convergence, but many of them have a hard time telling true convergence and pseudo-convergence apart. Charles Geyer's chapter (#1) in the Handbook of Markov Chain Monte Carlo is pretty pessimistic about everything but running the chain for as long as you can.
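A minimal random-walk Metropolis sketch in R (the target, proposal scale, chain length, and burn-in choice below are all arbitrary illustrations), deliberately started far from the mode so the burn-in discard matters:

set.seed(1)
n.iter <- 10000
burn.in <- 1000               # a hopefully-large-enough choice, as discussed above
chain <- numeric(n.iter)
chain[1] <- 50                # lousy starting point for a N(0, 1) target
for (i in 2:n.iter) {
  prop <- chain[i - 1] + rnorm(1)  # random-walk proposal
  log.acc <- dnorm(prop, log = TRUE) - dnorm(chain[i - 1], log = TRUE)
  if (log(runif(1)) < log.acc) chain[i] <- prop else chain[i] <- chain[i - 1]
}
kept <- chain[-(1:burn.in)]   # discard the burn-in samples
c(mean(kept), sd(kept))       # should be near (0, 1)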
17,086
MCMC methods - burning samples?
The Metropolis-Hastings algorithm randomly samples from the posterior distribution. Typically, the initial samples are not valid draws because the Markov chain has not yet stabilized to its stationary distribution. The burn-in period allows you to discard these initial samples, which are not yet draws from the stationary distribution.
17,087
Mathematics behind classification and regression trees
CART and decision-tree-like algorithms work through recursive partitioning of the training set in order to obtain subsets that are as pure as possible with respect to a given target class. Each node of the tree is associated with a particular set of records $T$ that is split by a specific test on a feature. For example, a split on a continuous attribute $A$ can be induced by the test $A \le x$. The set of records $T$ is then partitioned into two subsets that lead to the left branch of the tree and the right one: $T_l = \{ t \in T: t(A) \le x \}$ and $T_r = \{ t \in T: t(A) > x \}$. Similarly, a categorical feature $B$ can be used to induce splits according to its values. For example, if $B = \{b_1, \dots, b_k\}$, each branch $i$ can be induced by the test $B = b_i$. The divide step of the recursive algorithm that induces the decision tree takes into account all possible splits for each feature and tries to find the best one according to a chosen quality measure: the splitting criterion. If your dataset is described by the scheme $$A_1, \dots, A_m, C$$ where the $A_j$ are attributes and $C$ is the target class, all candidate splits are generated and evaluated by the splitting criterion. Splits on continuous and categorical attributes are generated as described above. The selection of the best split is usually carried out by impurity measures: the impurity of the parent node has to be decreased by the split. Let $(E_1, E_2, \dots, E_k)$ be a split induced on the set of records $E$; a splitting criterion that makes use of the impurity measure $I(\cdot)$ is: $$\Delta = I(E) - \sum_{i=1}^{k}\frac{|E_i|}{|E|}I(E_i)$$ Standard impurity measures are the Shannon entropy and the Gini index. More specifically, CART uses the Gini index, which is defined for the set $E$ as follows. Let $p_j$ be the fraction of records in $E$ of class $c_j$, $$p_j = \frac{|\{t \in E:t[C] = c_j\}|}{|E|} $$ then $$ \mathit{Gini}(E) = 1 - \sum_{j=1}^{Q}p_j^2$$ where $Q$ is the number of classes. It leads to 0 impurity when all records belong to the same class. As an example, let's say that we have a binary-class set of records $T$ where the class distribution is $(1/2, 1/2)$. The following is a good split for $T$: the class distribution of records in $T_l$ is $(1,0)$ and that in $T_r$ is $(0,1)$. Let's say that $T_l$ and $T_r$ are the same size, so $|T_l|/|T| = |T_r|/|T| = 1/2$. We can see that $\Delta$ is high: $$\Delta = 1 - 1/2^2 - 1/2^2 - 0 - 0 = 1/2$$ The following split is worse than the first one, and the splitting criterion $\Delta$ reflects this: $$\Delta = 1 - 1/2^2 - 1/2^2 - 1/2 \bigg( 1 - (3/4)^2 - (1/4)^2 \bigg) - 1/2 \bigg( 1 - (1/4)^2 - (3/4)^2 \bigg) = 1/2 - 1/2(3/8) - 1/2(3/8) = 1/8$$ The first split will be selected as the best split, and then the algorithm proceeds in a recursive fashion. It is easy to classify a new instance with a decision tree: it is enough to follow the path from the root node to a leaf, and a record is classified with the majority class of the leaf that it reaches. Say that we want to classify the square in the figure, which is the graphical representation of a training set described by the scheme $A,B,C$, where $C$ is the target class and $A$ and $B$ are two continuous features. For a possible induced decision tree, the square record will be classified as a circle, given that the record falls in a leaf labeled with circles. 
In this toy example the accuracy on the training set is 100% because no record is misclassified by the tree. On the graphical representation of the training set we can see the boundaries (gray dashed lines) that the tree uses to classify new instances. There is plenty of literature on decision trees; I just wanted to write down a sketchy introduction. Another famous implementation is C4.5.
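A tiny check of the Gini arithmetic above (the helper names gini and delta are ad hoc): gini() takes a vector of class proportions, and delta() takes the parent's proportions plus the children's weights and proportions.

gini <- function(p) 1 - sum(p^2)
delta <- function(parent, weights, children)
  gini(parent) - sum(weights * sapply(children, gini))

parent <- c(1/2, 1/2)
# First split: two pure, equally sized children
delta(parent, c(1/2, 1/2), list(c(1, 0), c(0, 1)))          # 0.5
# Second split: children with distributions (3/4, 1/4) and (1/4, 3/4)
delta(parent, c(1/2, 1/2), list(c(3/4, 1/4), c(1/4, 3/4)))  # 0.125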
17,088
Mathematics behind classification and regression trees
I am not an expert on CARTs but you can try the book "Elements of Statistical Learning" which is freely available online (see chapter 9 for CARTs). I believe the book was written by one of the creators of the CART algorithm (Friedman).
17,089
LME() error - iteration limit reached
I haven't heard of the error argument to lme and I don't see it in the documentation. Are you sure that isn't a typo? But, to answer the question you asked:

Try ?lmeControl. Setting the maxIter, msMaxIter, niterEM, and/or msMaxEval arguments to higher values than the defaults may fix this. Capture the output from lmeControl in an object and then pass that object to the control argument of lme.

Or... the new default optimizer lme uses is flaky. Half the time these sorts of problems get solved for me when I change it back to the old optimizer. You do this by setting the opt argument of lmeControl to 'optim'. So, putting it together:

ctrl <- lmeControl(opt='optim')
flow.lme <- lme(rate ~ nozzle, error= nozzle|operator, control=ctrl, data=Flow)
17,090
LME() error - iteration limit reached
First, this is an ANOVA model, not a mixed model. Second, it seems to me that your model is not identified. In equation form, you have $$ \mbox{response}_{ij} = \beta_1 \mbox{nozzle type}_{1ij} + \beta_2 \mbox{nozzle type}_{2ij} + \beta_3 \mbox{nozzle type}_{3ij} + \mbox{operator}_i + \mbox{nozzle within operator}_{ij} $$ where nozzle types are fixed effects (dummy variables), operator is a random effect, and nozzle within operator is a random effect, too. The last term has 15 separate values for the 15 observations that you have. There are no degrees of freedom left to estimate any other terms in the model. Including interactions was poor advice: you'd have to drop them altogether. Even including them as crossed effects won't help, as they will then be perfectly collinear with the fixed effects and won't be estimable. A maximum likelihood or REML model with 15 observations does not make sense; the asymptotic results of maximum likelihood theory simply won't work: this is a Ferrari you are trying to drive on a plowed field.
17,091
When does LASSO select correlated predictors?
The collinearity problem is way overrated!

Thomas, you articulated a common viewpoint: that if predictors are correlated, even the best variable selection technique just picks one at random out of the bunch. Fortunately, that's way underselling regression's ability to uncover the truth! If you've got the right type of explanatory variables (exogenous), multiple regression promises to find the effect of each variable holding the others constant. Now if variables are perfectly correlated, then this is literally impossible. If the variables are correlated, it may be harder, but with the size of the typical data set today, it's not that much harder. Collinearity is a low-information problem. Have a look at this parody of collinearity by Art Goldberger on Dave Giles's blog. The way we talk about collinearity would sound silly if applied to a mean instead of a partial regression coefficient.

Still not convinced? It's time for some code.

set.seed(34234)
N <- 1000
x1 <- rnorm(N)
x2 <- 2*x1 + .7 * rnorm(N)
cor(x1, x2)  # correlation is .94
plot(x2 ~ x1)

I've created highly correlated variables x1 and x2, but you can see in the plot that when x1 is near -1, we still see variability in x2. Now it's time to add the "truth":

y <- .5 * x1 - .7 * x2 + rnorm(N)  # Data Generating Process

Can ordinary regression succeed amidst the mighty collinearity problem?

summary(lm(y ~ x1 + x2))

Oh yes it can:

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.0005334  0.0312637  -0.017    0.986    
x1           0.6376689  0.0927472   6.875 1.09e-11 ***
x2          -0.7530805  0.0444443 -16.944  < 2e-16 ***

Now I didn't talk about LASSO, which your question focused on. But let me ask you this: if old-school regression with backward elimination doesn't get fooled by collinearity, why would you think state-of-the-art LASSO would?
17,092
When does LASSO select correlated predictors?
Ben's answer inspired me to go one step further along the path he provided: what happens if the "truth", y, is in other situations? In the original example, y depends on the two highly correlated variables x1 and x2. Assume there is another variable, x3, say

x3 = c(1:N)/250    # N is defined before, N = 1000; x3 is on a similar scale to x1

(the scale of x3 affects the linear regression results below). The "truth" y is now defined as follows:

y = .5 * x1 - .7 * x3 + rnorm(N)    # data generating process

What happens to the regression?

summary(lm(y ~ x1 + x2))

There is a strong collinearity effect and the standard error of x2 is inflated; nevertheless, the linear regression correctly identifies x2 as a non-significant variable.

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -1.39164    0.04172 -33.354  < 2e-16 ***
x1           0.65329    0.12550   5.205 2.35e-07 ***
x2          -0.07878    0.05848  -1.347    0.178    

vif(lm(y ~ x1 + x2))
      x1       x2 
9.167429 9.167429 

What about another regression case?

summary(lm(y ~ x1 + x2 + x3))

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.02100    0.06573   0.319    0.749    
x1           0.55398    0.09880   5.607 2.67e-08 ***
x2          -0.02966    0.04604  -0.644    0.520    
x3          -0.70562    0.02845 -24.805  < 2e-16 ***

The variable x2 is not significant and is recommended for removal by the linear regression.

vif(lm(y ~ x1 + x2 + x3))
      x1       x2       x3 
9.067865 9.067884 1.000105 

From the above results, collinearity is not a problem in this linear regression, and checking the VIF is not very helpful.

Let's look at another situation:

x3 = c(1:N)    # N = 1000; x3 is NOT on the same scale as x1

The "truth" y is defined the same as above:

y = .5 * x1 - .7 * x3 + rnorm(N)    # data generating process

What happens to the regression?

summary(lm(y ~ x1 + x2))

There is a strong collinearity effect: the standard errors of x1 and x2 are badly inflated, and the linear regression fails to identify the important variable x1.

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -350.347      6.395 -54.783   <2e-16 ***
x1            25.207     19.237   1.310    0.190    
x2           -12.212      8.963  -1.362    0.173    

vif(lm(y ~ x1 + x2))
      x1       x2 
9.167429 9.167429 

What about another regression case?

summary(lm(y ~ x1 + x2 + x3))

Coefficients:
              Estimate Std. Error   t value Pr(>|t|)    
(Intercept)  0.0360104  0.0610405     0.590    0.555    
x1           0.5742955  0.0917555     6.259 5.75e-10 ***
x2          -0.0277623  0.0427585    -0.649    0.516    
x3          -0.7000676  0.0001057 -6625.170  < 2e-16 ***

The variable x2 is not significant and is recommended for removal by the linear regression.

vif(lm(y ~ x1 + x2 + x3))
      x1       x2       x3 
9.182507 9.184419 1.001853 

For comparison, note the regression of y on x1 and x3 alone; the standard error of x1 is only 0.03.

summary(lm(y ~ x1 + x3))

Coefficients:
              Estimate Std. Error   t value Pr(>|t|)    
(Intercept) -0.1595528  0.0647908    -2.463    0.014 *  
x1           0.4871557  0.0321623    15.147   <2e-16 ***
x3          -0.6997853  0.0001121 -6240.617   <2e-16 ***

Based on the above results, my conclusions are:
- When the predictor variables are on similar scales, collinearity is not a problem for linear regression.
- When the predictor variables are not on similar scales:
  - when the two highly correlated variables are both in the true model, collinearity is not a problem;
  - when only one of the two highly correlated variables is in the true model: if the other "true" variables are included in the linear regression, the regression will correctly identify the non-significant variable that is correlated with the significant one; if the other "true" variables are not included, the collinearity problem is severe, resulting in standard error inflation.
17,093
How to specify the null hypothesis in hypothesis testing
A rule of thumb from a good advisor of mine was to set the null hypothesis to the outcome you do not want to be true, i.e. the outcome whose direct opposite you want to show. Basic example: suppose you have developed a new medical treatment and you want to show that it is indeed better than placebo. So you set the null hypothesis $H_0 :=$ the new treatment is equal to or worse than placebo, and the alternative hypothesis $H_1 :=$ the new treatment is better than placebo. This is because, in the course of a statistical test, you either reject the null hypothesis (and favor the alternative hypothesis) or you fail to reject it. Since your "goal" is to reject the null hypothesis, you set it to the outcome you do not want to be true. Side note: I am aware that one should not set up a statistical test and then twist and break it until the null hypothesis is rejected; the casual language is only used to make this rule easier to remember. These may also be helpful: What is the meaning of p values and t values in statistical tests? and/or What is a good introduction to statistical hypothesis testing for computer scientists?
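As a minimal sketch of this rule (hypothetical data; the point is only that the direction of the alternative encodes the claim you want to show), a one-sided two-sample t-test in R:

set.seed(1)
placebo   <- rnorm(30, mean = 0)
treatment <- rnorm(30, mean = 0.8)
# H0: treatment mean <= placebo mean;  H1: treatment mean > placebo mean
t.test(treatment, placebo, alternative = "greater")

A small p-value lets you reject $H_0$ and thereby support the claim you actually care about.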
17,094
How to specify the null hypothesis in hypothesis testing
If hypothesis B is the interesting hypothesis, you can take not-B as the null hypothesis and control, under the null, the probability of a type I error, i.e. of wrongly rejecting not-B, at level $\alpha$. Rejecting not-B is then interpreted as evidence in favor of B: because we control the type I error, it is unlikely that not-B is true. Confused ... ? Take the example of treatment vs. no treatment in two groups from a population. The interesting hypothesis is that the treatment has an effect, that is, there is a difference between the treated group and the untreated group due to the treatment. The null hypothesis is that there is no difference, and we control the probability of wrongly rejecting this hypothesis. Thus we control the probability of wrongly concluding that there is a treatment effect when there is none. The type II error is the probability of wrongly accepting the null when there is a treatment effect.

The formulation above is based on the Neyman-Pearson framework for statistical testing, where testing is seen as a decision problem between two cases, the null and the alternative. The level $\alpha$ is the fraction of times we make a type I error if we (independently) repeat the test. In this framework there is not really any formal distinction between the null and the alternative: if we interchange them, we interchange the probabilities of type I and type II errors. We did not, however, control the type II error probability above (it depends on how big the treatment effect is), and because of this asymmetry we may prefer to say that we fail to reject the null hypothesis (rather than that we accept it). Thus we should be careful about concluding that the null hypothesis is true just because we cannot reject it.

In a Fisherian significance testing framework there is really only a null hypothesis, and one computes, under the null, a $p$-value for the observed data. Smaller $p$-values are interpreted as stronger evidence against the null. Here the null hypothesis is definitely not-B (no effect of treatment), and the $p$-value is interpreted as the amount of evidence against the null. With a small $p$-value we can confidently reject the null of no treatment effect and conclude that there is a treatment effect. In this framework we can only reject or not reject (never accept) the null, and it is all about falsifying the null. Note that the $p$-value does not need to be justified by an (imaginary) repeated number of decisions.

Neither framework is without problems, and the terminology is often mixed up. I can recommend the book Statistical Evidence: A Likelihood Paradigm by Richard M. Royall for a clear treatment of the different concepts.
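A small simulation sketch (hypothetical normal data) of the Neyman-Pearson reading of the level: when the null of no treatment effect is true, the long-run fraction of wrong rejections over independent repetitions approaches $\alpha$:

set.seed(2)
alpha <- 0.05
rejections <- replicate(10000, {
  treated   <- rnorm(20)   # no true difference between the groups
  untreated <- rnorm(20)
  t.test(treated, untreated)$p.value < alpha
})
mean(rejections)   # empirical type I error rate, close to 0.05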
17,095
How to specify the null hypothesis in hypothesis testing
The "frequentist" response is to invent a null hypothesis of the form "not B" and then argue against "not B", as in Steffen's response. This is the logical equivalent of making the argument "You are wrong, therefore I must be right". This is the kind of reasoning politician's use (i.e. the other party is bad, therefore we are good). It is quite difficult to deal with more than 1 alternative under this sort of reasoning. This is because that "you are wrong, therefore I am right" argument only makes sense when it is not possible for both to be wrong, which can certainly happen when there is more than one alternative hypothesis. The "Bayesian" response is to simply calculate the probability of the hypothesis that you are interested in testing, conditional on whatever evidence you have. Always this contains prior information, which is simply the assumptions you have made to made your problem well posed (all statistical procedures rely on prior information, Bayesian ones just make them more explicit). It also usually consists of some data, and we have by bayes theorem $$P(H_{0}|DI)=\frac{P(H_{0}|I)P(D|H_{0}I)}{\sum_{k}P(H_{k}|I)P(D|H_{k}I)}$$ This form is independent of what is called the "null" and what is called the "alternative", because you have to calculate exactly the same quantities for every hypothesis that you are going to consider - the prior and the likelihood. This is in a sense, analogous to calculate the "type 1" and "type 2" error rates in Neyman Pearson hypothesis testing, simply because a "type 2" error rate when $H_0$ is the "null" is the same thing as the "type 1" error rate with $H_0$ is the "alternative". It is only the connotations implied by the words "null" and "alternative" which make them seem different. You can show equivalence in the case of the "Neyman Pearson Lemma" when there are two hypothesis, for this is simply the likelihood ratio, which is given at once by taking the odds of the above bayes theorem: $$\frac{P(H_{0}|DI)}{P(H_{1}|DI)}=\frac{P(H_{0}|I)}{P(H_{1}|I)}\times\frac{P(D|H_{0}I)}{P(D|H_{1}I)}=\frac{P(H_{0}|I)}{P(H_{1}|I)}\times\Lambda$$ So the decision problems are the same: accept $H_0$ when $\Lambda > \tilde{\Lambda}$ for some cut-off $\tilde{\Lambda}$, and accept $H_1$ otherwise. Thus, the procedures are basically different rationales for choosing the cut-off value, or decision boundary. "Bayesians" would say it should be the product of the prior odds times the loss ratio $\frac{L_2}{L_1}$ where $L_1$ is the "type 1 error loss" and $L_2$ is the "type 2 error loss". These are losses, not probabilities, which describe the relative severity of making each of the two errors. The frequentist criterion is to minimise the one of the average error rates, type 1 or 2, while keeping the other fixed. But because they lead to the same form of decision boundary, we can always find an equivalent bayesian prior*loss ratio for every frequentist minimised error rate. In short, if you are using the likelihood ratio to test your hypothesis, it does not matter what you call the null hypothesis. Switching the null to the alternative just changes the decision to $\Lambda^{-1}<\tilde{\Lambda}^{-1}$ which is mathematically the same thing (you will make the same decision - but based on inverse chi-square cut-off rather than chi-square for your p-value). 
Playing word games with "failing to reject the null" just doesn't apply to the hypothesis test, because it is a decision, so if there are only two options, then "failing to reject the null" means the same thing as "accepting the null".
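A tiny worked sketch of the odds form of Bayes' theorem above, with made-up numbers: the posterior odds are just the prior odds times the likelihood ratio $\Lambda$, however the hypotheses are labeled.

prior_odds <- 1      # P(H0|I) / P(H1|I): indifferent prior
Lambda     <- 0.2    # P(D|H0 I) / P(D|H1 I): the data favor H1 five-fold
post_odds  <- prior_odds * Lambda           # posterior odds of H0 vs H1
post_H0    <- post_odds / (1 + post_odds)   # convert odds back to a probability
post_H0    # about 0.17

Relabeling which hypothesis is the "null" simply inverts every quantity and leaves the decision unchanged.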
17,096
How to specify the null hypothesis in hypothesis testing
The null hypothesis should generally assume that differences in a response variable are due to error alone. For example, if you want to test the effect of some factor A on a response x, then the null would be: $H_0$ = there is no effect of A on the response x. Failing to reject this null hypothesis would be interpreted as meaning either 1) any differences in x are due to error alone and not to A, or 2) the data are inadequate to detect a difference even though one exists (see Type 2 error below). Rejecting this null hypothesis would be interpreted as meaning the alternative hypothesis, $H_a$ = there is an effect of A on the response x, is true. Type 1 and Type 2 errors are related to the use of the null hypothesis, but not really to its designation. A Type 1 error occurs when you reject $H_0$ even though it is true, that is, you incorrectly conclude an effect of A on x when none exists. A Type 2 error occurs when you fail to reject $H_0$ even though it is false, that is, you incorrectly conclude no effect of A on x even though one exists.
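A minimal simulation sketch (assumed normal responses; the effect size and sample size are arbitrary choices for illustration) estimating both error rates for a two-sample t-test:

set.seed(3)
reject_rate <- function(effect, n = 30, reps = 5000, alpha = 0.05) {
  mean(replicate(reps,
    t.test(rnorm(n, mean = effect), rnorm(n, mean = 0))$p.value < alpha))
}
reject_rate(effect = 0)         # Type 1 error rate: close to alpha
1 - reject_rate(effect = 0.5)   # Type 2 error rate at this effect size and n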
17,097
Mixed effects model: Compare random variance component across levels of a grouping variable
There's more than one way to test this hypothesis. For example, the procedure outlined by @amoeba should work. But it seems to me that the simplest, most expedient way to test it is using a good old likelihood ratio test comparing two nested models. The only potentially tricky part of this approach is in knowing how to set up the pair of models so that dropping out a single parameter will cleanly test the desired hypothesis of unequal variances. Below I explain how to do that.

Short answer

Switch to contrast (sum to zero) coding for your independent variable and then do a likelihood ratio test comparing your full model to a model that forces the correlation between random slopes and random intercepts to be 0:

library(lme4)

# switch to numeric (not factor) contrast codes
d$contrast <- 2*(d$condition == 'experimental') - 1

# reduced model without correlation parameter
mod1 <- lmer(sim_1 ~ contrast + (contrast || participant_id), data=d)

# full model with correlation parameter
mod2 <- lmer(sim_1 ~ contrast + (contrast | participant_id), data=d)

# likelihood ratio test
anova(mod1, mod2)

Visual explanation / intuition

For this answer to make sense, you need an intuitive understanding of what different values of the correlation parameter imply for the observed data. Consider the (randomly varying) subject-specific regression lines. Basically, the correlation parameter controls whether the participant regression lines "fan out to the right" (positive correlation) or "fan out to the left" (negative correlation) relative to the point $X=0$, where $X$ is your contrast-coded independent variable. Either of these implies unequal variance in participants' conditional mean responses. This is illustrated below:

[Figure: subject-specific regression lines under negative, zero, and positive slope-intercept correlation]

In this plot, we ignore the multiple observations that we have for each subject in each condition and instead just plot each subject's two random means, with a line connecting them, representing that subject's random slope. (This is made-up data from 10 hypothetical subjects, not the data posted in the OP.) In the column on the left, where there's a strong negative slope-intercept correlation, the regression lines fan out to the left relative to the point $X=0$. As you can see clearly in the figure, this leads to a greater variance in the subjects' random means in condition $X=-1$ than in condition $X=1$. The column on the right shows the reverse, mirror image of this pattern. In this case there is greater variance in the subjects' random means in condition $X=1$ than in condition $X=-1$. The column in the middle shows what happens when the random slopes and random intercepts are uncorrelated: the regression lines fan out to the left exactly as much as they fan out to the right, relative to the point $X=0$, which implies that the variances of the subjects' means in the two conditions are equal.

It's crucial here that we've used a sum-to-zero contrast coding scheme, not dummy codes (that is, not setting the groups at $X=0$ vs. $X=1$). It is only under the contrast coding scheme that we have this relationship wherein the variances are equal if and only if the slope-intercept correlation is 0. The figure below tries to build that intuition:

[Figure: the same dataset plotted under contrast coding (left) and dummy coding (right)]

What this figure shows is the exact same dataset in both columns, but with the independent variable coded two different ways. In the column on the left we use contrast codes -- this is exactly the situation from the first figure. In the column on the right we use dummy codes.
This alters the meaning of the intercepts -- now the intercepts represent the subjects' predicted responses in the control group. The bottom panel shows the consequence of this change, namely, that the slope-intercept correlation is no longer anywhere close to 0, even though the data are the same in a deep sense and the conditional variances are equal in both cases. If this still doesn't seem to make much sense, studying this previous answer of mine, where I talk more about this phenomenon, may help.

Proof

Let $y_{ijk}$ be the $j$th response of the $i$th subject under condition $k$. (We have only two conditions here, so $k$ is just either 1 or 2.) Then the mixed model can be written
$$
y_{ijk} = \alpha_i + \beta_ix_k + e_{ijk},
$$
where $\alpha_i$ are the subjects' random intercepts and have variance $\sigma^2_\alpha$, $\beta_i$ are the subjects' random slopes and have variance $\sigma^2_\beta$, $e_{ijk}$ is the observation-level error term, and $\text{cov}(\alpha_i, \beta_i)=\sigma_{\alpha\beta}$.

We wish to show that
$$
\text{var}(\alpha_i + \beta_ix_1) = \text{var}(\alpha_i + \beta_ix_2) \Leftrightarrow \sigma_{\alpha\beta}=0.
$$
Beginning with the left-hand side of this implication, we have
$$
\begin{aligned}
\text{var}(\alpha_i + \beta_ix_1) &= \text{var}(\alpha_i + \beta_ix_2) \\
\sigma^2_\alpha + x^2_1\sigma^2_\beta + 2x_1\sigma_{\alpha\beta} &= \sigma^2_\alpha + x^2_2\sigma^2_\beta + 2x_2\sigma_{\alpha\beta} \\
\sigma^2_\beta(x_1^2 - x_2^2) + 2\sigma_{\alpha\beta}(x_1 - x_2) &= 0.
\end{aligned}
$$
Sum-to-zero contrast codes imply that $x_1 + x_2 = 0$ (so $x_2 = -x_1$) and $x_1^2 = x_2^2 = x^2$. Then we can further reduce the last line of the above to
$$
\begin{aligned}
\sigma^2_\beta(x^2 - x^2) + 2\sigma_{\alpha\beta}(x_1 + x_1) &= 0 \\
\sigma_{\alpha\beta} &= 0,
\end{aligned}
$$
which is what we wanted to prove (since $x_1 \neq 0$). To establish the other direction of the implication, we can just follow these same steps in reverse.

To reiterate, this shows that if the independent variable is contrast (sum-to-zero) coded, then the variances of the subjects' random means in each condition are equal if and only if the correlation between random slopes and random intercepts is 0. The key take-away point from all this is that testing the null hypothesis that $\sigma_{\alpha\beta} = 0$ will test the null hypothesis of equal variances described by the OP.

This does NOT work if the independent variable is, say, dummy coded. Specifically, if we plug the values $x_1=0$ and $x_2=1$ into the equations above, we find that
$$
\text{var}(\alpha_i) = \text{var}(\alpha_i + \beta_i) \Leftrightarrow \sigma_{\alpha\beta} = -\frac{\sigma^2_\beta}{2}.
$$
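As a quick numerical check of the proof (made-up parameter values), the identity $\text{var}(\alpha_i + \beta_i x) = \sigma^2_\alpha + x^2\sigma^2_\beta + 2x\sigma_{\alpha\beta}$ yields equal condition variances under contrast codes exactly when the covariance is zero:

cond_var <- function(x, s2_a, s2_b, s_ab) s2_a + x^2 * s2_b + 2 * x * s_ab
# contrast codes x = -1, 1 with a nonzero slope-intercept covariance:
cond_var(-1, s2_a = 1, s2_b = .25, s_ab = .3)  # 0.65
cond_var( 1, s2_a = 1, s2_b = .25, s_ab = .3)  # 1.85: unequal variances
# same variances, zero covariance:
cond_var(-1, s2_a = 1, s2_b = .25, s_ab = 0)   # 1.25
cond_var( 1, s2_a = 1, s2_b = .25, s_ab = 0)   # 1.25: equal variances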
17,098
Mixed effects model: Compare random variance component across levels of a grouping variable
You can test the significance of model parameters with the help of estimated confidence intervals, for which the lme4 package has the confint.merMod function.

Bootstrapping (see for instance Confidence Interval from bootstrap):

> confint(m, method="boot", nsim=500, oldNames= FALSE)
Computing bootstrap confidence intervals ...
                                                           2.5 %     97.5 %
sd_(Intercept)|participant_id                         0.32764600 0.64763277
cor_conditionexperimental.(Intercept)|participant_id -1.00000000 1.00000000
sd_conditionexperimental|participant_id               0.02249989 0.46871800
sigma                                                 0.97933979 1.08314696
(Intercept)                                          -0.29669088 0.06169473
conditionexperimental                                 0.26539992 0.60940435

Likelihood profile (see for instance What is the relationship between profile likelihood and confidence intervals?):

> confint(m, method="profile", oldNames= FALSE)
Computing profile confidence intervals ...
                                                          2.5 %     97.5 %
sd_(Intercept)|participant_id                         0.3490878 0.66714551
cor_conditionexperimental.(Intercept)|participant_id -1.0000000 1.00000000
sd_conditionexperimental|participant_id               0.0000000 0.49076950
sigma                                                 0.9759407 1.08217870
(Intercept)                                          -0.2999380 0.07194055
conditionexperimental                                 0.2707319 0.60727448

There is also a method 'Wald', but it is applied to fixed effects only.

There also exists an anova (likelihood ratio) type of expression in the package lmerTest, named ranova, but I cannot seem to make sense of it. The distribution of the differences in log-likelihood, when the null hypothesis (zero variance for the random effect) is true, is not chi-square distributed (although when the number of participants and trials is high, the likelihood ratio test might make sense).

Variance in specific groups

To obtain results for the variance in specific groups you could reparameterize:

# different model with alternative parameterization (and also correlation taken out)
fml1 <- "~ condition + (0 + control + experimental || participant_id) "

where we add two columns to the data frame (this is only needed if you wish to evaluate uncorrelated 'control' and 'experimental'; the formula (0 + condition || participant_id) would not lead to the different factor levels of condition being evaluated as uncorrelated):

# adding extra columns for control and experimental
d <- cbind(d,as.numeric(d$condition=='control'))
d <- cbind(d,1-as.numeric(d$condition=='control'))
names(d)[c(4,5)] <- c("control","experimental")

Now lmer will give a variance for each of the groups:

> m <- lmer(paste("sim_1 ", fml1), data=d)
> m
Linear mixed model fit by REML ['lmerModLmerTest']
Formula: paste("sim_1 ", fml1)
   Data: d
REML criterion at convergence: 2408.186
Random effects:
 Groups           Name         Std.Dev.
 participant_id   control      0.4963
 participant_id.1 experimental 0.4554
 Residual                      1.0268
Number of obs: 800, groups:  participant_id, 40
Fixed Effects:
          (Intercept)  conditionexperimental
               -0.114                  0.439

And you can apply the profile methods to these. For instance, confint now gives confidence intervals for the control and experimental variances:

> confint(m, method="profile", oldNames= FALSE)
Computing profile confidence intervals ...
                                    2.5 %     97.5 %
sd_control|participant_id       0.3490873 0.66714568
sd_experimental|participant_id  0.3106425 0.61975534
sigma                           0.9759407 1.08217872
(Intercept)                    -0.2999382 0.07194076
conditionexperimental           0.1865125 0.69149396

Simplicity

You could use the likelihood function to get more advanced comparisons, but there are many ways to make approximations along the way (e.g. you could do a conservative anova/LRT test, but is that what you want?).

At this point it makes me wonder what is actually the point of this (not so common) comparison between variances. I wonder whether it starts to become too sophisticated. Why the difference between variances instead of the ratio between variances (which relates to the classical F-distribution)? Why not just report confidence intervals? We need to take a step back, and clarify the data and the story it is supposed to tell, before going into advanced pathways that may be superfluous and lose touch with the statistical matter and the statistical considerations that are actually the main topic.

I wonder whether one should do much more than simply state the confidence intervals (which may actually tell much more than a hypothesis test: a hypothesis test gives a yes/no answer but no information about the actual spread of the population, and given enough data any slight difference can be reported as a significant difference). Going more deeply into the matter (for whatever purpose) requires, I believe, a more specific (narrowly defined) research question in order to guide the mathematical machinery to make the proper simplifications (even when an exact calculation might be feasible, or when it could be approximated by simulations/bootstrapping, in some settings it still requires an appropriate interpretation). Compare with Fisher's exact test, which solves a (particular) question (about contingency tables) exactly, but which may not be the right question.

Simple example

To provide an example of the simplicity that is possible, I show below a comparison (by simulations) of a simple assessment of the difference between the two group variances based on an F-test, done once by comparing the variances in the individual mean responses and once by comparing the mixed-model-derived variances.

For the F-test we simply compare the variances of the values (means) of the individuals in the two groups. For condition $j$ those means are distributed as
$$\hat{Y}_{i,j} \sim N\left(\mu_j, \sigma_j^2 + \frac{\sigma_{\epsilon}^2}{10}\right)$$
If the measurement error variance $\sigma_\epsilon^2$ is equal for all individuals and conditions, and if the variances for the two conditions $\sigma_{j}$ (with $j \in \lbrace 1,2 \rbrace$) are equal, then the ratio of the variance of the 40 means in condition 1 to the variance of the 40 means in condition 2 follows the F-distribution with 39 and 39 degrees of freedom for numerator and denominator.

You can see this in the simulation in the graph below, where aside from the F-score based on the sample means, an F-score is also calculated based on the predicted variances (or sums of squared error) from the model.

[Figure: histograms of the means-based and mixed-regression-based F-scores against the F(39,39) density, for $\sigma_1 = \sigma_2 = 0.5$ and $\sigma_\epsilon = 1$]

The image is based on 10 000 repetitions using $\sigma_{j=1} = \sigma_{j=2} = 0.5$ and $\sigma_\epsilon=1$. You can see that there is some difference. This difference may be due to the fact that the mixed effects linear model obtains the sums of squared error (for the random effect) in a different way. These squared error terms are no longer well expressed as a simple chi-squared distribution, but they are still closely related and can be approximated.

Aside from the (small) difference when the null hypothesis is true, the more interesting case is when the null hypothesis is not true, especially the condition $\sigma_{j=1} \neq \sigma_{j=2}$. The distribution of the means $\hat{Y}_{i,j}$ depends not only on those $\sigma_j$ but also on the measurement error $\sigma_\epsilon$. In the case of the mixed effects model this latter error is 'filtered out', and it is expected that the F-score based on the random-effects-model variances has higher power.

[Figure: the same comparison for $\sigma_1 = 0.5$, $\sigma_2 = 0.25$ and $\sigma_\epsilon = 1$]

The image is based on 10 000 repetitions using $\sigma_{j=1} = 0.5$, $\sigma_{j=2} = 0.25$ and $\sigma_\epsilon=1$. So the model based on the means is very exact, but it is less powerful. This shows that the correct strategy depends on what you want/need. In the example above, when you set the right tail boundaries at 2.1 and 3.1 you get approximately 1% of the population in the case of equal variance (respectively 103 and 104 of the 10 000 cases), but in the case of unequal variance these boundaries differ a lot (giving 5334 and 6716 of the cases).

code:

library(lme4)
set.seed(23432)

# different model with alternative parameterization (and also correlation taken out)
fml1 <- "~ condition + (0 + control + experimental || participant_id) "
fml  <- "~ condition + (condition | participant_id)"

n <- 10000
theta_m <- matrix(rep(0,n*2),n)
theta_f <- matrix(rep(0,n*2),n)

# initial data frame, later changed into d by adding a sixth sim_1 column
ds <- expand.grid(participant_id=1:40, trial_num=1:10)
ds <- rbind(cbind(ds, condition="control"), cbind(ds, condition="experimental"))

# adding extra columns for control and experimental
ds <- cbind(ds,as.numeric(ds$condition=='control'))
ds <- cbind(ds,1-as.numeric(ds$condition=='control'))
names(ds)[c(4,5)] <- c("control","experimental")

# defining variances for the population of individual means
stdevs <- c(0.5,0.5)  # c(control,experimental)

pb <- txtProgressBar(title = "progress bar", min = 0, max = n, style=3)
for (i in 1:n) {
  indv_means <- c(rep(0,40)+rnorm(40,0,stdevs[1]),rep(0.5,40)+rnorm(40,0,stdevs[2]))
  # index with ds, which exists on the first iteration (d is only created below)
  fill <- indv_means[ds[,1]+ds[,5]*40]+rnorm(80*10,0,sqrt(1))

  # using a different way to make the data, because simulate() does not create
  # independent data in the two groups
  #fill <- suppressMessages(simulate(formula(fml),
  #                                  newparams=list(beta=c(0, .5),
  #                                                 theta=c(.5, 0, 0),
  #                                                 sigma=1),
  #                                  family=gaussian,
  #                                  newdata=ds))

  d <- cbind(ds, fill)
  names(d)[6] <- c("sim_1")

  m <- lmer(paste("sim_1 ", fml1), data=d)
  theta_m[i,] <- m@theta^2

  imeans <- aggregate(d[, 6], list(d[,c(1)],d[,c(3)]), mean)
  theta_f[i,1] <- var(imeans[c(1:40),3])
  theta_f[i,2] <- var(imeans[c(41:80),3])

  setTxtProgressBar(pb, i)
}
close(pb)

p1 <- hist(theta_f[,1]/theta_f[,2], breaks = seq(0,6,0.06))
fr <- theta_m[,1]/theta_m[,2]
fr <- fr[which(fr<30)]
p2 <- hist(fr, breaks = seq(0,30,0.06))

plot(-100,-100, xlim=c(0,6), ylim=c(0,800),
     xlab="F-score", ylab = "counts [n out of 10 000]")
plot( p1, col=rgb(0,0,1,1/4), xlim=c(0,6), ylim=c(0,800), add=T)  # means based F-score
plot( p2, col=rgb(1,0,0,1/4), xlim=c(0,6), ylim=c(0,800), add=T)  # model based F-score
fr <- seq(0, 4, 0.01)
lines(fr,df(fr,39,39)*n*0.06,col=1)
legend(2, 800, c("means based F-score","mixed regression based F-score"),
       fill=c(rgb(0,0,1,1/4),rgb(1,0,0,1/4)),box.col =NA, bg = NA)
legend(2, 760, c("F(39,39) distribution"), lty=c(1),box.col = NA,bg = NA)
title(expression(paste(sigma[1]==0.5, " , ", sigma[2]==0.5, " and ", sigma[epsilon]==1)))
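If all you want is the simple means-based F-test in isolation, it can be done in two lines on the d data frame from above (a compact sketch; note that, as argued, both compared variances include the measurement-error contribution $\sigma^2_\epsilon/10$):

# per-participant, per-condition means, then an F-test of equal variances
imeans <- aggregate(sim_1 ~ participant_id + condition, data = d, FUN = mean)
var.test(sim_1 ~ condition, data = imeans)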
Mixed effects model: Compare random variance component across levels of a grouping variable
You can test significance, of model parameters, with the help of estimated confidence intervals for which the lme4 package has the confint.merMod function. bootstrapping (see for instance Confidence
Mixed effects model: Compare random variance component across levels of a grouping variable You can test significance, of model parameters, with the help of estimated confidence intervals for which the lme4 package has the confint.merMod function. bootstrapping (see for instance Confidence Interval from bootstrap) > confint(m, method="boot", nsim=500, oldNames= FALSE) Computing bootstrap confidence intervals ... 2.5 % 97.5 % sd_(Intercept)|participant_id 0.32764600 0.64763277 cor_conditionexperimental.(Intercept)|participant_id -1.00000000 1.00000000 sd_conditionexperimental|participant_id 0.02249989 0.46871800 sigma 0.97933979 1.08314696 (Intercept) -0.29669088 0.06169473 conditionexperimental 0.26539992 0.60940435 likelihood profile (see for instance What is the relationship between profile likelihood and confidence intervals?) > confint(m, method="profile", oldNames= FALSE) Computing profile confidence intervals ... 2.5 % 97.5 % sd_(Intercept)|participant_id 0.3490878 0.66714551 cor_conditionexperimental.(Intercept)|participant_id -1.0000000 1.00000000 sd_conditionexperimental|participant_id 0.0000000 0.49076950 sigma 0.9759407 1.08217870 (Intercept) -0.2999380 0.07194055 conditionexperimental 0.2707319 0.60727448 There is also a method 'Wald' but this is applied to fixed effects only. There also exist some kind of anova (likelihood ratio) type of expression in the package lmerTest which is named ranova. But I can not seem to make sense out of this. The distribution of the differences in logLikelihood, when the null hypothesis (zero variance for the random effect) is true is not chi-square distributed (possibly when number of participants and trials is high the likelihood ratio test might make sense). Variance in specific groups To obtain results for variance in specific groups you could reparameterize # different model with alternative parameterization (and also correlation taken out) fml1 <- "~ condition + (0 + control + experimental || participant_id) " Where we added two columns to the data-frame (this is only needed if you wish to evaluate non-correlated 'control' and 'experimental' the function (0 + condition || participant_id) would not lead to the evaluation of the different factors in condition as non-correlated) #adding extra columns for control and experimental d <- cbind(d,as.numeric(d$condition=='control')) d <- cbind(d,1-as.numeric(d$condition=='control')) names(d)[c(4,5)] <- c("control","experimental") Now lmer will give variance for the different groups > m <- lmer(paste("sim_1 ", fml1), data=d) > m Linear mixed model fit by REML ['lmerModLmerTest'] Formula: paste("sim_1 ", fml1) Data: d REML criterion at convergence: 2408.186 Random effects: Groups Name Std.Dev. participant_id control 0.4963 participant_id.1 experimental 0.4554 Residual 1.0268 Number of obs: 800, groups: participant_id, 40 Fixed Effects: (Intercept) conditionexperimental -0.114 0.439 And you can apply the profile methods to these. For instance now confint gives confidence intervals for the control and exerimental variance. > confint(m, method="profile", oldNames= FALSE) Computing profile confidence intervals ... 2.5 % 97.5 % sd_control|participant_id 0.3490873 0.66714568 sd_experimental|participant_id 0.3106425 0.61975534 sigma 0.9759407 1.08217872 (Intercept) -0.2999382 0.07194076 conditionexperimental 0.1865125 0.69149396 Simplicity You could use the likelihood function to get more advanced comparisons, but there are many ways to make approximations along the road (e.g. 
you could do a conservative anova/lrt-test, but is that what you want?). At this point it makes me wonder what is actually the point of this (not so common) comparison between variances. I wonder whether it starts to become too sophisticated. Why the difference between variances instead of the ratio between variances (which relates to the classical F-distribution)? Why not just report confidence intervals? We need to take a step back, and clarify the data and the story it is supposed to tell, before going into advanced pathways that may be superfluous and loose touch with the statistical matter and the statistical considerations that are actually the main topic. I wonder whether one should do much more than simply stating the confidence intervals (which may actually tell much more than a hypothesis test. a hypothesis test gives a yes no answer but no information about the actual spread of the population. given enough data you can make any slight difference to be reported as a significant difference). To go more deeply into the matter (for whatever purpose), requires, I believe, a more specific (narrowly defined) research question in order to guide the mathematical machinery to make the proper simplifications (even when an exact calculation might be feasible or when it could be approximated by simulations/bootstrapping, even then in in some settings it still requires some appropriate interpretation). Compare with Fisher's exact test to solve a (particular) question (about contingency tables) exactly, but which may not be the right question. Simple example To provide an example of the simplicity that is possible I show below a comparison (by simulations) with a simple assessment of the difference between the two group variances based on an F-test done by comparing variances in the individual mean responses and done by comparing the mixed model derived variances. For the F-test we simply compare the variance of the values (means) of the individuals in the two groups. Those means are for condition $j$ distributed as: $$\hat{Y}_{i,j} \sim N(\mu_j, \sigma_j^2 + \frac{\sigma_{\epsilon}^2}{10})$$ if the measurement error variance $\sigma_\epsilon$ is equal for all individuals and conditions, and if the variance for the two conditions $\sigma_{j}$ (with $j = \lbrace 1,2 \rbrace$) is equal then the ratio for the variance for the 40 means in the condition 1 and the variance for the 40 means in the condition 2 is distributed according to the F-distribution with degrees of freedom 39 and 39 for numerator and denominator. You can see this in the simulation of the below graph where aside for the F-score based on sample means an F-score is calculated based on the predicted variances (or sums of squared error) from the model. The image is modeled with 10 000 repetitions using $\sigma_{j=1} = \sigma_{j=2} = 0.5$ and $\sigma_\epsilon=1$. You can see that there is some difference. This difference may be due to fact that the mixed effects linear model is obtaining the sums of squared error (for the random effect) in a different way. And these squared error terms are not (anymore) well expressed as a simple Chi-squared distribution, but still closely related and they can be approximated. Aside from the (small) difference when the null-hypothesis is true, more interesting is the case when the null hypothesis is not true. Especially the condition when $\sigma_{j=1} \neq \sigma_{j=2}$. 
The distribution of the means $\hat{Y}_{i,j}$ are not only dependent on those $\sigma_j$ but also on the measurement error $\sigma_\epsilon$. In the case of the mixed effects model this latter error is 'filtered out', and it is expected that the F-score based on the random effects model variances has a higher power. The image is modeled with 10 000 repetitions using $\sigma_{j=1} = 0.5$, $\sigma_{j=2} = 0.25$ and $\sigma_\epsilon=1$. So the model based on the means is very exact. But it is less powerful. This shows that the correct strategy depends on what you want/need. In the example above when you set the right tail boundaries at 2.1 and 3.1 you get approximately 1% of the population in the case of equal variance (resp 103 and 104 of the 10 000 cases) but in the case of unequal variance these boundaries differ a lot (giving 5334 and 6716 of the cases) code: set.seed(23432) # different model with alternative parameterization (and also correlation taken out) fml1 <- "~ condition + (0 + control + experimental || participant_id) " fml <- "~ condition + (condition | participant_id)" n <- 10000 theta_m <- matrix(rep(0,n*2),n) theta_f <- matrix(rep(0,n*2),n) # initial data frame later changed into d by adding a sixth sim_1 column ds <- expand.grid(participant_id=1:40, trial_num=1:10) ds <- rbind(cbind(ds, condition="control"), cbind(ds, condition="experimental")) #adding extra columns for control and experimental ds <- cbind(ds,as.numeric(ds$condition=='control')) ds <- cbind(ds,1-as.numeric(ds$condition=='control')) names(ds)[c(4,5)] <- c("control","experimental") # defining variances for the population of individual means stdevs <- c(0.5,0.5) # c(control,experimental) pb <- txtProgressBar(title = "progress bar", min = 0, max = n, style=3) for (i in 1:n) { indv_means <- c(rep(0,40)+rnorm(40,0,stdevs[1]),rep(0.5,40)+rnorm(40,0,stdevs[2])) fill <- indv_means[d[,1]+d[,5]*40]+rnorm(80*10,0,sqrt(1)) #using a different way to make the data because the simulate is not creating independent data in the two groups #fill <- suppressMessages(simulate(formula(fml), # newparams=list(beta=c(0, .5), # theta=c(.5, 0, 0), # sigma=1), # family=gaussian, # newdata=ds)) d <- cbind(ds, fill) names(d)[6] <- c("sim_1") m <- lmer(paste("sim_1 ", fml1), data=d) m theta_m[i,] <- m@theta^2 imeans <- aggregate(d[, 6], list(d[,c(1)],d[,c(3)]), mean) theta_f[i,1] <- var(imeans[c(1:40),3]) theta_f[i,2] <- var(imeans[c(41:80),3]) setTxtProgressBar(pb, i) } close(pb) p1 <- hist(theta_f[,1]/theta_f[,2], breaks = seq(0,6,0.06)) fr <- theta_m[,1]/theta_m[,2] fr <- fr[which(fr<30)] p2 <- hist(fr, breaks = seq(0,30,0.06)) plot(-100,-100, xlim=c(0,6), ylim=c(0,800), xlab="F-score", ylab = "counts [n out of 10 000]") plot( p1, col=rgb(0,0,1,1/4), xlim=c(0,6), ylim=c(0,800), add=T) # means based F-score plot( p2, col=rgb(1,0,0,1/4), xlim=c(0,6), ylim=c(0,800), add=T) # model based F-score fr <- seq(0, 4, 0.01) lines(fr,df(fr,39,39)*n*0.06,col=1) legend(2, 800, c("means based F-score","mixed regression based F-score"), fill=c(rgb(0,0,1,1/4),rgb(1,0,0,1/4)),box.col =NA, bg = NA) legend(2, 760, c("F(39,39) distribution"), lty=c(1),box.col = NA,bg = NA) title(expression(paste(sigma[1]==0.5, " , ", sigma[2]==0.5, " and ", sigma[epsilon]==1)))
17,099
Mixed effects model: Compare random variance component across levels of a grouping variable
One relatively straight-forward way could be to use likelihood-ratio tests via anova as described in the lme4 FAQ. We start with a full model in which the variances are unconstrained (i.e., two different variances are allowed) and then fit one constrained model in which the two variances are assumed to be equal. We simply compare them with anova() (note that I set REML = FALSE although REML = TRUE with anova(..., refit = FALSE) is completely feasible).

m_full <- lmer(sim_1 ~ condition + (condition | participant_id),
               data = d, REML = FALSE)
summary(m_full)$varcor
# Groups         Name                  Std.Dev. Corr
# participant_id (Intercept)           0.48741
#                conditionexperimental 0.26468  -0.419
# Residual                             1.02677

m_red <- lmer(sim_1 ~ condition + (1 | participant_id),
              data = d, REML = FALSE)
summary(m_red)$varcor
# Groups         Name        Std.Dev.
# participant_id (Intercept) 0.44734
# Residual                   1.03571

anova(m_full, m_red)
# Data: d
# Models:
# m_red: sim_1 ~ condition + (1 | participant_id)
# m_full: sim_1 ~ condition + (condition | participant_id)
#        Df    AIC    BIC  logLik deviance  Chisq Chi Df Pr(>Chisq)
# m_red   4 2396.6 2415.3 -1194.3   2388.6
# m_full  6 2398.7 2426.8 -1193.3   2386.7 1.9037      2      0.386

However, this test is likely conservative. For example, the FAQ says:

Keep in mind that LRT-based null hypothesis tests are conservative when the null value (such as $\sigma^2 = 0$) is on the boundary of the feasible space; in the simplest case (single random effect variance), the p-value is approximately twice as large as it should be (Pinheiro and Bates 2000).

There are several alternatives:

1. Create an appropriate test distribution, which usually consists of a mixture of $\chi^2$ distributions. See e.g., Self, S. G., & Liang, K.-Y. (1987). Asymptotic Properties of Maximum Likelihood Estimators and Likelihood Ratio Tests Under Nonstandard Conditions. Journal of the American Statistical Association, 82(398), 605. https://doi.org/10.2307/2289471 However, this is quite complicated.

2. Simulate the correct distribution using RLRsim (as also described in the FAQ).

I will demonstrate the second option in the following:

library("RLRsim")
## reparametrize model so we can get one parameter that we want to be zero:
afex::set_sum_contrasts() ## warning, changes contrasts globally
d <- cbind(d, difference = model.matrix(~condition, d)[, "condition1"])

m_full2 <- lmer(sim_1 ~ condition + (difference | participant_id),
                data = d, REML = FALSE)
all.equal(deviance(m_full), deviance(m_full2)) ## both full models are identical

## however, we need the full model without correlation!
m_full2b <- lmer(sim_1 ~ condition + (1 | participant_id) +
                   (0 + difference | participant_id),
                 data = d, REML = FALSE)
summary(m_full2b)$varcor
# Groups           Name        Std.Dev.
# participant_id   (Intercept) 0.44837
# participant_id.1 difference  0.13234
# Residual                     1.02677

## model that only has the random effect to be tested
m_red <- update(m_full2b, . ~ . - (1 | participant_id), data = d, REML = FALSE)
summary(m_red)$varcor
# Groups         Name       Std.Dev.
# participant_id difference 0.083262
# Residual                  1.125116

## null model
m_null <- update(m_full2b, . ~ . - (0 + difference | participant_id),
                 data = d, REML = FALSE)
summary(m_null)$varcor
# Groups         Name        Std.Dev.
# participant_id (Intercept) 0.44734
# Residual                   1.03571

exactRLRT(m_red, m_full2b, m_null)
# Using restricted likelihood evaluated at ML estimators.
# Refit with method="REML" for exact results.
#
# simulated finite sample distribution of RLRT.
# (p-value based on 10000 simulated values)
#
# data:
# RLRT = 1.9698, p-value = 0.0719

As we can see, the output suggests that with REML = TRUE we would have gotten exact results. But this is left as an exercise to the reader.

Regarding the bonus, I am not sure if RLRsim allows simultaneous testing of multiple components, but if so, this can be done in the same way.

Response to comment:

So it is true, then, that in general the random slope $\theta_X$ allows the random intercept $\theta_0$ to vary across levels of $X$?

I am not sure this question can receive a reasonable answer. A random intercept allows an idiosyncratic difference in the overall level for each level of the grouping factor. For example, if the dependent variable is response time, some participants are faster and some are slower. A random slope allows each level of the grouping factor an idiosyncratic effect of the factor for which random slopes are estimated. For example, if the factor is congruency, then some participants can have a larger congruency effect than others.

So do random slopes affect the random intercept? In some sense this might make sense, as they allow each level of the grouping factor a completely idiosyncratic effect for each condition. In the end, we estimate two idiosyncratic parameters for two conditions. However, I think the distinction between the overall level captured by the intercept and the condition-specific effect captured by the random slope is an important one, and then the random slope cannot really affect the random intercept. What it does allow is an idiosyncratic effect for each level of the grouping factor, separately for each level of the condition.

Nevertheless, my test still does what the original question wants. It tests whether the difference in variances between the two conditions is zero. If it is zero, then the variances in both conditions are equal. In other words, only if there is no need for a random slope is the variance in both conditions identical. I hope that makes sense.
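Coming back to the first alternative listed above, the mixture-of-$\chi^2$ correction can also be applied by hand. This is a hedged sketch for the 2-df comparison of m_full versus m_red (one variance on the boundary plus one correlation), using a 50:50 mixture of $\chi^2_1$ and $\chi^2_2$ as the reference distribution; whether this particular mixture is exactly right for the boundary-plus-correlation case is itself an assumption (Self & Liang discuss several variants), so treat it as an approximation rather than the exact answer:

chi_stat <- 1.9037  # LRT statistic from anova(m_full, m_red) above
p_mix <- 0.5 * pchisq(chi_stat, df = 1, lower.tail = FALSE) +
         0.5 * pchisq(chi_stat, df = 2, lower.tail = FALSE)
p_mix  # roughly 0.28, less conservative than the naive 2-df p-value of 0.386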
17,100
Mixed effects model: Compare random variance component across levels of a grouping variable
Your model

m <- lmer(sim_1 ~ condition + (condition | participant_id), data = d)

already allows the across-subject variance in the control condition to differ from the across-subject variance in the experimental condition. This can be made more explicit by an equivalent re-parametrization:

m <- lmer(sim_1 ~ 0 + condition + (0 + condition | participant_id), data = d)

The random covariance matrix now has a simpler interpretation:

Random effects:
 Groups         Name                  Variance Std.Dev. Corr
 participant_id conditioncontrol      0.2464   0.4963
                conditionexperimental 0.2074   0.4554   0.83

Here the two variances are precisely the two variances you are interested in: the [across-subjects] variance of conditional mean responses in the control condition and the same in the experimental condition. In your simulated dataset, they are 0.25 and 0.21. The difference is given by

delta <- as.data.frame(VarCorr(m))[1, 4] - as.data.frame(VarCorr(m))[2, 4]

and is equal to 0.039. You want to test whether it is significantly different from zero.

EDIT: I realized that the permutation test I describe below is incorrect; it won't work as intended if the means in the control/experimental conditions are not the same (because then the observations are not exchangeable under the null). It might be a better idea to bootstrap subjects (or subjects/items in the Bonus case) and obtain the confidence interval for delta. I will try to fix the code below to do that.

Original permutation-based suggestion (wrong)

I often find that one can save oneself a lot of trouble by doing a permutation test. Indeed, in this case it is very easy to set up. Let's permute control/experimental conditions for each subject separately; then any difference in variances should be eliminated. Repeating this many times will yield the null distribution for the differences. (I do not program in R; everybody please feel free to re-write the following in a better R style.)

set.seed(42)
nrep <- 100
v <- matrix(nrow = nrep, ncol = 1)

for (i in 1:nrep) {
  dp <- d
  for (s in unique(d$participant_id)) {
    if (rbinom(1, 1, .5) == 1) {
      # swap the condition labels for this subject
      dp[d$participant_id == s & d$condition == 'control', ]$condition <- 'experimental'
      dp[d$participant_id == s & d$condition == 'experimental', ]$condition <- 'control'
    }
  }
  m <- lmer(sim_1 ~ 0 + condition + (0 + condition | participant_id), data = dp)
  v[i, ] <- as.data.frame(VarCorr(m))[1, 4] - as.data.frame(VarCorr(m))[2, 4]
}

pvalue <- sum(abs(v) >= abs(delta)) / nrep

Running this yields the p-value $p = 0.7$. One can increase nrep to 1000 or so.

Exactly the same logic can be applied in your Bonus case.
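Since the EDIT above leaves the bootstrap unfinished, here is a hedged sketch of the subject-level (case) bootstrap it alludes to, for a confidence interval on delta. The variable names and number of replicates are illustrative, not the author's fixed code; resampled subjects are relabeled so that duplicates are treated as distinct grouping levels:

library(lme4)
set.seed(42)
nboot <- 1000
deltas <- numeric(nboot)
ids <- unique(d$participant_id)

for (b in 1:nboot) {
  samp <- sample(ids, replace = TRUE)  # resample whole subjects
  db <- do.call(rbind, lapply(seq_along(samp), function(k) {
    di <- d[d$participant_id == samp[k], ]
    di$participant_id <- k  # relabel so duplicated subjects stay distinct
    di
  }))
  mb <- lmer(sim_1 ~ 0 + condition + (0 + condition | participant_id), data = db)
  vc <- as.data.frame(VarCorr(mb))
  deltas[b] <- vc[1, 4] - vc[2, 4]
}

quantile(deltas, c(0.025, 0.975))  # percentile bootstrap CI for delta

If the resulting interval excludes zero, one would conclude the two variances differ; in the Bonus case the same resampling logic could be extended to subjects and items.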