Ridge regression: regularizing towards a value
I have an answer for "Why regularize towards a value? Does this change the interpretation of $\beta$?" Transfer learning is a type of machine learning in which knowledge gained while performing a task in a source domain is transferred to a target domain for the same task; that is, the task remains the same but the datasets in the two domains differ. One way to perform transfer learning is parameter sharing. The high-level intuition is that the target-domain model parameters should be very close to the source-domain parameters while still allowing for some uncertainty. Mathematically, this intuition is captured by penalizing the deviation between the parameters, i.e., $\lambda\|W_{target}-W_{source}\|^2_2$, where $\lambda$ is the penalization parameter and the $W$'s are vectors of model parameters. I have used this approach to perform transfer learning for conditional random fields; see Eq. 4 and the related text. I had a similar question for ridge regression posted here on the interpretability of the closed-form solution.
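As a sketch of the idea for plain least squares (not the CRF setup from the linked paper; the data and parameter values below are made up for illustration), the penalized problem $\|y-Xw\|^2+\lambda\|w-w_{source}\|^2$ has the closed-form solution $w=(X^\top X+\lambda I)^{-1}(X^\top y+\lambda w_{source})$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source-domain parameters and a small target-domain dataset.
w_src = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(20, 3))
y = X @ np.array([1.2, -1.8, 0.4]) + 0.1 * rng.normal(size=20)

def ridge_towards(X, y, w0, lam):
    """Minimize ||y - X w||^2 + lam * ||w - w0||^2 via the closed form."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)

# A small penalty mostly fits the target data; a huge penalty collapses
# the target parameters onto the source parameters.
w_small = ridge_towards(X, y, w_src, lam=0.1)
w_large = ridge_towards(X, y, w_src, lam=1e6)
print(w_small, w_large)
```

As $\lambda\to\infty$ the estimate is pinned to $w_{source}$; as $\lambda\to 0$ it reduces to ordinary least squares on the target data.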
Ridge regression: regularizing towards a value
It is possible to understand it from a Bayesian point of view: ridge regularization for linear regression is a Bayesian method in disguise. See https://en.wikipedia.org/wiki/Lasso_(statistics)#Bayesian_interpretation (it is easier to understand as explained on Wikipedia's Lasso page, but it's the same idea with ridge). The convention I use for regularization is the following. Minimize: $\left(\displaystyle\sum_{i=1}^N(y_i-\beta x_i)^2\right)+\lambda\|\beta-\beta_0\|^2$. Assume the noise has variance $\sigma^2=1$ for simplicity (otherwise replace $\lambda$ by $\lambda/\sigma^2$ everywhere). Regularization with coefficient $\lambda$ means assuming a normal prior $N(0;\frac{1}{\lambda}I)$: "I expect, as a prior belief, that the coefficients are small." The prior distribution is a normal distribution with mean $0$ and "radius" $\sqrt{\frac{1}{\lambda}}$. Regularizing towards $\beta_0$ means assuming a normal prior $N(\beta_0;\frac{1}{\lambda}I)$: "I expect, as a prior belief, that the coefficients are not far from $\beta_0$." The prior distribution is a normal distribution with mean $\beta_0$ and "radius" $\sqrt{\frac{1}{\lambda}}$. This prior often results from a previous training run that gave $\beta_0$ as an estimate, and the strength of your belief $\lambda$ is the statistical power of that first training set. A big $\lambda$ means you previously had a lot of information, so your belief changes only slightly with each new sample: a small update per sample.
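A small numerical sketch of this sequential-updating idea (all names and data here are illustrative): if the penalty carries the full information of the first training set -- using the matrix penalty $X_1^\top X_1$ in place of the scalar $\lambda I$ -- then regularizing the second fit towards $\beta_0$ exactly reproduces OLS on the pooled data:

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true = np.array([2.0, -1.0])

# Two batches from the same linear model: the "first training set" and new samples.
X1 = rng.normal(size=(50, 2)); y1 = X1 @ beta_true + rng.normal(size=50)
X2 = rng.normal(size=(30, 2)); y2 = X2 @ beta_true + rng.normal(size=30)

# Step 1: plain OLS on the first batch gives the prior mean beta_0.
beta0 = np.linalg.solve(X1.T @ X1, X1.T @ y1)

# Step 2: fit the second batch regularized towards beta_0, with the
# penalty matrix Lam = X1'X1 encoding the information in the first batch.
Lam = X1.T @ X1
beta_updated = np.linalg.solve(X2.T @ X2 + Lam, X2.T @ y2 + Lam @ beta0)

# This coincides with OLS on the pooled data: the prior really does carry
# the information of the first training set.
X = np.vstack([X1, X2]); y = np.concatenate([y1, y2])
beta_pooled = np.linalg.solve(X.T @ X, X.T @ y)
print(np.allclose(beta_updated, beta_pooled))  # True
```

With a scalar $\lambda I$ instead of $X_1^\top X_1$ the correspondence is only approximate, but the intuition is the same: a large $\lambda$ stands in for a large, informative first sample.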
Can cosine kernel be understood as a case of Beta distribution?
The cosine kernel is not a beta distribution. Note that the following things are all true of the standard cosine density: $f(0)=1$; $f(0.5)=0.5$; the right half of this density is rotationally symmetric about the point $(\frac12,\frac12)$ (i.e., together with the other two properties this implies $1-f(x)=f(1-x)$). But no beta density on $(-1,1)$ has all these properties together. The symmetric beta kernel density can be written as $g(x;a)= \frac{(1-x^2)^{a-1}}{\text{B}(a,a)2^{2a-1}}\,,\:-1<x<1\,,\:a>0$. For example, the first condition implies an $a$ of about $3.38175$ ($p=2.38175$), while the second implies an $a$ of $1$ ($p=0$). However, values of $a$ near $3.38175$ give densities really quite close to the cosine. [This is quite close to your $p=2.35$ (since $p=a-1$); a range of values in this region give densities similar to the cosine.] The smallest absolute deviation in density happens for $p\approx 2.3575$ -- though minimizing the absolute deviation need not make the properties above most alike. [Figure omitted: the cosine density overlaid with the beta density for $p=2.3575$.] Even though they're not the same, they're quite alike in shape.
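The closeness can be checked numerically (a sketch, taking the cosine density to be the raised cosine $f(x)=\frac12(1+\cos\pi x)$ on $(-1,1)$, which satisfies the three properties listed above):

```python
import math

def f_cos(x):
    """Raised-cosine density on (-1, 1): f(0) = 1, f(1/2) = 1/2."""
    return 0.5 * (1.0 + math.cos(math.pi * x))

def beta_sym(x, a):
    """Symmetric beta-family density g(x; a) on (-1, 1)."""
    B = math.gamma(a) ** 2 / math.gamma(2 * a)  # B(a, a)
    return (1.0 - x * x) ** (a - 1) / (B * 2 ** (2 * a - 1))

a = 2.3575 + 1  # p = a - 1, so this is p = 2.3575
grid = [i / 1000 for i in range(-999, 1000)]
max_dev = max(abs(f_cos(x) - beta_sym(x, a)) for x in grid)
print(max_dev)  # small: the two shapes nearly coincide
```

The maximum pointwise difference over the grid is tiny relative to the peak height of 1, which is why the two curves are hard to tell apart by eye.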
Why are mean and median not equal for asymmetric distributions?
There's a related question here: Does mean = median imply that a unimodal distribution is symmetric? which you should also read, but your title question of why asymmetry can make mean and median unequal should be addressed in some detail (which is why I don't treat this question as a duplicate of that one), and I am taking the opportunity to give a more intuitive, picture-based discussion of some of the issues raised here. The discussion of the "why" question comes at the end, after dealing with the mistaken premise in the question and the mistakes in the unnamed textbooks you refer to.

I'm going to flip your post about and deal with what the textbooks say first: "But textbooks say that the mean and median are equal only if the p.d.f. is symmetrical." Some textbooks do say things like this, but they're wrong. The mean and median of asymmetrical distributions can be equal. It would in some sense be nearly correct to say it the other way around -- i.e. "if the pdf is symmetrical the mean and median are equal" -- but that's not quite true in general either, since for some symmetrical distributions the population mean is undefined. There's actually one measure of skewness based on the difference between mean and median (sometimes called second Pearson skewness or median skewness), but having zero second Pearson skewness doesn't imply symmetry. Usually when a distribution is asymmetrical, mean and median are unequal, but we can find as many exceptions as we like. Let's look at one. In my answer to this question: Does mean=mode imply a symmetric distribution? I showed an example [figure omitted: a spiky, multimodal density] where the distribution is plainly not symmetric (there's a different number of modes on each side of the main peak, for one thing) but the mean and the median turn out to be exactly equal. It's very easy to construct discrete examples, but people tend to find continuous examples more interesting, I think.

"My reasoning is as follows: the p.d.f. is divided by the mean (expected value) into two parts, for which the areas under the p.d.f. curve are equal" -- No, the median divides the pdf into two equal areas. Means in general do not. "hence the probabilities that the random variable takes a value less than or equal to the mean are 0.5" -- Let's look at an example. Consider a standard exponential distribution (which is moderately skewed to the right). Its median -- the value that divides the area under the pdf into two equal parts -- occurs at $\ln 2\approx 0.69$, while the mean is at $1$ and has only $1/e \approx 37\%$ of the area to its right. [It's possible you meant that for a symmetric distribution (let's assume the mean is finite) the mean will be at the median. This is true, but it doesn't establish that mean = median implies symmetry, and as we saw there are counterexamples to that idea.] But let me return to your title question...

Why: Why are mean and median not equal for asymmetric distributions? Let's look at a comparison of sample mean and median (which converts directly to a comparison of population mean and median on a discrete distribution).

sample: 1 2 3 4 6 9 16 95
median = 5, mean = 17
proportion of observations > median = 1/2
proportion of observations > mean = 1/8

So how is it that the mean is higher than almost all of the data? The median just looks at how many observations are above or below, but the mean also looks at how far away they are. The further up or down a number is, the more it "pulls" the mean. As a result a very skewed distribution -- one with a heavy tail on one side but not the other -- will pull the mean away from the median toward the long tail, leaving a gap between them. That's why the mean in the exponential distribution above is relatively high, well above the 50% point. By taking a sequence of heavier and heavier right tails, you can in fact move a finite mean above any proportion of the distribution you like (as long as it's less than 100%).

So why isn't it always like this? If it's asymmetric, why isn't the mean always pulled away from the median -- why can some asymmetries leave the mean equal to the median? Imagine you have a little bump of probability some distance off to one side of the median. There are two components to how hard it "pulls": how far away it is, and how much of it there is (how much probability). Twice as far away pulls twice as hard, but so does twice as much probability. So if you place bumps of probability on both sides of the median, you can use the two components together to balance out (say, using a larger bump of probability a medium distance away on one side against two smaller bumps, one closer and one further away, on the other side), and so leave the mean at the median while the distribution of probability is not symmetric. In the case of the spiky distribution in my example, the bumps are about seven triangular-shaped bits of probability of different sizes, carefully placed to achieve all the different things I wanted it to show (some of the triangles overlap, though, which shows up as flat sections and parts with varying slopes and so on).
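The numbers in the sample comparison above can be reproduced in a few lines (using Python's statistics module; the `stretched` sample is my own addition to illustrate the "pull"):

```python
from statistics import mean, median

sample = [1, 2, 3, 4, 6, 9, 16, 95]
print(mean(sample), median(sample))  # 17 and 5

# Dragging the largest point further out pulls the mean but not the median.
stretched = [1, 2, 3, 4, 6, 9, 16, 950]
print(mean(stretched), median(stretched))

# Only 1 of the 8 observations exceeds the mean, but 4 exceed the median.
print(sum(x > 17 for x in sample), sum(x > 5 for x in sample))
```

The median depends only on the ordering, so moving one tail observation arbitrarily far leaves it untouched while the mean chases the tail.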
What's the difference between a randomized block design and two factor design?
In both cases you have two categorical variables and a numerical response variable, but in a randomised block design the second variable is a nuisance variable, while in the two-factor factorial design the second variable is also of interest and you would like to understand the interaction. I think this is the main difference. It's a bit confusing because ANOVA is really a family of methods, and two-way ANOVA can refer to two distinct but related models. I'll try to illustrate this with examples and a bit of maths. Suppose you were studying 3 different types of barley; this is your treatment variable $\alpha_i$. You want to determine their typical yield in terms of tons per hectare; this is your response variable $x_{ij}$. Yields will vary based on local conditions, so you pick 10 different fields and split each one into thirds, randomly assigning one barley variety to each third of a field. The fields are your blocks $\beta_j$. There are still a bunch of things you haven't controlled for, like rainfall, soil type, pests, sunlight hours, and the calibration of your scales; these are described by the measurement error $\epsilon_{ij}$. In this situation you have a randomised block design. The model describing the yields is given by $$ x_{ij} = \mu + \alpha_i + \beta_j + \epsilon_{ij} $$ where $i$ records the barley variety and $j$ records which field you are in. This is the additive model. Suppose again you are studying those 3 barley varieties ($\alpha_i$), but you are interested in the effects of soil salinity, which can be low, medium or high. Soil salinity is another treatment variable $\beta_j$. Your response is again $x_{ij}$. Because you want to understand how yield is affected by both the barley type and the soil salinity, for each barley type you grow a sample at each level of salinity. As you say, you can think of salinity level as being a block with respect to barley type and vice versa. This is a two-factor factorial design. The model describing the yields is given by $$ x_{ij} = \mu + \alpha_i + \beta_j + \alpha_i \beta_j + \epsilon_{ij} $$ where notice you now have a term describing the interaction effect, $\alpha_i \beta_j$. If you were to just use the randomised block design, the interaction term $\alpha_i \beta_j$ would be lumped in with the error term $\epsilon_{ij}$. This is the interaction model. This distinction between the additive and interaction models then carries over into the details of how you conduct the two-way ANOVA test.
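A quick numerical sketch of this decomposition (the cell means below are made up, constructed so that a genuine variety-by-salinity interaction is present):

```python
import numpy as np

# Hypothetical cell means: 3 barley varieties (rows) x 3 salinity levels (cols).
cell = np.array([[5.0, 4.0, 2.0],
                 [6.0, 5.5, 5.0],
                 [7.0, 5.0, 1.0]])

grand = cell.mean()
row_eff = cell.mean(axis=1) - grand   # variety effects alpha_i
col_eff = cell.mean(axis=0) - grand   # salinity effects beta_j

# Best additive fit: mu + alpha_i + beta_j for each cell.
additive = grand + row_eff[:, None] + col_eff[None, :]

# What's left over is the interaction term; in the purely additive
# (randomized-block) model it would be lumped into the error.
interaction = cell - additive
print(np.round(interaction, 3))
```

The interaction matrix sums to zero along every row and column by construction, so it is exactly the part of the cell means the additive model cannot represent.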
What's the difference between a randomized block design and two factor design?
I agree with MachineEpsilon's answer but will clarify two issues. First, there is a design difference between the models even if the two-way ANOVA is estimated in the same way. With the randomized-block design, randomization to conditions on the factor occurs within levels of the blocking variable. That is, the sample is stratified into the blocks and then randomized within each block to conditions of the factor. In a two-way factorial design, the sample is simply randomized into the cells of the factorial design. Second, there are situations where you might be interested in the interaction between the factor and the block in a block-randomized design. This would assess whether the effect of the factor (e.g., the treatment effect) differs across blocks (e.g., persons with different characteristics).
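A minimal sketch of the within-block randomization described above (field and treatment names are hypothetical):

```python
import random

random.seed(0)

treatments = ["A", "B", "C"]                # e.g. three barley varieties
blocks = [f"field_{i}" for i in range(10)]  # the blocking variable

# Randomized block design: randomization happens *within* each block,
# so every field receives each treatment exactly once (complete balance).
assignment = {}
for block in blocks:
    plots = treatments[:]
    random.shuffle(plots)          # random order of the plots in this field
    assignment[block] = plots

print(assignment["field_0"])
```

By contrast, a completely randomized factorial design would assign units to cells at random across the whole sample, with no guarantee of balance within any block.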
Econometrics text claims that convergence in distribution implies convergence in moments
A sufficient additional condition is uniform integrability, i.e., that $$\lim_{M\to\infty} \sup_n \int_{|X_n|>M}|X_n|\,dP= \lim_{M\to\infty} \sup_n E[|X_n|1_{|X_n|>M}]=0.$$ Then one gets that $X$ is integrable and $\lim_{n\to\infty}E[X_n]=E[X]$. Heuristically, this condition rules out "extreme" contributions to the integral (expectation) persisting asymptotically. Now, this is precisely what happens in your counterexample: $z_n$ may take the diverging value $n$, albeit with vanishing probability. Somewhat more precisely, $E[|z_n|1_{\{|z_n|>M\}}]=E[z_n1_{\{z_n>M\}}]=1$ for all $n>M$. Hence $\sup_n E[z_n1_{\{z_n>M\}}]\geq 1$ for every $M$, so it cannot converge to zero as $M\to\infty$. A sufficient condition for uniform integrability is $$\sup_n E[|X_n|^{1+\epsilon}]<\infty$$ for some $\epsilon>0$. And while failing this sufficient condition is of course no proof of lack of uniform integrability, it is even more direct to see that the condition is not satisfied here, as $$E[|X_n|^{1+\epsilon}]=n^\epsilon,$$ which evidently does not have a finite supremum over $n$.
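Since the distribution of $z_n$ (value $n$ with probability $1/n$, otherwise $0$) is known exactly, the failure of uniform integrability can be checked by direct computation rather than simulation (a small sketch):

```python
from fractions import Fraction

def tail_expectation(n, M):
    """E[z_n 1_{z_n > M}] for z_n = n with probability 1/n, else 0."""
    return n * Fraction(1, n) if n > M else Fraction(0)

# For every truncation level M, the supremum over n of the tail
# expectation is 1: the tail mass never dies out uniformly in n.
for M in (10, 100, 1000):
    print(M, max(tail_expectation(n, M) for n in range(1, 5 * M)))

def moment(n, eps):
    """E[|z_n|^{1+eps}] = n^eps * (1/n) * n = n^eps, unbounded in n."""
    return n ** eps
```

So the sufficient moment condition fails as well: `moment(n, eps)` grows without bound in `n` for any fixed `eps > 0`.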
Econometrics text claims that convergence in distribution implies convergence in moments
Indeed, this is a known erratum of the book (see the errata PDF on its website): the specific lemma fails to state the moment-boundedness condition $$\exists \; \delta>0 : E(|z_n|^{s+\delta}) < M < \infty\;\; \forall n.$$
Econometrics text claims that convergence in distribution implies convergence in moments
I think that there is a bit of confusion in the question. There are two possible interpretations. (1) The probability space is $\mathbb P$ = Lebesgue measure on $\Omega=[0,1]$, and the random variable is $Z_n(\omega)=n$ when $0\le \omega\le 1/n$ and 0 otherwise. In this case the law of $Z_n$ is ${\mathbb P}^{Z_n}=(1-1/n)\delta_0+(1/n)\delta_n$, the expectation is 1 for every $n$, and $Z_n$ converges weakly a.k.a. in distribution to 0, so the expectations do not converge to the expectation of the limit. (2) The probability space is whatever, and the law ${\mathbb P}^{Z_n}$ has density with respect to Lebesgue measure $\rho(t)=n$ when $0\le t\le 1/n$ and 0 otherwise. In this case $Z_n$ converges weakly a.k.a. in distribution to zero, but also in $L^p$, so the expectation goes to zero as well. As a matter of fact, in general when $Z_n\to Z$ in distribution one only has $E[|Z|^p]\le \liminf_n E[|Z_n|^p]$, by Fatou's lemma via the Skorokhod representation.
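For interpretation (2) the $L^p$ convergence can be checked exactly: the $p$-th moment of a Uniform$(0,1/n)$ variable is $\frac{1}{(p+1)n^p}$, which goes to zero. A small exact-arithmetic sketch (Python, added for illustration):

```python
from fractions import Fraction

def moment_uniform(n, p):
    # Exact p-th moment of Uniform(0, 1/n):
    # integral of t^p * n over [0, 1/n] = 1 / ((p + 1) * n^p)
    return Fraction(1, (p + 1) * n**p)

for n in (1, 10, 100):
    # both the first and second moments shrink to 0 as n grows
    print(n, moment_uniform(n, 1), moment_uniform(n, 2))
```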
34,910
Difference-in-difference vs fixed effect models
The difference in differences (DiD) model is actually a type of fixed effects model, because the differencing gets rid of the individual fixed effects.$^1$ Regarding the pros and cons, it really depends on what you want to do. DiD is mainly for causal inference with observational data, whereas the fixed effects model's primary task is to get rid of the correlation between observed explanatory variables and the unobserved fixed effects. The key difference is that DiD requires the so-called common trends assumption. This assumption says that in the absence of the treatment, the outcomes of the treated and control group units would have evolved in a parallel way. It would look something like this, where the green line is the outcome in the treatment group. Before the treatment (red vertical line), treatment and control groups evolve in the same way, hence we would assume that they also evolve like this after the treatment in the absence of the treatment (dashed blue line). The "treatment effect" is then the difference between the green line and the dashed blue line. $^1$ If you have a model $y_{it} = \beta_1 post_t + \beta_2 treat_i + \delta (post_t\cdot treat_i) + c_i + \epsilon_{it}$ where $post_t=1$ in the treatment period and $treat_i=1$ for the treatment group, the DiD estimator is then $$ \begin{align} \delta = &E[y_{it}|post=1, treat=1] - E[y_{it}|post=0, treat=1] \\ -(&E[y_{it}|post=1, treat=0] - E[y_{it}|post=0, treat=0]) \end{align} $$ If you now substitute the regression equation for $y_{it}$ in here, you will see that all the $c_i$ cancel, so we get rid of the fixed effects.
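The double difference in the footnote can be computed on hypothetical group means to see what $\delta$ recovers (a Python sketch added for illustration; the numbers are made up):

```python
# Hypothetical group means, keyed by (post, treat) -> E[y | post, treat]
means = {
    (0, 0): 2.0,  # control, before
    (1, 0): 3.0,  # control, after: common trend adds +1.0
    (0, 1): 5.0,  # treated, before
    (1, 1): 7.5,  # treated, after: +1.0 common trend plus a 1.5 treatment effect
}

# delta = (after - before, treated) - (after - before, control)
did = (means[(1, 1)] - means[(0, 1)]) - (means[(1, 0)] - means[(0, 0)])
print(did)  # 1.5
```

The common trend (+1.0 in both groups) differences out, leaving only the treatment effect.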
34,911
Does Basu's Theorem require minimal sufficiency?
To realise that sufficiency is not enough, consider that, when $T(X)$ is a sufficient statistic, $(T(X),S(X))$ is also a sufficient statistic, including the case when $S(X)$ is an ancillary statistic. But $(T(X),S(X))$ and $S(X)$ cannot be independent, so the conclusion of the theorem would fail for this sufficient (but not minimal) statistic.
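To see both halves concretely: for an i.i.d. $N(\theta,1)$ sample, $T=\bar X$ is complete sufficient and $S=X_1-X_2$ is ancillary, so Basu gives $T\perp S$ (visible here as zero correlation), while the sufficient pair $(T,S)$ is trivially dependent on $S$. A small simulation sketch (Python, added for illustration):

```python
import random

random.seed(1)
theta = 3.0
reps, n = 50_000, 5

ts, ss = [], []
for _ in range(reps):
    x = [random.gauss(theta, 1.0) for _ in range(n)]
    ts.append(sum(x) / n)   # T = sample mean (complete sufficient)
    ss.append(x[0] - x[1])  # S = X1 - X2 (ancillary: its law does not depend on theta)

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    va = sum((ai - ma) ** 2 for ai in a) / len(a)
    vb = sum((bi - mb) ** 2 for bi in b) / len(b)
    return cov / (va * vb) ** 0.5

# Near zero, as Basu's theorem predicts for T and the ancillary S
print(round(corr(ts, ss), 3))
```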
34,912
Testing Slope significance for multiple factor levels in a linear model
It depends on what you mean by statistically different. Statistically different from each other? Looking at your plot, you've got four that look pretty clearly the same and then two that are much different. So, if you run:

library(lme4)
summary(lmer(diam ~ day*race + (1 + day|race), data = long))

You get, in part:

Fixed effects:
               Estimate Std. Error t value
(Intercept)    -0.71786    0.11409  -6.292
day             1.08902    0.11022   9.880
raceSP621       0.36143    0.16135   2.240
racePR9638      0.48036    0.16135   2.977
raceSP9885      1.46143    0.16135   9.057
raceSP9839      0.61643    0.16135   3.820
raceSP8345      0.78071    0.16135   4.839
day:raceSP621  -0.13982    0.15588  -0.897
day:racePR9638 -0.07982    0.15588  -0.512
day:raceSP9885 -0.81652    0.15588  -5.238
day:raceSP9839 -0.21429    0.15588  -1.375
day:raceSP8345 -0.53491    0.15588  -3.432

lmer doesn't give p-values, because calculating degrees of freedom for these models isn't entirely straightforward, but looking at the t-values, you can see that you've got big values (in absolute terms) for the day:raceSP9885 interaction and the day:raceSP8345 interaction. This suggests that the slopes for those two conditions are shallower than the slopes for the others. Technically, this is treating the SP516 group as the baseline, and testing everything else for differences from that. If you wanted to set a different group as the baseline, you could run:

long$race <- relevel(long$race, ref = 'SP9885')
summary(lmer(diam ~ day*race + (1 + day|race), data = long))

Truncated output:

Fixed effects:
               Estimate Std. Error t value
(Intercept)      0.7436     0.1176   6.324
day              0.2725     0.1086   2.508
raceSP516       -1.4614     0.1663  -8.789
raceSP621       -1.1000     0.1663  -6.615
racePR9638      -0.9811     0.1663  -5.900
raceSP9839      -0.8450     0.1663  -5.082
raceSP8345      -0.6807     0.1663  -4.094
day:raceSP516    0.8165     0.1536   5.314
day:raceSP621    0.6767     0.1536   4.404
day:racePR9638   0.7367     0.1536   4.795
day:raceSP9839   0.6022     0.1536   3.920
day:raceSP8345   0.2816     0.1536   1.833

If you're jonesin' for a p-value, you can see this faq.

EDIT: I'm using a multilevel model here because I'm assuming that the observations across days are not independent for each of the fungi races. Thus, you've got nested data.
34,913
Testing Slope significance for multiple factor levels in a linear model
Here's what I would do given the comments under your question:

## GENERATE SOME EXAMPLE DATA
set.seed(127)
d <- data.frame(
  plate = rep(c(1:10), 42, each = 2),
  strain = rep(c(letters[1:6]), 7, each = 20),
  day = rep(c(1:7), each = 120),
  diameter = rnorm(840, 6, 3)
)

require(ggplot2)
ggplot(d, aes(x = day, y = diameter)) +
  geom_point() +
  geom_smooth(method = "lm") +
  facet_wrap(~strain)

require(lme4)
fit <- lmer(diameter ~ strain * day + (1|strain/plate), data = d)
summary(fit)

Don't forget to check the model fit with respect to the assumption of equal variances:

plot(fit)
boxplot(residuals(fit) ~ d$strain + d$day)

The random effect (1|strain/plate) expands to (1|strain) + (1|strain:plate). If you averaged your plate measurements you can do (1|strain). If you want random slopes of day within strain you can do (day|strain/plate) or (day|strain), respectively. To get an ANOVA table:

require(afex)
mixed(diameter ~ strain * day + (1|strain/plate), data = d, method = 'LRT')

The rest depends on which of your factors are significant. See here for a potential follow-up if your interaction is significant.
34,914
Testing Slope significance for multiple factor levels in a linear model
With multiple experimental replicates for each strain, you could use ANOVA to test whether the slopes (growth rates) are statistically different. ANOVA will tell you if there are significant differences between the sample groups, not which strains are different. In order to do that you may want to use a multiple range test for comparing the means.

Edit: You can perform a linear regression for the growth of each strain to get the overall growth rate (the estimate for the day coefficient):

lm.SP516 <- lm(df$SP516 ~ df$day)
summary(lm.SP516)

Repeat for each strain, storing the values in a vector gr (growth rate). Create a vector with the names of the strains:

strain <- c(SP516, SP621, PR9638, SP9885, SP9839, SP8345)

Carry out the ANOVA:

dat <- data.frame(strain, gr)
fit <- aov(dat$gr ~ dat$strain)
summary(fit)
34,915
Testing Slope significance for multiple factor levels in a linear model
When you want to test whether the slope differs as a function of another variable, you include an interaction in the model. When the interacting variable is continuous, the slope changes linearly, but you can generalize this in a few ways. When the interacting variable is categorical, it allows different slopes in the different groups. See the Wikipedia page https://en.wikipedia.org/wiki/Interaction_(statistics) for more info.
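With dummy coding, the categorical-by-continuous interaction coefficient is simply the difference between the per-group slopes, which is what the t-tests in the other answers examine. A minimal sketch (Python with made-up data, added for illustration; the thread itself uses R):

```python
import random

random.seed(2)

def ols_slope(xs, ys):
    # Closed-form simple-regression slope: cov(x, y) / var(x)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

x = [i / 10 for i in range(100)]
y_a = [1.0 + 1.0 * xi + random.gauss(0, 0.1) for xi in x]  # group A: true slope 1.0
y_b = [1.0 + 2.5 * xi + random.gauss(0, 0.1) for xi in x]  # group B: true slope 2.5

slope_a, slope_b = ols_slope(x, y_a), ols_slope(x, y_b)
interaction = slope_b - slope_a  # what the x:group coefficient estimates
print(round(slope_a, 2), round(slope_b, 2), round(interaction, 2))
```

Testing the interaction coefficient against zero is therefore a test of equal slopes across the two groups.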
34,916
When the data set size is not a multiple of the mini-batch size, should the last mini-batch be smaller, or contain samples from other batches?
Same number; otherwise you're putting more weight on the samples in the final minibatch (unless you scale down the learning rate to match the smaller size). Adding random samples from the training set should be fine too (as long as your sampling pool includes the runt minibatch), since each sample has an equal chance of being seen twice in an epoch. Or just do a modulo and grab samples from the beginning again. In practice, it probably doesn't matter much.
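The modulo option can be sketched as follows (a hypothetical helper in plain Python, added for illustration):

```python
def wraparound_batches(data, batch_size):
    """Yield equal-size minibatches; the final batch wraps around to the
    start of the data so every batch has exactly batch_size samples."""
    n = len(data)
    for start in range(0, n, batch_size):
        batch = data[start:start + batch_size]
        if len(batch) < batch_size:
            # modulo wrap: pad the runt batch with samples from the beginning
            batch = batch + data[:batch_size - len(batch)]
        yield batch

batches = list(wraparound_batches(list(range(10)), 4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 0, 1]]
```

Shuffle the data each epoch so the wrapped samples are not always the same ones.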
34,917
Dirichlet conjugate update derivation
There is nothing wrong with this derivation \begin{align} p({\alpha}|{\theta},{\nu},\eta) &\propto p({\alpha},{\theta}|{\nu},\eta)\\ &= f({\theta}|{\alpha})p({\alpha}|{\nu},\eta)\\ &\propto \left[\frac{1}{B({\alpha})}\exp\left(\sum_{i=1}^{K}\alpha_{i}\ln(\theta_i)-\ln(\theta_i)\right)\right]\times\nonumber\\ &\phantom{{}\propto} \left[\frac{1}{B({\alpha})^{\eta}}\exp\left(\sum_{i=1}^{K}\alpha_{i}\nu_{i}\right) \right]\\ &= \frac{1}{B({\alpha})^{\eta+1}}\exp\left(\sum_{i=1}^{K}\alpha_{i}\ln(\theta_i) + \alpha_{i}\nu_{i}-\ln(\theta_i)\right)\\ \end{align} but the part $$\exp\left(-\sum_{i=1}^{K}\ln(\theta_i)\right)$$ does not matter, since it is a multiplicative constant (in $\alpha$) term. Therefore \begin{align} p({\alpha}|{\theta},{\nu},\eta) &\propto \frac{1}{B({\alpha})^{\eta+1}}\exp\left(\sum_{i=1}^{K}\alpha_{i}\{\ln(\theta_i) +\nu_{i}\}\right) \end{align} In conclusion, $$\eta^\text{post}=\eta^\text{prior}+1 \qquad \nu_{i}^\text{post}=\nu_{i}^\text{prior}+\ln(\theta_i) $$ is the correct update. The quoted post has a typo, obviously. For the follow-up question, I do not think the distribution has an intuitive interpretation.
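The update rule fits in a two-line function. A minimal sketch (Python, hypothetical helper name, added for illustration):

```python
import math

def update(nu, eta, theta):
    """One conjugate update for the prior on the Dirichlet's alpha:
    eta -> eta + 1 and nu_i -> nu_i + log(theta_i) for an observed theta."""
    return [v + math.log(t) for v, t in zip(nu, theta)], eta + 1

nu, eta = [0.0, 0.0, 0.0], 1.0
theta_obs = [0.2, 0.3, 0.5]  # one observed probability vector
nu, eta = update(nu, eta, theta_obs)
print(eta)  # 2.0
print([round(v, 3) for v in nu])
```

Each observed Dirichlet draw adds its log-coordinates to $\nu$ and bumps the count $\eta$ by one, matching the posterior derived above.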
34,918
Dirichlet conjugate update derivation
First of all, exponential family updates are confusing in anything but the natural parametrization, where the update rule is just addition. Stick to that parametrization. I would derive the conjugate prior in this way. The basic idea is that the natural parameter values $\eta'$ of your conjugate prior distribution $F'$ are sums of sufficient statistics of your original distribution $F$. Each observation adds to this vector. The sufficient statistics for the Dirichlet are $\log x_i$. Therefore, your update rule for its conjugate prior is to sum these up, along with an extra parameter that keeps track of how many observations you've summed. Intuitively, this count parameter is always a concentration parameter; the other parameters are most sensitive to small values of $x_i$. It makes sense that if your Dirichlet sample has a small value of $x_i$, then it probably doesn't have a large $\alpha_i$ compared to the other $\alpha_i$. The converse is maybe less true?
34,919
Dirichlet conjugate update derivation
I'm not sure if it makes any intuitive sense, but $\eta$ can be interpreted in a physical context as how far the Dirichlet distribution is from equilibrium, in a micro-canonical sense. See http://arxiv.org/pdf/cond-mat/0603120v1.pdf. This probably isn't the balls-and-urns sort of interpretation that most would find satisfying, but it is an interesting way to look at it!
34,920
Plotting to check Homoskedasticity assumption for repeated-measures ANOVA in R
I'm assuming that a model which was fitted using the Error() function within aov() won't work in plot() because you will get more than one error stratum from which you can choose. Now, according to the information here, one should use the proj() function, which will give you the residuals for each error stratum; these can then be used for diagnostic plots.

Edit 1 start

More information regarding multistratum models and the proj() function is given in Venables and Ripley, page 284 (but start from page 281): Residuals in multistratum analyses: Projections. In the second sentence they write (I highlighted in bold):

Thus fitted(oats.aov[[4]]) and resid(oats.aov[[4]]) are vectors of length 54 representing fitted values and residuals from the last stratum, based on 54 orthonormal linear functions of the original data vector. It is not possible to associate them uniquely with the plots of the original experiment. The function proj takes a fitted model object and finds the projections of the original data vector onto the subspaces defined by each line in the analysis of variance tables (including, for multistratum objects, the suppressed table with the grand mean only). The result is a list of matrices, one for each stratum, where the column names for each are the component names from the analysis of variance tables.

For your example that means:

ex.aov.proj <- proj(ex.aov)

# Check number of strata
summary(ex.aov.proj)

# Check for normality by using the last error stratum
qqnorm(ex.aov.proj[[9]][, "Residuals"])

# Check for heteroscedasticity by using the last error stratum
plot(ex.aov.proj[[9]][, "Residuals"])

However, this will also lead to plots which I cannot fully interpret (especially the second one). In their case, the last stratum was the Within stratum. Since your model cannot estimate this (presumably due to your error term), I am not sure if simply using your last stratum is valid. Hopefully someone else can clarify.

Edit 1 end

Edit 2 start

According to this source, checking residuals to assess normality and heteroscedasticity should be performed without the Error() function:

In order to check assumptions, you need to not use the error term. You can add the term without error, but the F tests are wrong. Assumption checking is OK, however.

This seems reasonable to me, but I hope someone else can clarify.

Edit 2 end

My alternative suggestion: First, I changed your dataset slightly and set a seed to make it reproducible (might be handy for some problems you have in the future):

# Set seed to make it reproducible
set.seed(12)

# I changed the names of your variables to make them easier to remember.
# I also deleted a few nested `rep()` commands. Have a look at the `each=` argument.
subj <- sort(factor(rep(1:20, 8)))
x1 <- rep(c('A', 'B'), 80)
x2 <- rep(c('A', 'B'), 20, each = 2)
x3 <- rep(c('A', 'B'), 10, each = 4)
outcome <- rnorm(80, 10, 2)
d3 <- data.frame(outcome, subj, x1, x2, x3)

Second, I used a linear mixed-effects model instead, since you have repeated measures and hence a random term you can use:

require(lme4)

# I specified `subj` as random term to account for the repeated measurements on subjects.
m.lmer <- lmer(outcome ~ x1*x2*x3 + (1|subj), data = d3)
summary(m.lmer)

# Check for heteroscedasticity
plot(m.lmer)
# or
boxplot(residuals(m.lmer) ~ d3$x1 + d3$x2 + d3$x3)

# Check for normality
qqnorm(residuals(m.lmer))

Using the afex package you can also get the fixed effects in ANOVA table format (you can also use the Anova() function from the car package as another option):

require(afex)
mixed(outcome ~ x1*x2*x3 + (1|subj), data = d3, method = "LRT")

Fitting 8 (g)lmer() models: [........]
    Effect df Chisq    p.value
1       x1  1  0.04        .84
2       x2  1  2.53        .11
3       x3  1  7.68 **    .006
4    x1:x2  1  8.34 **    .004
5    x1:x3  1 10.51 **    .001
6    x2:x3  1  0.31        .58
7 x1:x2:x3  1  0.12        .73

Check ?mixed for the various options you can choose. Also regarding mixed models, there is a lot of information here on Cross Validated.
34,921
Plotting to check Homoskedasticity assumption for repeated-measures ANOVA in R
Full disclaimer: I love using R for many different analyses, but I do not like doing ANOVAs in R. Question 1: In the analytic context of ANOVAs, I'm more familiar with evaluating this assumption via tests of homogeneity of variances, vs. plotting homo/heteroscedasticity and visually evaluating it. Though there are multiple tests of homogeneity of variance, the one I see the most is Levene's test. In R, it appears you can do this via the car package using the leveneTest function. Based on your data it would look like this: leveneTest(x ~ y*z*w, d). Note that I don't think you are able to specify the repeated-measures error structure in this function, and in all honesty, I'm not sure if/to what extent that matters for Levene's test. Comparing with other statistical analysis software, it seems that there is some variability in terms of how Levene's test in repeated-measures ANOVA is carried out. SPSS, for example, provides separate between-group Levene's tests for each level of your repeated-measure, whereas the leveneTest function provides a comprehensive test of all levels of all variables--other software might have alternative approaches too. Anyways, the SPSS approach also seems to ignore the dependency of the data by only evaluating the between-group homogeneity of variance. Question 2: If you're going to use a test of homogeneity of variance--Levene's or otherwise--it would probably be more informative to create simple bar-plots of the variances by each level of your variables (because that is what your homogeneity of variance test is explicitly evaluating). You could do this easily by estimating the variance of your outcome for every combination of your variables' levels, and then plotting them in base R, or using the ggplot2 package.
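To make concrete what a Levene-type test computes, here is a minimal sketch (in Python rather than R, purely for illustration, with synthetic data): the test is essentially a one-way ANOVA carried out on the absolute deviations of each observation from its group center. Centering on the group medians, as done below, is the Brown-Forsythe variant, which I believe car's leveneTest uses by default.

```python
import numpy as np

def levene_statistic(*groups, center=np.median):
    """Levene/Brown-Forsythe W: the one-way ANOVA F statistic computed on
    absolute deviations of each observation from its group center."""
    k = len(groups)
    z = [np.abs(np.asarray(g, dtype=float) - center(g)) for g in groups]
    n = np.array([len(zi) for zi in z])
    N = n.sum()
    zbar_i = np.array([zi.mean() for zi in z])          # per-group mean deviation
    zbar = np.concatenate(z).mean()                     # grand mean deviation
    between = (n * (zbar_i - zbar) ** 2).sum() / (k - 1)
    within = sum(((zi - m) ** 2).sum() for zi, m in zip(z, zbar_i)) / (N - k)
    return between / within

rng = np.random.default_rng(0)
a = rng.normal(0, 1, 200)   # variance 1
b = rng.normal(0, 1, 200)   # variance 1: statistic should be small
c = rng.normal(0, 5, 200)   # variance 25: statistic vs. a should be large
w_equal = levene_statistic(a, b)
w_unequal = levene_statistic(a, c)
print(w_equal, w_unequal)
```

This does not reproduce the repeated-measures error structure either, of course; it only shows what the homogeneity test itself is looking at.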
34,922
What is the name given to the set of numbers between quartiles
I don't think there is a universally accepted answer. Some people are happy also to call the groups quartiles; and are thus explicitly or implicitly optimistic that any ambiguity will not bite, or at least can be clarified quickly in context, e.g. by inspection of some suitable table, graph and/or algebraic definition. There is a long history of such usages, sometimes distinguished by nuance, e.g. that the quartiles (values) may be called the lower quartile, median and upper quartile, while the quartiles (bins) may be called the first, second, third and fourth quartiles. (Such practice reminds me of those who want means to be population quantities and averages to be sample quantities, which to me has never seemed very convincing, not least because I really want the freedom to refer to sample means.) Others would regard quarters as an alternative term. The verbal alternatives all appear to buy greater precision by being more long-winded (and to some tastes more pedantic), say quartile-based bins, classes, groups or intervals. In many ways the best solution is to avoid special words altogether: to be simply quantitative and talk about the first or lowest 25%, second 25%, and so on. [Grateful nod to @Glen_b for reminding me of this common practice.] Yet another alternative is to avoid any such terminology altogether, but this is not always possible. There isn't a universal notation for quantiles either: for example, there are many idiosyncratic notations for median, but none seems even common. The same terminological problem arises with any quantiles. EDIT 8 Oct 2020 In almost five years since this answer I've seen the bins, classes or intervals delimited by quartiles (quantiles generally) often called by the same names. The ambiguity between intervals and the levels that delimit them is unfortunate, but seemingly here to stay. In practice the ambiguity does not bite hard. 
The natural selection at work is that longer-winded terminology such as quartile-based bins evidently seems too fussy to find favour.
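The value/bin ambiguity is easy to make concrete in code. As an illustrative sketch (Python, with made-up data): the "quartiles" as values are the three cut points, while the "quartiles" as bins are the four groups those cut points delimit.

```python
import numpy as np

data = np.arange(1, 21)  # 1..20

# "Quartiles" as values: the three cut points Q1, Q2 (the median), Q3.
q1, q2, q3 = np.percentile(data, [25, 50, 75])

# "Quartiles" as bins: the four groups those cut points delimit.
# Each observation is assigned a group label 1..4.
groups = np.searchsorted([q1, q2, q3], data) + 1

print(q1, q2, q3)                # the three values
print(np.bincount(groups)[1:])   # the size of each of the four quarters
```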
34,923
Why doesn't PDF of Dirichlet Distribution seem to integrate to 1?
With two variables, you are defining a line segment in $\mathbb{R}^2$, as you pointed out. However, due to the simplex constraint, one of these two variables is redundant in terms of specifying the density, since there is a one-to-one relationship between $x_1$ and $x_2$. Therefore, the density is specified over $K-1$ free variables (i.e., in $\mathbb{R}$). This is actually pointed out in the first line of this section of the Wikipedia article, albeit very subtly. Therefore, your density function becomes: $$Dir_{1,1}(x_1,1-x_1)=\frac{\Gamma(2)}{\Gamma(1)^2}(x_1)^0(1-x_1)^0=1$$ Therefore, $$\int_0^1 Dir_{1,1}(x_1,1-x_1) dx_1 = 1$$ Response to OP Comment Due to the simplex constraints, the two-variable Dirichlet density is actually degenerate in $\mathbb{R}^2$, as shown by my construction above (it only requires one variable). While it is true it has a density of $1$, it does not have a density of $1$ on the line segment connecting $(1,0)$ with $(0,1)$. What the above construction shows is that the marginal density has a value of $1$. Your confusion comes from thinking of $x_2$ as a free variable, in which case the support of the Dirichlet on $\mathbb{R}^2$ would have a non-zero area. This intuition is fine in cases like the bivariate Gaussian, where the two variables are not perfectly correlated, but not in this case. We can formally derive this as follows: Let $L$ be some number in $[0,\sqrt{2}]$ specifying the distance from $(1,0)$ to $(0,1)$ along the connecting line segment. Thus, each value of $L$ identifies a unique $(x_1,x_2)$ pair. 
Using this notation, your assumption that the density is $1$ along this line boils down to: $$P(L \in [a,b])=b-a, \qquad [a,b] \subset [0,\sqrt{2}]$$ However, we can show this is not the case through a formal treatment of the joint density of $x_1,x_2$: $$P_L(L\in [a,b])=P_{X_1,X_2}[(x_1,x_2) \in A_{[a,b]}]$$ Where $A_{[a,b]}:= \{(u,v): u \in [1-\frac{b}{\sqrt{2}},1-\frac{a}{\sqrt{2}}], v = 1- u\}$ Now, let's calculate $P_L(L\in [a,b])$: $$P_L(L\in [a,b])= \int_{A_{[a,b]}} dP_{X_1,X_2}= \int_{A_{[a,b]}} dP_{X_1}dP_{X_2|X_1} =\int_{A_{[a,b]}} 1 \;dP_{X_1} = \int_{1-\frac{b}{\sqrt{2}}}^{1-\frac{a}{\sqrt{2}}}1\; du = $$ $$\left(1-\frac{a}{\sqrt{2}}\right) - \left(1-\frac{b}{\sqrt{2}}\right) = \frac{1}{\sqrt{2}}(b-a)$$ Where the third equality comes about because $dP_{X_2|X_1} = 1$ for $X_2=1-X_1$ (i.e., it's not a density, but a point probability mass at $1-X_1$). As you can see, we've recovered the $\frac{1}{\sqrt{2}}$ normalizing constant for the density along the line segment in $\mathbb{R}^2$. Effectively, this (degenerate) joint density is just a linear transformation of one of the two marginals (either one will work). This stretches the domain of the probability density from length $1$ to length $\sqrt{2}$, hence the density must decrease to compensate.
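A quick numerical sanity check of the two claims above (a sketch in Python; the grid sizes and tolerances are arbitrary choices): the marginal density of $Dir(1,1)$ in the single free coordinate integrates to $1$ over $[0,1]$, while along the segment from $(1,0)$ to $(0,1)$ the density with respect to arc length must be $1/\sqrt{2}$ in order to integrate to $1$ over $[0,\sqrt{2}]$.

```python
import numpy as np

def integrate(y, x):
    """Simple trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

# Marginal density of Dir(1,1) in the free coordinate x1 is
# Gamma(2)/Gamma(1)^2 * x1^0 * (1-x1)^0 = 1 on [0, 1].
x1 = np.linspace(0.0, 1.0, 1001)
area_marginal = integrate(np.ones_like(x1), x1)

# Along the segment from (1,0) to (0,1), arc length runs over [0, sqrt(2)],
# so the density w.r.t. arc length must be 1/sqrt(2) for total mass 1.
L = np.linspace(0.0, np.sqrt(2.0), 1001)
area_segment = integrate(np.full_like(L, 1.0 / np.sqrt(2.0)), L)

print(area_marginal, area_segment)  # both should be 1
```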
34,924
Optimal forecast window for timeseries
Using "window" to mean "how far to forecast into the future" is nonstandard usage. "Window" more frequently refers to a subsample of the past series, as in taking rolling means over a three-period window. You can see from the answers that this usage is confusing to experts. I recommend that you use the more common term "forecast horizon". As to your question: there is no "optimal" forecast horizon. You use the horizon you need for subsequent processes that use your forecast. For instance, I do forecasting for supermarkets. Sometimes I am interested in forecasts for the next five days (when generating replenishment orders from distribution centers, since each order typically only needs to cover three to five days' demand). Sometimes I am interested in two weeks (when doing some more fancy optimization on replenishment). Sometimes I am interested in three months (when planning promotional activities, price reductions and marketing, to notify suppliers). As @Aksakal notes, sometimes you have to satisfy regulations that prescribe a certain forecasting horizon. Demographic forecasting will typically use forecasting horizons on the order of decades. And climate forecasting can look ahead for centuries. In each case, you need forecasts for a certain horizon to support your decision-making today. (A two-year-ahead climate forecast won't help you in setting policy today.) And forecasting farther out than you need is useless. (No supermarket manager will be interested in a two-year-ahead forecast. The retailer's central strategy and planning department may well be.) So: decide based on what you will use the forecast for.
34,925
Optimal forecast window for timeseries
I don't think there is an optimal forecast horizon. You can talk about a maximum horizon, of course, which depends on the domain and the underlying process. Then again, there's no general rule of thumb. For instance, in some applications in finance such as market value-at-risk of a portfolio, regulators prescribe producing a 1- or 10-day-ahead 99% confidence VaR number based on 12 months of data. VaR is essentially a tail of the distribution of profits and losses (or returns). In this regard VaR is a forecast of sorts. In many economic applications, we have annual, quarterly, monthly and weekly seasonality. Obviously, you can't estimate annual and quarterly seasonality adjustments with one year of data. Also, we prefer to have data over at least one business cycle, i.e. include boom/bust periods, which implies many years of data. Hence, in these applications with one year of history your forecast horizon is limited to a couple of months, beyond which the forecast is questionable. A good analogy is extrapolation. Extrapolation becomes unreliable when you step farther outside the data points.
34,926
Optimal forecast window for timeseries
As @IrishStat has nicely put it, daily data over one year would be sufficient if it accommodates the trends, activity and seasonality. However, some trends (and/or seasonality) might not be captured even by the daily frequency. They might require data captured every minute to explain the effects. So, a rule of thumb would be: if the frequency of the captured data exhibits the trends and seasonality that can explain your problem statement (or objective), then that would be the ideal window. A quick search returned this piece of literature about Window Selection for Out-of-Sample Forecasting with Time-Varying Parameters by Atsushi et al.; they talk about a novel method for selecting the estimation window size for forecasting. Thought it might be of interest to you, so I attached it.
34,927
Optimal forecast window for timeseries
One year of daily data would be insufficient to estimate/identify annual repetitive activity. It would be sufficient to characterize day-of-the-week structure but even then holiday effects would distort them. As @stephan-kolassa pointed out the preferred term is forecast horizon not "window" but I for one did understand what you meant by window. In terms of optimal "window ahead" (forecast horizon) there is no "optimal" but there can be ever-increasing uncertainty which might be a mitigating factor when selecting an appropriate "window" or "horizon". Normally this is set by the objective/need of the forecasting activity. Certainly without incorporating weekly/monthly/holiday effects any forecast might be in jeopardy.
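The asymmetry between day-of-the-week structure and annual structure in one year of daily data can be sketched as follows (illustrative Python with synthetic data): each weekday is observed roughly 52 times, so weekday means are estimable, whereas the annual cycle is observed exactly once and cannot be separated from level or trend.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 365
weekday = np.arange(n_days) % 7
true_effect = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 5.0, 4.0])  # one mean shift per weekday
y = 10 + true_effect[weekday] + rng.normal(0, 0.5, n_days)

# One year gives ~52 observations per weekday: plenty to estimate these means.
est = np.array([y[weekday == d].mean() for d in range(7)])
print(np.round(est - 10, 2))

# By contrast, the same year contains exactly ONE annual cycle, so an annual
# seasonal effect cannot be estimated/identified from this sample alone.
```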
34,928
Marginalization of GP regression hyperparameters with Laplace approximation
You might be interested in the gpml_extensions repository here: https://github.com/rmgarnett/gpml_extensions/ There is code for computing the Hessian of the log likelihood wrt $\theta$ for both exact inference and the Laplace approximation for approximate GP inference. There is also convenience code for using these to find the Laplace approximation to the hyperparameter posterior $p(\theta \mid \mathbf{X}, \mathbf{y})$ (theta_posterior_laplace.m). Finally, this paper from UAI 2014 suggests a fast analytical approximation (called the MGP) to the posterior predictive distribution $p(f^\ast \mid \mathbf{x}^\ast, \mathbf{X}, \mathbf{y}) = \int p(f^\ast \mid \mathbf{x}^\ast, \mathbf{X}, \mathbf{y}, \theta) p(\theta \mid \mathbf{X}, \mathbf{y}) \, \mathrm{d}\theta$, under an arbitrary Gaussian approximation to the $\theta$ posterior: $p(\theta \mid \mathbf{X}, \mathbf{y}) \approx \mathcal{N}(\theta; \hat{\theta}, \mathbf{\Sigma})$. The Laplace approximation would be one way to derive such an approximation. There is an implementation of the MGP built atop gpml/gpml_extensions available from the same user, but I don't have the reputation to post that link.
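For intuition about what a Laplace approximation to the hyperparameter posterior $p(\theta \mid \mathbf{X}, \mathbf{y})$ involves, here is a minimal one-dimensional sketch (in Python, not related to the gpml_extensions code): find the mode $\hat{\theta}$ of the log posterior by Newton's method, then take the approximate posterior variance to be the negative inverse of the second derivative at the mode. The finite-difference step size and the toy log posterior below are arbitrary choices; gpml_extensions uses analytic derivatives instead.

```python
import numpy as np

def laplace_approx(log_post, theta0, h=1e-4, iters=50):
    """Laplace approximation in 1-D: Newton ascent to the mode, then a
    Gaussian with variance = -1 / (second derivative at the mode)."""
    t = theta0
    for _ in range(iters):
        g = (log_post(t + h) - log_post(t - h)) / (2 * h)                   # gradient
        H = (log_post(t + h) - 2 * log_post(t) + log_post(t - h)) / h ** 2  # 2nd derivative
        t = t - g / H                                                       # Newton step (H < 0 at a maximum)
    return t, -1.0 / H

# Sanity check on a case where the approximation is exact: a Gaussian
# log posterior N(2, 0.5^2) should give back mean 2 and variance 0.25.
log_post = lambda th: -0.5 * (th - 2.0) ** 2 / 0.25
mode, var = laplace_approx(log_post, theta0=0.0)
print(mode, var)
```

For a non-Gaussian posterior the result is only an approximation, of course, and is most accurate when the posterior is unimodal and roughly symmetric around the mode.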
34,929
Marginalization of GP regression hyperparameters with Laplace approximation
The best reference I could find online so far, and a very fitting one, is Ville Pietiläinen's MSc thesis: Approximations for Integration Over the Hyperparameters in Gaussian Processes (2010). Pietiläinen compares a point estimate approach (so-called MAP-II, since the latent variables are marginalized analytically given a Gaussian likelihood) to three marginalization methods: grid search (with a "smart" grid); central composite design (CCD); and quasi-random importance sampling with a Student-$t$ proposal distribution. Interestingly, for the case studies considered in the thesis, marginalization of the hyperparameters seems to provide some benefits with respect to optimization only for small datasets (e.g., no more than 50-100 training points). More generally, as Pietiläinen argues, the advantage of marginalization should emerge for test points in regions in which the input density is low (hence prediction uncertainty is high). The three different marginalization methods perform somewhat similarly, although the comparison in the thesis is not exhaustive. CCD is appealing since it requires far fewer function evaluations than the other methods. Another interesting reference is Philip Boyle's PhD thesis: Gaussian Processes for Regression and Optimisation (2007). In particular, chapter 8 is focussed on marginalization over hyperparameters. In conclusion, I will eventually try to implement the CCD method, although at this point it is not a priority as I am not expecting a major gain over optimization. I can probably better spend my time by playing with other factors that have a larger impact on the quality of the prediction (e.g., the choice of covariance function).
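As a toy illustration of the simplest of these schemes — weighting predictions over a grid of hyperparameters by their (unnormalized) posterior mass — here is a sketch in Python/NumPy. The data, kernel, fixed noise level, and grid placement are all made up; a real implementation would also mix the predictive variances and place the grid adaptively, as in the thesis:

```python
import numpy as np

# Toy data and a single test input (all made up for illustration)
rng = np.random.default_rng(4)
Xtr = rng.uniform(-3, 3, size=15)
ytr = np.sin(Xtr) + 0.1 * rng.standard_normal(15)
xstar = 0.5
noise = 0.1   # noise std held fixed; only the lengthscale is marginalized

def rbf(a, b, ell):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def log_ml_and_mean(ell):
    """Log marginal likelihood and predictive mean at xstar for one lengthscale."""
    K = rbf(Xtr, Xtr, ell) + noise ** 2 * np.eye(len(Xtr))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    log_ml = (-0.5 * ytr @ alpha - np.log(np.diag(L)).sum()
              - 0.5 * len(Xtr) * np.log(2 * np.pi))
    mean = (rbf(np.array([xstar]), Xtr, ell) @ alpha)[0]
    return log_ml, mean

# Evaluate on a grid and weight predictions by normalized posterior mass
# (flat prior over the grid, so weights are proportional to the marginal likelihood)
grid = np.linspace(0.3, 3.0, 30)
logw, means = map(np.array, zip(*[log_ml_and_mean(ell) for ell in grid]))
w = np.exp(logw - logw.max())
w /= w.sum()
marginal_mean = (w * means).sum()
```

If the marginal likelihood is sharply peaked, the weights concentrate on one grid point and this collapses to the MAP-II prediction — which is one way to see why marginalization pays off mainly for small datasets.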
34,930
Why is this time-series stationary?
I didn't replicate your analysis, but it's surely possible to reject the Null of the ADF test with a process like this (also note that these tests are notorious for having low statistical power). I would recommend fitting an AR(1) model to the data as a sanity check - this is basically what you are doing with the ADF test, but you can get a better idea of what sort of AR(1) coefficient is being estimated, and whether or not this coefficient is near unit-root (close to 1). Remember, ADF tests for a unit root, not for stationarity per se. A process is (covariance) stationary if it has time-invariant 1st and 2nd moments. So it looks like the variance may not be constant, while the process could be stationary in the mean. For example, stock market returns usually reject the ADF test, and we assume they are stationary, though we know squared returns tend to cluster. Note that ADF tests for the presence (or absence) of a unit root in the data-generating process through autoregressive procedures. If the test is rejecting the null, then it's more likely that your process has an AR(1) coefficient less than 1, i.e., the process is being estimated as mean reverting, so the best guess for next period's value is not necessarily the previous period's value, but rather a value that is shrunken towards the mean of the process. Statistical test results, including ADF, are not the be-all and end-all - they are tests and can never prove anything with 100% certainty - they just provide evidence for/against some hypothesis. Lastly, you could specify the mean of the process and model the variance as a GARCH process, but your limited sample size would be a concern when estimating such models. ADF Test: https://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller_test AR Models: https://en.wikipedia.org/wiki/Autoregressive_model (G)ARCH Processes: https://en.wikipedia.org/wiki/Autoregressive_conditional_heteroskedasticity
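To make the AR(1) sanity check concrete, here is a minimal sketch (in Python/NumPy rather than R; the coefficient, seed, and sample size are made up for illustration) that simulates a mean-reverting AR(1) process, estimates its coefficient by OLS, and computes a Dickey-Fuller-style t statistic for the unit-root null:

```python
import numpy as np

# Simulate a mean-reverting AR(1) process (phi = 0.7 < 1, so no unit root)
rng = np.random.default_rng(1)
phi_true, n = 0.7, 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

# OLS of y_t on y_{t-1}: roughly the regression behind a (no-constant) DF test
x, z = y[:-1], y[1:]
phi_hat = (x @ z) / (x @ x)
resid = z - phi_hat * x
se = np.sqrt((resid @ resid) / (len(x) - 1) / (x @ x))

# t statistic for the unit-root null H0: phi = 1
df_stat = (phi_hat - 1) / se
```

Here `phi_hat` lands near 0.7 and `df_stat` is strongly negative — well below the Dickey-Fuller critical value (roughly -1.95 at 5% for the no-constant case) — so the unit-root null is rejected even though nothing about the test speaks to constant variance.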
34,931
Why is this time-series stationary?
Take a look at the graph of the differences of your series here. It looks like volatility clustering with a stationary mean to me. I'd try something like GARCH or stochastic volatility. The other thing to note is that your jumps up appear to be faster than the drops down. This would suggest a threshold model, maybe nonlinear. Finally, if you draw a histogram, then clearly a normal distribution is not a good fit, so you may look for non-Gaussian errors. UPDATE: As in my comment, you may try testing your series for heteroscedasticity, because the ADF test will not catch it. There are tests such as Engle's ARCH test. It rejects homoscedasticity for both levels and differences.
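For reference, Engle's test is just an LM test: regress the squared residuals on their own lags and compare $nR^2$ against a $\chi^2(q)$ critical value under the null of no ARCH effects. A minimal sketch in Python/NumPy (the simulated ARCH(1) series, lag count, and parameters are made up for illustration):

```python
import numpy as np

def arch_lm(resid, lags=5):
    """Engle's ARCH LM test: regress squared residuals on their own lags;
    n * R^2 is approximately chi-squared with `lags` df under the null
    of no ARCH effects."""
    e2 = resid ** 2
    Y = e2[lags:]
    Xmat = np.column_stack([np.ones(len(Y))]
                           + [e2[lags - k:-k] for k in range(1, lags + 1)])
    beta, *_ = np.linalg.lstsq(Xmat, Y, rcond=None)
    fitted = Xmat @ beta
    r2 = 1 - ((Y - fitted) ** 2).sum() / ((Y - Y.mean()) ** 2).sum()
    return len(Y) * r2

# Quick check on a simulated ARCH(1) series with volatility clustering
rng = np.random.default_rng(2)
n = 2000
e = np.zeros(n)
for t in range(1, n):
    e[t] = np.sqrt(0.2 + 0.5 * e[t - 1] ** 2) * rng.standard_normal()

stat = arch_lm(e)   # compare against the chi2(5) critical value, ~11.07 at 5%
```

A series like this typically passes an ADF test (the mean is stationary) while producing a huge ARCH LM statistic — exactly the combination seen in the question.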
34,932
How to simulate a random slope model
@AdamO has done a good job identifying the specific error in your code. Let me address the question more generally. Here is how I simulate a linear mixed effects model: Mixed effects models assume each unit has random effects drawn from a multivariate normal distribution. (When a model is estimated, it is the variances and covariances of that multivariate normal that are being estimated for the random effects.) I start by specifying this distribution and generating (pseudo-)random values to serve as the random effects. It is often convenient to specify the variances as $1$, so that the covariance is the correlation between slopes and intercepts (which is easier for me to conceptualize).

library(MASS)
ni = 13  # number of subjects
RE = mvrnorm(ni, mu=c(0,0), Sigma=rbind(c(1.0, 0.3),
                                        c(0.3, 1.0) ))
colnames(RE) = c("ints","slopes");  t(round(RE,2))
#         [,1]  [,2]  [,3] [,4]  [,5]  [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13]
# ints    0.81 -0.52 -0.65 1.30 -0.29 -1.15 0.04 0.05 0.00 -0.29  2.40 -0.05 -0.47
# slopes -1.82  0.81 -0.70 1.28  0.82 -0.18 0.74 1.14 0.93 -0.20  0.04  0.68 -0.53

Next, I would generate my $X$ variables. I can't really follow the logic of your example, so I will use time as my only regressor.

nj = 10  # number of timepoints
data = data.frame(ID   = rep(1:ni, each=nj),
                  time = rep(1:nj, times=ni),
                  RE.i = rep(RE[,1], each=nj),
                  RE.s = rep(RE[,2], each=nj),
                  y    = NA )
head(data, 14)
#    ID time       RE.i       RE.s  y
# 1   1    1  0.8051709 -1.8152973 NA
# 2   1    2  0.8051709 -1.8152973 NA
# 3   1    3  0.8051709 -1.8152973 NA
# 4   1    4  0.8051709 -1.8152973 NA
# 5   1    5  0.8051709 -1.8152973 NA
# 6   1    6  0.8051709 -1.8152973 NA
# 7   1    7  0.8051709 -1.8152973 NA
# 8   1    8  0.8051709 -1.8152973 NA
# 9   1    9  0.8051709 -1.8152973 NA
# 10  1   10  0.8051709 -1.8152973 NA
# 11  2    1 -0.5174601  0.8135761 NA
# 12  2    2 -0.5174601  0.8135761 NA
# 13  2    3 -0.5174601  0.8135761 NA
# 14  2    4 -0.5174601  0.8135761 NA

Having generated your random effects and your regressors, you can specify the data generating process. Since you want some randomly missed timepoints, there is a level of additional complexity here. (Note that these data are missing completely at random; for more on simulating missing data, see: How to simulate the different types of missing data.)

y = with(data, (0 + RE.i) + (.3 + RE.s)*time + rnorm(n=ni*nj, mean=0, sd=1))
m = rbinom(n=ni*nj, size=1, prob=.1)
y[m==1] = NA
data$y = y
head(data, 14)
#    ID time       RE.i       RE.s           y
# 1   1    1  0.8051709 -1.8152973  -0.8659219
# 2   1    2  0.8051709 -1.8152973  -3.6961761
# 3   1    3  0.8051709 -1.8152973  -4.2188711
# 4   1    4  0.8051709 -1.8152973  -4.8380769
# 5   1    5  0.8051709 -1.8152973  -5.4126362
# 6   1    6  0.8051709 -1.8152973  -8.3894008
# 7   1    7  0.8051709 -1.8152973          NA
# 8   1    8  0.8051709 -1.8152973 -11.3710128
# 9   1    9  0.8051709 -1.8152973 -14.2095646
# 10  1   10  0.8051709 -1.8152973 -14.7627970
# 11  2    1 -0.5174601  0.8135761   0.2018260
# 12  2    2 -0.5174601  0.8135761          NA
# 13  2    3 -0.5174601  0.8135761   3.9232935
# 14  2    4 -0.5174601  0.8135761          NA

At this point, you can fit your model. I typically use the lme4 package.

library(lme4)
summary(lmer(y~time+(time|ID), data))
# Linear mixed model fit by REML ['lmerMod']
# Formula: y ~ time + (time | ID)
#    Data: data
#
# REML criterion at convergence: 378.3
#
# Scaled residuals:
#      Min       1Q   Median       3Q      Max
# -2.48530 -0.61824 -0.08551  0.59285  2.70687
#
# Random effects:
#  Groups   Name        Variance Std.Dev. Corr
#  ID       (Intercept) 0.9970   0.9985
#           time        0.8300   0.9110   -0.05
#  Residual             0.7594   0.8715
# Number of obs: 112, groups: ID, 13
#
# Fixed effects:
#             Estimate Std. Error t value
# (Intercept)  0.03499    0.33247   0.105
# time         0.53454    0.25442   2.101
#
# Correlation of Fixed Effects:
#      (Intr)
# time -0.100
34,933
How to simulate a random slope model
There is an obvious mistake in your simulation. However, in general, it's impossible to generate data so that a random slopes model is guaranteed to converge. The fix you need to apply is to timepoint. Timepoint is a factor. You should not be using a factor-level variable in a random slopes model; it is completely aliased with the random intercept. Try data$timepoint <- as.numeric(data$timepoint) and slope <- lme(measure ~ factor(timepoint), data=data, random=~timepoint|subject, na.action=na.exclude, method="ML") This converges instantly. It's also appropriately nested within other models. Make good use of the try() command to "capture" simulation output with convergence failures. You can explore interesting behavior with numerical solvers that are "at the boundary" of their capabilities.
34,934
CausalImpact on single time series
There are two ways of running an analysis with the CausalImpact R package. The documentation covers both. You can either let the package construct a suitable model automatically or you can specify a custom model. In the former case, the kind of model constructed by the package depends on your input data: If your data, as in your case, contains no predictor time series (i.e., the data argument is a univariate time series), then the model contains a local level component and, if specified in model.args, a seasonal component. It's generally not recommended to do this as the counterfactuals predicted by your model will be overly simplistic. They are not using any information from the post-period. Causal inference then becomes as hard as forecasting. Having said this, the model still provides you with prediction intervals, which you can use to assess whether the deviation of the time series in the post-period from its baseline is significant. If your data contains one or more predictor time series (i.e., the data argument has at least two columns), then, on top of the above, the model contains a regression component. In all practical cases I've seen it really is the predictor time series that make the model powerful as they allow you to compute much more plausible counterfactuals. I'd generally recommend adding at least a handful of predictor time series. You can find the implementation of the above in: https://github.com/google/CausalImpact/blob/master/R/impact_model.R
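To see why predictor series matter so much, here is a bare-bones sketch of the underlying idea in Python/NumPy. This is emphatically not the Bayesian structural time-series model that CausalImpact actually fits, and all numbers are made up — it only shows the regression intuition: fit the pre-period relationship between the response and a control series, extrapolate it into the post-period as the counterfactual, and read off the effect as the difference:

```python
import numpy as np

# Synthetic example: a control series x, a response y that tracks it,
# and a true lift of 5 units after the intervention (all numbers made up)
rng = np.random.default_rng(3)
n_pre, n_post = 80, 20
x = np.cumsum(rng.standard_normal(n_pre + n_post)) + 100.0
y = 2.0 * x + rng.standard_normal(n_pre + n_post)
y[n_pre:] += 5.0   # intervention effect in the post-period

# Fit the pre-period relationship y ~ a + b * x ...
A = np.column_stack([np.ones(n_pre), x[:n_pre]])
coef, *_ = np.linalg.lstsq(A, y[:n_pre], rcond=None)

# ... extrapolate it into the post-period as the counterfactual,
# and read off the average effect as actual minus counterfactual
y_hat = coef[0] + coef[1] * x[n_pre:]
effect = (y[n_pre:] - y_hat).mean()
```

Without the control series `x`, the counterfactual would have to come from extrapolating `y`'s own pre-period dynamics — which is exactly the "causal inference becomes as hard as forecasting" situation described above.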
34,935
Significance of individual coefficients vs Significance of both
Consider the Wald statistic, which resembles the familiar F-statistic $F$ (we use the default version that is not robust to heteroskedasticity): \begin{align*} W&=n(Rb-u)'\left[R\left[n\cdot s^2\cdot(X'X)^{-1}\right]R'\right]^{-1}(Rb-u)\notag\\ &=(Rb-u)'\left[R(X'X)^{-1}R'\right]^{-1}(Rb-u)/s^2\\ &=J\cdot F\notag, \end{align*} where $J$ gives the number of restrictions tested, with $H_0: R\beta=u$. If you want to test that neither of the variables enters the model, you simply take $R=I$, the identity matrix, and $u=(0,0)^T$. Let us now find the non-rejection region of the Wald test as a function of the parameter vector $\beta$ (i.e., the set of hypotheses you would not reject given a certain statistic computed from the data). $H_{0}$ is to be rejected at level $\alpha$ if $$W>\chi^{2}(J,1-\alpha),$$ the $1-\alpha$-quantile of the $\chi^{2}$-distribution with $J$ degrees of freedom. The acceptance region thus corresponds to the values $$\theta=R\beta$$ for which $H_0$ would not have been rejected at level $\alpha$, $$ \{\theta:W\leq\chi^{2}(J,1-\alpha)\} $$ To visualize, consider the case $J=2$. Then, $\chi^{2}(2,0.95)=5.99$ for $\alpha=0.05$ and $\chi^{2}(2,0.99)=9.21$ for $\alpha=0.01$. Write $T=Rb$ (with $b$ the OLS estimator for the two coefficients) and $z=\theta-T$. Further, to abbreviate the algebra, summarize the inverse matrix as $$ R\left[n\cdot s^2\cdot(X'X)^{-1}\right]R'=:V:=\left( \begin{array}{cc} 1 & r \\ r & a \\ \end{array} \right), $$ where $|r|<\sqrt{a}$ to ensure invertibility of $V$. We further have $$ V^{-1}=\frac{1}{a-r^2}\cdot\left( \begin{array}{cc} a & -r \\ -r & 1 \\ \end{array} \right), $$ and $W=z'V^{-1}z$ or $$ W=(az_1^2+z_2^2-2\,r\,z_1 z_2)/(a-r^2)\qquad\qquad(*) $$ We now consider $W$ as a function of the hypothesized coefficients $\theta$. 
The result for $T=0$ (so an OLS estimate of $(0,0)^T$), $r=0.6,\,a=1$ (see below for the code): The dashed lines indicate the acceptance regions $[-1.96,1.96]$ that you get if you test each coefficient separately. The rectangle formed by the two intervals gives you the region where neither t-test rejects. The ellipses give you the regions of pairs of parameter values for which you would not have rejected the null at either 5 or 1%. So, here is the answer: you see that there is a small lightblue region outside the rectangle but inside the 5%-acceptance region of the Wald test, i.e., a region where both individual t-tests would have rejected but the joint test would not. So, yes, there are counterexamples, which, as the example indicates, are however not expected to occur frequently. EDIT: To follow up on the point made by @whuber, here is the corresponding figure for the case $r=0$, i.e. no correlation.

r <- 0.6 # set to zero for uncorrelated case
a <- 1
W <- function(beta1,beta2,a,r) (a*beta1^2+beta2^2-2*r*beta1*beta2)/(a-r^2)
alpha <- 0.05
beta1 <- beta2 <- seq(-3,3,0.01)
z <- outer(beta1,beta2,W,a=a,r=r)
normcv <- qnorm(1-alpha/2)
contour(beta1,beta2,z,levels=qchisq(1-alpha,2))
abline(h=-normcv,lty=2)
abline(h=normcv,lty=2)
abline(v=-normcv,lty=2)
abline(v=normcv,lty=2)
z.nonrej <- z<=qchisq(1-alpha,2)
beta1.nw <- beta1 >= normcv
beta2.nw <- beta2 >= normcv
beta.nw <- outer(beta1.nw,beta2.nw,"+")==2
nw.nonrejection.Wald <- (z.nonrej + beta.nw)==2
ind.nw <- which(nw.nonrejection.Wald==T, arr.ind = T)
points(beta1[ind.nw[,1]],beta2[ind.nw[,2]], col="lightblue", cex=.1)
beta1.se <- beta1 <= -normcv
beta2.se <- beta2 <= -normcv
beta.se <- outer(beta1.se,beta2.se,"+")==2
se.nonrejection.Wald <- (z.nonrej + beta.se)==2
ind.se <- which(se.nonrejection.Wald==T, arr.ind = T)
points(beta1[ind.se[,1]],beta2[ind.se[,2]], col="lightblue", pch='.')

The figure shows that producing the counterexample indeed required allowing for correlation among the estimates. 
EDIT 2: In response to Kevin Kim's question in the comments: Interestingly, the fact that it is possible that neither individual test rejects but that the Wald test does when there is no correlation is not a general result for any significance level $\alpha$. When choosing a sufficiently high significance level $\alpha$, beyond roughly $\alpha\approx0.2151$, the ball covers the entire rectangle. Basically, consider the function of the circle of the acceptance border of the Wald test, i.e. $(*)$ for $a=1$ and $r=0$ set equal to $\chi^{2}(2,1-\alpha)$ and solving for $z_2$ (focusing on the positive quadrant w.l.o.g.): $$ z_2(z_1)=\sqrt{\chi^{2}(2,1-\alpha)-z_1^2} $$ We now seek the value for $\alpha$ for which the function evaluated at the normal quantile is just the normal quantile, or $$ \sqrt{\chi^{2}(2,1-\alpha)-\Phi^{-1}(1-\alpha/2)^2}=\Phi^{-1}(1-\alpha/2),$$ i.e., where the curve is equal to the corner of the rectangle. Doing this numerically in R gives

rootfunc <- function(alpha) sqrt(qchisq(1-alpha,2) - qnorm(1-alpha/2)^2) - qnorm(1-alpha/2)
uniroot(rootfunc, interval = c(0.00001,0.9999))

with solution
$root
[1] 0.2151346

So indeed, the ball seems to shrink more slowly than the rectangle.
Significance of individual coefficients vs Significance of both
Consider the Wald statistic, which resembles the familiar F-statistic $F$ (we use the default version that is not robust to heteroskedasticity): \begin{align*} W&=n(Rb-u)'\left[R\left[n\cdot s^2\cdot(
Significance of individual coefficients vs Significance of both Consider the Wald statistic, which resembles the familiar F-statistic $F$ (we use the default version that is not robust to heteroskedasticity): \begin{align*} W&=n(Rb-u)'\left[R\left[n\cdot s^2\cdot(X'X)^{-1}\right]R'\right]^{-1}(Rb-u)\notag\\ &=(Rb-u)'\left[R(X'X)^{-1}R'\right]^{-1}(Rb-u)/s^2\\ &=J\cdot F\notag, \end{align*} where $J$ gives the number of restrictions tested, with $H_0: R\beta=u$. If you want to test if neither of the variables enters the model, you simply take $R=I$, the identity matrix, and $u=(0,0)^T$. Let us now find the non-rejection region of the Wald test as a function of the parameter vector $\beta$ (so the set of hypotheses you would not reject given a certain statistic computed from the data). $H_{0}$ is to be rejected at level $\alpha$ if $$W>\chi^{2}(J,1-\alpha),$$ the $1-\alpha$-quantile the $\chi^{2}$-distribution with $J$ degrees of freedom. The acceptance region thus corresponds to the values $$\theta=R\beta$$ for which $H_0$ would not have been rejected at level $\alpha$, $$ \{\theta:W\leq\chi^{2}(J,1-\alpha)\} $$ To visualize, consider the case $J=2$. Then, $\chi^{2}(2,0.95)=5.99$ for $\alpha=0.05$ and $\chi^{2}(2,0.99)=9.21$ for $\alpha=0.01$. Write $T=Rb$ (with $b$ the OLS estimator for the two coefficients) and $z=\theta-T$. Further, to abbreviate the algebra, summarize the inverse matrix as $$ R\left[n\cdot s^2\cdot(X'X)^{-1}\right]R'=:V:=\left( \begin{array}{cc} 1 & r \\ r & a \\ \end{array} \right), $$ where $|r|<\sqrt{a}$ to ensure invertibility of $V$. We further have $$ V^{-1}=\frac{1}{a-r^2}\cdot\left( \begin{array}{cc} a & -r \\ -r & 1 \\ \end{array} \right), $$ and $W=z'V^{-1}z$ or $$ W=(az_1^2+z_2^2-2\,r\,z_1 z_2)/(a-r^2)\qquad\qquad(*) $$ We hence now consider $W$ as a function of the hypothesized coefficients $\theta$. 
The result for $T=0$ (so an OLS estimate of $(0,0)^T$), $r=0.6,\,a=1$ (see below for the code): The dashed lines indicate the acceptance regions $[-1.96,1.96]$ that you get if you test each coefficient separately. The rectangle formed by the two intervals gives you the region where neither t-test rejects. The ellipses give you the regions of pairs of parameter values for which you would not have rejected the null at either 5 or 1%. So, here is the answer: you see that there is a small lightblue region outside the rectangle but inside the 5%-acceptance region of the Wald test, i.e., a region where both individual t-tests would have rejected but the joint test would not. So, yes, there are counterexamples, which, as indicated by the example, are however not expected to occur frequently. EDIT: To follow up on the point made by @whuber, here is the corresponding figure for the case $r=0$, i.e. no correlation.

r <- 0.6   # set to zero for the uncorrelated case
a <- 1
W <- function(beta1, beta2, a, r) (a*beta1^2 + beta2^2 - 2*r*beta1*beta2)/(a - r^2)

alpha <- 0.05
beta1 <- beta2 <- seq(-3, 3, 0.01)
z <- outer(beta1, beta2, W, a = a, r = r)

normcv <- qnorm(1 - alpha/2)
contour(beta1, beta2, z, levels = qchisq(1 - alpha, 2))
abline(h = -normcv, lty = 2)
abline(h =  normcv, lty = 2)
abline(v = -normcv, lty = 2)
abline(v =  normcv, lty = 2)

z.nonrej <- z <= qchisq(1 - alpha, 2)

beta1.nw <- beta1 >= normcv
beta2.nw <- beta2 >= normcv
beta.nw <- outer(beta1.nw, beta2.nw, "+") == 2
nw.nonrejection.Wald <- (z.nonrej + beta.nw) == 2
ind.nw <- which(nw.nonrejection.Wald == TRUE, arr.ind = TRUE)
points(beta1[ind.nw[, 1]], beta2[ind.nw[, 2]], col = "lightblue", cex = .1)

beta1.se <- beta1 <= -normcv
beta2.se <- beta2 <= -normcv
beta.se <- outer(beta1.se, beta2.se, "+") == 2
se.nonrejection.Wald <- (z.nonrej + beta.se) == 2
ind.se <- which(se.nonrejection.Wald == TRUE, arr.ind = TRUE)
points(beta1[ind.se[, 1]], beta2[ind.se[, 2]], col = "lightblue", pch = '.')

The figure shows that producing the counterexample indeed required allowing for correlation among the estimates.
EDIT 2: In response to Kevin Kim's question in the comments: Interestingly, the fact that it is possible that neither individual test rejects but that the Wald test does when there is no correlation is not a general result for any significance level $\alpha$. For significance levels beyond roughly $\alpha\approx0.2151$, the ball covers the entire rectangle. Basically, consider the boundary circle of the acceptance region of the Wald test, i.e. $(*)$ for $a=1$ and $r=0$ set equal to $\chi^{2}(2,1-\alpha)$, and solve for $z_2$ (focusing on the positive quadrant w.l.o.g.): $$ z_2(z_1)=\sqrt{\chi^{2}(2,1-\alpha)-z_1^2} $$ We now seek the value of $\alpha$ for which the function evaluated at the normal quantile is just the normal quantile, or $$ \sqrt{\chi^{2}(2,1-\alpha)-\Phi^{-1}(1-\alpha/2)^2}=\Phi^{-1}(1-\alpha/2),$$ i.e., where the curve passes through the corner of the rectangle. Doing this numerically in R gives

rootfunc <- function(alpha) sqrt(qchisq(1 - alpha, 2) - qnorm(1 - alpha/2)^2) - qnorm(1 - alpha/2)
uniroot(rootfunc, interval = c(0.00001, 0.9999))

with solution

$root
[1] 0.2151346

So indeed, the ball seems to shrink more slowly than the rectangle.
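The same crossover level can be checked without R. A Python sketch (an illustration, not part of the original answer): for two degrees of freedom the chi-squared quantile has the closed form $\chi^{2}(2,1-\alpha)=-2\ln\alpha$, and the standard library supplies the inverse normal CDF, so a simple bisection reproduces the uniroot result:

```python
import math
from statistics import NormalDist

def rootfunc(alpha: float) -> float:
    # qchisq(1 - alpha, df = 2) has the closed form -2*log(alpha)
    chi2_q = -2.0 * math.log(alpha)
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)   # qnorm(1 - alpha/2)
    return math.sqrt(chi2_q - z * z) - z

# rootfunc is negative at small alpha and positive near 0.5, so bisect
lo, hi = 0.01, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if rootfunc(mid) < 0:
        lo = mid
    else:
        hi = mid

print(round(0.5 * (lo + hi), 7))  # close to 0.2151346, matching R's uniroot
```

The closed-form quantile avoids any dependence on a statistics package for the chi-squared distribution.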
34,936
What's the correct way to visualize discrete variables?
There's not 'one correct way'; there are some good ways. The obvious one to my mind would be a Cleveland dot-chart; it's for displaying numeric data on a factor. Some people would use a bar chart for this purpose instead. If you have a useful classification (such as by region), you'd split by that classification. With GDPs (whether raw or per capita), the variable covers several orders of magnitude, so it might make a great deal more sense to look on the log-scale (this also obviates any concerns some people might have with 0 not being on the scale above). There are several uses in such a plot. 1. explicit comparison between countries (is A larger than B?). 2. extracting a data value (what is A's GDP?). The Cleveland dotchart (or Cleveland dot plot) is based on research[1] into the kinds of comparisons that people are good at or less good at. We're very good at comparison of position along common scales, slightly less good with relative lengths and quite bad at relative areas or angles. In respect of 1. above this comparison is between the values represented by the points (which point is further to the right). In 2. this comparison is between the point and the parallel axis, both comparisons we're good at. The plot eliminates almost all ink that doesn't serve to directly aid these comparisons. Quick, which is bigger, lemon or lime? Very thin bars would make for a very similar sort of plot to a Cleveland dot-chart and can sometimes do well (particularly when both plots include 0), but dotcharts have an advantage when you want to plot several numbers for each country, since they can be represented by different symbols. This advantage is even larger if you're only able to use black and white. You also can't really use a log-scale on bar charts (where does the bottom of the bar start and what does the bar-length represent?) and so it's less suitable for data that spans several orders of magnitude. [1]: Cleveland, W.S. and McGill, R. (1984), "Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods," Journal of the American Statistical Association, 79:387 (Sep.), 531-554.
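The two key encodings above (position along a common scale, and a log scale for data spanning orders of magnitude) can be sketched even in plain text. The values below are hypothetical, just to illustrate the mapping:

```python
import math

# hypothetical GDP figures (billions), spanning several orders of magnitude
gdp = {"A-land": 25000, "B-land": 4000, "C-land": 3400, "D-land": 110}

lo = min(math.log10(v) for v in gdp.values())
hi = max(math.log10(v) for v in gdp.values())
width = 40

for country, value in sorted(gdp.items(), key=lambda kv: -kv[1]):
    # position on a common log10 scale -- the comparison readers are best at
    pos = round((math.log10(value) - lo) / (hi - lo) * (width - 1))
    row = ["."] * width
    row[pos] = "o"
    print(f"{country:<8}{''.join(row)}  {value}")
```

On a linear scale the smallest value would be indistinguishable from zero; on the log scale all four dots remain separable.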
34,937
How to compute the R-squared for a transformed response variable?
It is not appropriate to compare linear regression models in terms of their summary/fit statistics (RMSE and $R^2$) when some models' dependent variables were transformed so that units changed, as the summary statistics are not comparable. Consider the following nice explanation by Maddala (1988, p. 177): When comparing the linear with the log-linear forms, we cannot compare the R-squared's because R-squared is the ratio of explained variance to the total variance and the variances of y and log y are different. Comparing R-squared's in this case is like comparing two individuals, A and B, where A eats 65% of a carrot cake and B eats 70% of a strawberry cake. The comparison does not make sense because there are two different cakes. In order to compensate for the scale change, traditionally, people revert the log transformation back to the original scale by using the so-called back-transformation method (see this page for more details, explanation and examples). In regard to information theory-based model statistics, such as AIC/BIC, in general it is not possible to use them to compare non-transformed and transformed models (see this and this). It is, however, possible to compare AIC with a modified AIC (not sure about BIC), as discussed here and here. One additional (and final) note is that it is usually preferred to use adjusted $R^2$ instead of the standard one. Please see my relevant answer and links provided there. References Maddala, G. S. (1988). Introduction to econometrics. New York: Macmillan Publishing
34,938
Skewed Distributions for Logistic Regression
The date as a predictor may be failing because it is highly collinear with the constant. If you enter it as a year, its variability is about 10/2000 = 0.005 (in fact less, because most of your data are in the more recent years), and when squared it becomes about 4e-6. When inverting a matrix with eigenvalues 1 and 4e-6, the package you use may decide the latter is a zero in finite-precision arithmetic, and throw this error message. The solution is simple -- center your data, at least approximately, by subtracting 2000 from the year.
34,939
Skewed Distributions for Logistic Regression
Restricted cubic splines would be expected to work well here. You are worried slightly too much about marginal distributions of predictors. Length of stay is in the wrong part of the causal pathway to use it as a predictor of death. And watch out for other operation required. I don't see much value in univariable analyses.
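For reference, the restricted cubic spline basis is simple enough to write out directly. This Python sketch uses one common parameterization (due to Harrell, shown here unnormalized) and is an illustration, not any package's implementation; by construction the fitted curve is linear beyond the boundary knots:

```python
def rcs_basis(x, knots):
    """Return [x, X_1(x), ..., X_{k-2}(x)] for k knots.
    Each X_j is a truncated cubic with correction terms chosen so that
    the cubic and quadratic pieces cancel in the tails."""
    tk, tk1 = knots[-1], knots[-2]
    cube = lambda u: max(u, 0.0) ** 3      # (u)_+^3
    cols = [x]
    for tj in knots[:-2]:
        cols.append(cube(x - tj)
                    - cube(x - tk1) * (tk - tj) / (tk - tk1)
                    + cube(x - tk) * (tk1 - tj) / (tk - tk1))
    return cols

knots = [1.0, 3.0, 6.0, 9.0]
for x in (0.5, 2.0, 5.0, 8.0, 12.0):
    print(x, [round(v, 3) for v in rcs_basis(x, knots)])
```

With $k$ knots this spends only $k-1$ regression degrees of freedom per continuous predictor, which is why it works well for flexible but stable logistic-regression fits.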
34,940
Cronbach's alpha in R
Cronbach's $\alpha$ is a measure of internal consistency of a questionnaire or test. It says how correlated the items are that are included in the scale. This is the reason why you need preferably much more than two items: you cannot correlate one item with itself, and if you had only two items you could just use a "traditional" correlation between them. So you need at least a few of them. You also ask why a matrix or data.frame is needed instead of a list. This is a broader topic about R's data types; however, questionnaire data used for calculating Cronbach's $\alpha$ consist of several items, say $k$, and responses to those items by a group of $n$ individuals, so an $n\times k$ matrix is a natural way of storing this kind of data. If you have this kind of data saved as a list (e.g., $k$ vectors of length $n$), you can always transform the list into a data.frame or a matrix. What you have to remember with Cronbach's $\alpha$ is that it is a correlation measure, so you would get a perfect $\alpha$ for a scale consisting of several identical items, while this would be a very poor questionnaire. So the general idea that correlated items are the best ones has its flaws, and you have to keep that in mind. That is one of the reasons why you should rather not use $\alpha$ alone for psychometric analysis, but combine it with other methods, e.g. Item Response Theory based methods, which are among the most popular nowadays (check e.g. the paper Comparison of Classical Test Theory and Item Response Theory and Their Applications to Test Development by Ronald K. Hambleton and Russell W. Jones). Check out also the alpha documentation or the tutorial on personality-project.org (the psych library developer's site, with a great deal of information about psychometrics and R).
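Under the hood the computation is just variances, so a small Python sketch (hypothetical 5-respondent, 3-item data) makes the "perfect $\alpha$ from duplicated items" caveat concrete:

```python
from statistics import variance

def cronbach_alpha(items):
    """items: k columns, one list of n responses per item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of sum scores)."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]   # each respondent's sum score
    return k / (k - 1) * (1 - sum(variance(c) for c in items) / variance(totals))

# hypothetical questionnaire: 3 items answered by 5 respondents
items = [[3, 4, 2, 5, 1],
         [2, 4, 3, 5, 2],
         [3, 5, 2, 4, 1]]
print(round(cronbach_alpha(items), 3))            # high: the items covary

# three identical copies of one item give alpha = 1 -- yet a useless scale
print(round(cronbach_alpha([[3, 4, 2, 5, 1]] * 3), 6))  # 1.0
```

The second result is exactly the degenerate case described above: perfect internal consistency with zero added information per item.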
34,941
p-value correction for multiple t-tests?
You absolutely do want to apply a correction. The key idea is significance by chance: as you increase the number of comparisons, you increase the number that will be significant by chance alone. For example, take the generic case of doing 100 comparisons at a significance threshold of 0.05. A p-value of 0.05 means there is a 5% chance of getting that result when the null hypothesis is true. Therefore, if you do these 100 comparisons, you would expect to find 5 genes significant just by random chance. To avoid these false positives (Type I errors) we 'correct' the p-values, thereby making the test more conservative. The choice of correction can vary too. Bonferroni is a common correction, but if you have 1000s of genes it will be exceedingly unlikely you will find anything significant, because it is so conservative. In that case, you may use an FDR (False Discovery Rate) correction. There is no absolute answer, so you need to explore the possibilities, make the best choice, and of course report what correction you applied.

EDIT Regarding your comments below, I thought an example could help demonstrate the concept. Using R, I generate completely random values for 250 genes with two treatments (A and B):

set.seed(8)
df <- data.frame(expression = runif(1000),
                 gene = rep(paste("gene", seq(250)), 4),
                 treatment = rep(c("A", "A", "B", "B"), each = 250))

I then split the data by gene and run a t-test comparing the two groups:

out <- do.call("rbind", lapply(split(df, df$gene),
               function(x) t.test(expression ~ treatment, x)$p.value))

Now, given that this is completely random data, there shouldn't be any significant differences, and yet when I count, there are 9 significant genes!

length(which(out < 0.05))
[1] 9

Avoiding mistakes like these is the point of making these corrections. Hopefully this helps clarify things for you.
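Both corrections mentioned above are easy to state precisely. Here is a plain-Python sketch (the p-values are made up for illustration) of Bonferroni and the Benjamini-Hochberg FDR step-up procedure:

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H_i when p_i * m <= alpha (controls the family-wise error rate)."""
    m = len(pvals)
    return [p * m <= alpha for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up: reject the k smallest p-values, where k is the largest
    rank i with p_(i) <= (i/m) * alpha (controls the false discovery rate)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
print(sum(bonferroni(pvals)))           # 1 rejection: the most conservative
print(sum(benjamini_hochberg(pvals)))   # 2 rejections: less conservative
```

With thousands of genes the gap between the two grows, which is why FDR control is the usual choice in expression studies.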
34,942
p-value correction for multiple t-tests?
You say that no comparisons are being conducted because genes are not being compared to each other. However, each t-test is still a comparison. In fact, that's what a t-test is--a comparison of two means. In your case, each comparison is between the healthy group and the unhealthy group, rather than between gene A and gene B, but it is a comparison nonetheless. This confusion can be avoided by substituting the synonym "multiple testing" for "multiple comparisons."
34,943
Clarification: The covariance of intercept and slope in simple linear regression?
To answer your question as asked: if you adopt the frequentist view of statistics, then $\beta_0$ and $\beta_1$ are not random variables and thus have no covariance. They are fixed (and unobserved) values that describe the true relationship between your $Y$ and $X$ variables. The covariance between them is undefined in the sense that the covariance between $4.5$ and $\pi$ is undefined; they're not random variables, they're just numbers. If you adopt the Bayesian view of statistics, and you thus view $\beta_0$ and $\beta_1$ as random variables themselves, I imagine you could model them as covarying somehow. Maybe someone can elaborate on this in the comments, as I'm not really sure of this. However, I suspect you're asking something different; namely, what is the covariance between the estimates of these coefficients (sometimes called $\hat{\beta_0}$ and $\hat{\beta_1}$, sometimes called $b_0$ and $b_1$). This is answered very well by the top answer here. If you look at the off-diagonal elements of the variance-covariance matrix of the estimates (equation 6.78a in the textbook the OP posted), you will see $$\mathrm{Cov}(\hat{\beta_0},\hat{\beta_1}) = \frac{-\bar{X}\sigma^2}{\sum{(X_i-\bar{X})^2}} = -\bar{X}\mathrm{Var}(\hat{\beta_1})$$ where $\sigma^2$ is the variance of the error terms. To answer your question of what range of values it can take on, let's look at the equation. It shows that as the spread of $X$ values increases, the magnitude of the covariance decreases (i.e., as the denominator gets larger, the expression gets smaller in magnitude). As the error term variance $\sigma^2$ increases, so does the magnitude of the covariance. Additionally, the sign of the covariance is the opposite of the sign of $\bar{X}$. So it can take any value in $(-\infty, 0)$ if $\bar{X}>0$ and any value in $(0, \infty)$ if $\bar{X}<0$.
The magnitude of the value it takes on depends on the spread of your $X$ values and the variance of your error terms. Edit: I added $-\bar{X}\mathrm{Var}(\hat{\beta_1})$ to the equation to further help with intuition. The variance of our slope estimate, $\mathrm{Var}(\hat{\beta_1})$, is a measure of how precise that estimate is; in a perfect world, we want this variance to be small so that our estimate is very precise. In light of this, I don't really think that the covariance between the intercept and slope estimates is a very useful or enlightening concept on its own. As far as I can tell, it is the negative of the product of two more easy-to-interpret values: $\bar{X}$ and $\mathrm{Var}(\hat{\beta_1})$.
34,944
How do I force the L-BFGS-B to not stop early? Projected gradient is zero
I don't know much about the SciPy wrapper, but the underlying L-BFGS-B code gives several options. The help file for the R interface lists several of them. Assuming your gradient is just small but isn't actually zero, you have several options that will either increase the size of the gradient or decrease the size that the software will tolerate. You can rescale the parameters so that a small difference in the parameters produces a more substantial change in your objective function. The R wrapper has a way to do this automatically, but I don't see one in the SciPy one. You could also do it manually. You can rescale your objective function (e.g. by multiplying it by some constant) so that the differences and derivatives are larger (e.g. greater than $10^{-5}$). You can adjust the method's tolerances. The tolerance limit you're bumping up against is for pgtol, which is $10^{-5}$ by default. The documentation for L-BFGS-B seems to suggest (at the end of Section 3) that you could safely bring this value down to the "square root of machine precision", which is about $10^{-8}$ on most machines. The other tolerance limits (absolute and relative) might also become important once you relax pgtol, if your gradients are very small. Link to the L-BFGS-B documentation (postscript format) Link to the R documentation for L-BFGS-B
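To see why rescaling the objective interacts with pgtol, consider a toy quadratic $f(x)=c\,(x-3)^2$: the gradient $2c(x-3)$ already satisfies $|f'(x)|\le$ pgtol whenever $|x-3|<\text{pgtol}/(2c)$, so a tiny scale factor $c$ lets the optimizer declare convergence far from the minimizer. A Python sketch of that back-of-the-envelope calculation:

```python
# radius around the minimizer x = 3 within which |f'(x)| <= pgtol,
# for f(x) = c*(x - 3)**2 and several objective scalings c
pgtol = 1e-5
for c in (1.0, 1e-4, 1e-8):
    print(c, pgtol / (2 * c))   # a tiny c lets the optimizer stop far away
```

Multiplying the objective by a large constant shrinks this radius proportionally, which is exactly the effect of option 2 above.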
34,945
How do I force the L-BFGS-B to not stop early? Projected gradient is zero
Scipy's BFGS solver uses a step size of epsilon = 1e-8 to calculate the gradient (meaning that it adds 1e-8 to each of the parameters in turn, to see how much the objective function changes), which is quite small for some applications. You can scale this up as much as you want based on the scale of the problem - for my problem I even used epsilon = 1.
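In the scipy.optimize.minimize interface this step size is the eps option; a sketch with a deliberately enlarged step (objective and value chosen for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def f(w):
    return np.sum((w - 3.0) ** 2)

# no jac supplied, so the gradient is approximated by finite differences
# with step size eps (default 1e-8); here it is scaled up to 1e-4
res = minimize(f, x0=np.zeros(2), method="L-BFGS-B",
               options={"eps": 1e-4})
```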
34,946
Can multinomial distribution be simulated by a sequence of binomial draws?
This is correct: if $$ (X_1,\ldots,X_K)\sim\mathcal{M}(N;p_1,\ldots,p_K) $$ then \begin{align*} X_1 &\sim\mathcal{B}(N,p_1)\\ X_2|X_1 &\sim\mathcal{B}\{N-X_1,p_2/(1-p_1)\}\\ &\vdots\\ X_{K-1}|X_1,\ldots,X_{K-2} &\sim \mathcal{B}\big\{N-X_1-\cdots-X_{K-2},\frac{p_{K-1}}{\big(1 - \sum_{j<K-1}p_j\big)} \big\} \end{align*} as can be shown by equating $$ \mathbb{P}((X_1,\ldots,X_K)=(x_1,\ldots,x_K)) $$ and $$ \mathbb{P}(X_1=x_1)\mathbb{P}(X_2=x_2|X_1=x_1)\cdots\mathbb{P}(X_{K-1}=x_{K-1}|X_1=x_1,\ldots,X_{K-2}=x_{K-2}) $$
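A quick numerical check of this factorization for $K=3$ (the parameter values are made up for illustration), drawing each coordinate from the conditional binomials above:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 100, [0.2, 0.3, 0.5]

def draw():
    # X1 ~ B(N, p1); X2 | X1 ~ B(N - X1, p2 / (1 - p1)); X3 is the remainder
    x1 = rng.binomial(N, p[0])
    x2 = rng.binomial(N - x1, p[1] / (1 - p[0]))
    return x1, x2, N - x1 - x2

samples = np.array([draw() for _ in range(10000)])
# every draw uses all N trials, and the marginal means approach N * p
print(samples.mean(axis=0))
```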
34,947
Can multinomial distribution be simulated by a sequence of binomial draws?
A simple Python program to sample a multinomial distribution according to this idea:

import numpy as np

def sample_multinom(pVec, k):
    n = len(pVec)
    if n == 1:
        # only one category left: all k remaining trials fall into it
        return (k,)
    # first coordinate is a plain binomial draw
    k1 = np.random.binomial(k, pVec[0])
    if n == 2:
        return (k1, k - k1)
    # remaining coordinates: recurse on the renormalized probabilities
    return (k1, *sample_multinom([x / (1 - pVec[0]) for x in pVec[1:]], k - k1))
34,948
Ranking two models based on ROC-AUC and PR-AUC
Usually you would obtain the same conclusion based on both measures. It is possible to get conflicting conclusions if the performance curves (both PR and ROC) of the models cross, e.g. one model is better at low recall while the other is better at high recall. Relying on summaries like AUC is good, but don't neglect the actual curves. Your result implies that neither model is better than the other over the full operating range. If you still want to make a statement about which is better, you will need to be more specific about your priorities: do you want high recall, high precision, high specificity? (instead of asking which is best in any setting, e.g. the full operating range) ROC-AUC is high, then PR-AUC is also high. Yes, but note that high is relative. Depending on the class balance, a PR-AUC of $20\%$ can already be excellent. So if the ROC curve of method-1 dominates, so should method-1's PR curve. To quote the paper of Davis and Goadrich "a curve dominates in ROC space if and only if it dominates in PR space". This means that if you have one model A whose PR/ROC curve is entirely above another model B's PR/ROC curve, the ROC/PR curve for A will also be above that of B in the entire range.
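For intuition, both summaries can be computed directly from a vector of scores. A NumPy-only sketch (in practice one would usually reach for a metrics library; the data at the bottom are made up):

```python
import numpy as np

def roc_auc(y, s):
    # ROC-AUC = P(score of a random positive exceeds score of a random negative),
    # with ties counting one half
    pos, neg = s[y == 1], s[y == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

def pr_auc(y, s):
    # average precision: mean precision taken at the rank of each positive
    order = np.argsort(-s, kind="stable")
    y_sorted = y[order]
    precision_at_k = np.cumsum(y_sorted) / np.arange(1, len(y) + 1)
    return precision_at_k[y_sorted == 1].mean()

y = np.array([1, 0, 1, 0, 0, 1])
s = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2])
print(roc_auc(y, s), pr_auc(y, s))
```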
34,949
Ranking two models based on ROC-AUC and PR-AUC
ROC-AUC and PR-AUC are both AUCs, confined by two axes with all thresholds. ROC-AUC: one axis is True Positive Rate (TPR), i.e., true positives / all positives; the other axis is False Positive Rate (FPR), i.e., false positives / all negatives. PR-AUC: one axis is Recall (which is another name for TPR); the other axis is Precision, i.e., true positives / (true positives + false positives). We can see they both have TPR as one axis; their difference comes from the other axis. Basically, PR-AUC ignores the true negatives and focuses on true positives (both axes have true positives in the numerator, divided by different denominators). In contrast, ROC-AUC takes true negatives into consideration through the other axis, FPR = false positives / (false positives + true negatives). Thus, when we observe ROC-AUC increase while PR-AUC decreases, the model improves in FPR while deteriorating in Precision.
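These definitions in terms of a single confusion matrix at one threshold (the counts are made up for illustration):

```python
# hypothetical counts at one decision threshold
tp, fp, fn, tn = 40, 10, 20, 130

tpr = tp / (tp + fn)          # recall: true positives / all positives
fpr = fp / (fp + tn)          # false positives / all negatives
precision = tp / (tp + fp)    # true positives / predicted positives
print(tpr, fpr, precision)
```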
34,950
Average and standard deviation of timestamps (time wraps around at midnight)
Let's use the simplification you suggest: only use the data from positive readings and disregard the value of the reading, so we are left with a single set of circular data. You can use the circular dispersion, as whuber suggested, possibly multiplied by some constant to determine how much of the data should be seen as an outlier. A good text that is slightly easier to understand than the Wikipedia page would be Statistical Analysis of Circular Data, by N.I. Fisher (1995). I'll give some more straightforward formulas than the Wiki page, and give some sample code. The dispersion can be calculated as follows (due to Fisher, p. 32-34): Denote the data by $\boldsymbol\theta = \{\theta_1, \dots, \theta_n\}.$ An estimate $\hat\mu$ of the mean direction can be calculated with $S=\sum_{i=1}^{n}\sin(\theta_i)$, $C=\sum_{i=1}^{n}\cos(\theta_i)$, $\hat\mu = \text{atan2}(S, C)$. (See http://en.wikipedia.org/wiki/Atan2) Calculate $\bar{R} = \frac{\sqrt{S^2 + C^2}}{n}$. Calculate the dispersion as suggested by whuber. I'm not sure why, but Wikipedia's definition seems to differ slightly from Fisher's. I will use Fisher's: $\hat\delta = \frac{1 - \left[ (1/n) \sum_{i=1}^{n} \cos 2 (\theta_i - \hat\mu) \right]}{2\bar{R}^2}.$ Then, choose some constant $c$ (1 is probably fine, but you may fine-tune). The interval is then given by $\left[\hat\mu - c \hat\delta, \hat\mu + c \hat\delta \right]$. 
I know you want to avoid R, but just to show how to calculate this in code, here is some basic R code anyway, which also generates a plot:

n <- 200
th <- runif(n, 0.5 * pi, 1.5 * pi)
plot(cos(th), sin(th), xlim = c(-1, 1), ylim = c(-1, 1))
S <- sum(sin(th))
C <- sum(cos(th))
mu_hat <- atan2(S, C)
R_bar <- sqrt(S^2 + C^2) / n
delta_hat <- (1 - sum(cos(2 * (th - mu_hat))) / n) / (2 * R_bar^2)
constant <- 0.8
CI <- mu_hat + c(-1, 1) * constant * delta_hat
lines(x = c(0, cos(CI[1])), y = c(0, sin(CI[1])), col = "green")
lines(x = c(0, cos(CI[2])), y = c(0, sin(CI[2])), col = "blue")

As a final note, it may still be better to use the additional information provided by the value of the reading, and not only the sign, because it may provide better estimates. However, the simplification of only using the sign makes the problem much more manageable. If anyone has a good solution that incorporates the readings, I would love to know!
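For completeness, the same calculation in Python, applied directly to timestamps given as seconds since midnight (the function name and the 86400-second day constant are my own choices):

```python
import numpy as np

def circular_mean_dispersion(seconds):
    # map times of day onto the circle, then apply Fisher's formulas
    th = 2 * np.pi * np.asarray(seconds, dtype=float) / 86400.0
    S, C = np.sin(th).sum(), np.cos(th).sum()
    mu_hat = np.arctan2(S, C)
    R_bar = np.hypot(S, C) / len(th)
    delta_hat = (1 - np.mean(np.cos(2 * (th - mu_hat)))) / (2 * R_bar ** 2)
    # convert the mean direction back to seconds since midnight
    mean_seconds = (mu_hat / (2 * np.pi) * 86400.0) % 86400.0
    return mean_seconds, delta_hat

# times around midnight average to midnight, not to noon
print(circular_mean_dispersion([23 * 3600, 1 * 3600])[0])
```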
34,951
Average and standard deviation of timestamps (time wraps around at midnight)
This might be kind of dumb, but I just plotted the points as if they lie on a disc: the angle is taken to be time, and I assumed unit radius. The angle of the centroid of this point cloud is the mean angle of your data, i.e. the mean time of day which respects the "wrap around at midnight" property. This value also has the nice property of being the MLE for the location parameter of a von Mises distribution. Since all of our points lie roughly between 5 a.m. and 5 p.m., it shouldn't be surprising that their mean is near noon. But, at least for the moment, this is all of the circular statistics that I understand! I wish I could give you some more help, but I'm still puzzling through the Wikipedia articles.

sec.radians <- 2 * pi * sec / (60 * 60 * 24)
plot(cos(sec.radians), sin(sec.radians), xlim = c(-1, 1), ylim = c(-1, 1))
theta <- seq(0, 2 * pi, by = 0.01)
lines(cos(theta), sin(theta), col = "red", lty = "dashed")
landmarks <- c(2 * pi, 3 * pi / 2, pi, pi / 2)
text(0.5 * cos(landmarks), 0.5 * sin(landmarks), c("midnight", "6 a.m.", "noon", "6 p.m."))
centroid <- data.frame(x = mean(cos(sec.radians)), y = mean(sin(sec.radians)))
points(centroid, col = "purple", lwd = 5)
theta.mean <- atan2(centroid$y, centroid$x)
34,952
What do NORB and CIFAR stand for?
NORB = NYU Object Recognition Benchmark. Source: http://www.cs.nyu.edu/~yann/research/norb/. CIFAR = Canadian Institute for Advanced Research. Source: http://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf (page 32).
34,953
First differences vs. fixed effects model for panel data
If you have $N$ individuals and you include $N-1$ individual dummies (one less in order to avoid the dummy variable trap) in an OLS regression like $$y_{it} = X'_{it}\beta + \sum_{i=1}^{N-1}\delta_i (\text{individual}_i) + \epsilon_{it}$$ then this is called a least squares dummy variable (LSDV) regression. In this case, each individual dummy will "absorb" the individual fixed effect $u_i$ that is hidden in the error term $\epsilon_{it} = u_i + e_{it}$. Mundlak (1978) has shown that the LSDV regression is equivalent to the fixed effects (within) estimator: $$y_{it} - \overline{y}_{i} = (X_{it} - \overline{X}_i)'\beta + \epsilon_{it} - \overline{\epsilon}_i$$ where $\overline{y}_{i} = \frac{1}{T}\sum^{T}_{t=1}y_{it}$, $\overline{X}_{i} = \frac{1}{T}\sum^{T}_{t=1}X_{it}$, and $\overline{\epsilon}_{i} = \frac{1}{T}\sum^{T}_{t=1}\epsilon_{it}$. Back in the days when computers weren't very fast, large panels basically made LSDV infeasible because there were too many dummies. Mundlak's finding was therefore very useful: it dispenses with including all these individual dummies, and using the within transformation instead made things much simpler. So if you do a fixed effects regression you don't need to include the individual dummies. In fact, your statistical software will just drop them should you include them in a fixed effects regression. Also, in a first differences regression the individual dummies drop out because they do not change over time; hence the difference is zero for all the dummies, and your statistical software will omit them due to perfect collinearity. Doing either fixed effects or first differences already solves the problem of time-invariant unobserved variables ($u_i$). LSDV is just another way of doing it, and for this reason it won't help you to combine it with the other methods. When you include individual dummies after first differencing your other variables, i.e. a first differences regression with individual dummies, those dummies will estimate individual trend effects (see page 77, footnote 1 in the notes here).
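Mundlak's equivalence is easy to check numerically (simulated data; all names and dimensions are made up): the slope from an LSDV regression with a full set of individual dummies coincides with the slope from the within (demeaning) transformation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, beta = 5, 20, 2.0
u = rng.normal(size=N)                       # individual fixed effects
x = rng.normal(size=(N, T)) + u[:, None]     # regressor correlated with u_i
y = beta * x + u[:, None] + rng.normal(size=(N, T))

# LSDV: y on x plus a full set of N individual dummies (no common intercept)
dummies = np.kron(np.eye(N), np.ones((T, 1)))
X = np.column_stack([x.reshape(-1, 1), dummies])
beta_lsdv = np.linalg.lstsq(X, y.ravel(), rcond=None)[0][0]

# Within estimator: demean y and x by individual, then OLS without dummies
xd = (x - x.mean(axis=1, keepdims=True)).ravel()
yd = (y - y.mean(axis=1, keepdims=True)).ravel()
beta_within = (xd @ yd) / (xd @ xd)

print(beta_lsdv, beta_within)  # the two estimates coincide
```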
34,954
Specification of panel data
For the Stata commands in this answer let me collect your variables in a local: local xlist sse01 wartosc_sr_trw_per_capita zatr_przem_bud podm_gosp_na_10tys_ludn proc_ludn_wiek_prod ludnosc_na_km2 So now you can always call all the variables with `xlist'

1) There are two commands that you can use after your fixed effects regression. xttest2 performs a Breusch-Pagan LM test with the null hypothesis of no dependence between the residuals. This is a test for contemporaneous correlation. Not rejecting the null means that the test did not detect any cross-sectional dependence in your residuals. xttest3 performs a modified version of the Wald test for groupwise heteroscedasticity. The null hypothesis is homoscedasticity. You can install both commands by typing ssc install xttest2 and ssc install xttest3. If you detect correlations between your residuals you can correct for this with the robust option: xtreg st_bezr `xlist', fe robust To test for autocorrelation you can apply a Lagrange Multiplier test via xtserial: xtserial st_bezr `xlist' The null hypothesis is no serial correlation. To correct for both serial correlation and heteroscedasticity you can use the cluster option with your id variable: xtreg st_bezr `xlist', fe cluster(id)

2) For the normality test for the residuals: you can obtain the residuals via the predict command predict res, e after your fixed effects regression. For visual inspection you can use: kdensity res, normal (plots the distribution of the residuals and compares it to a normal) pnorm res (plots a standardized normal probability plot) qnorm res (plots the quantiles of the residuals against the quantiles of a normal distribution) With pnorm you can see if there is non-normality in the middle of the distribution and qnorm shows you any non-normality in the tails. A formal test can be obtained by swilk res. The null hypothesis is that the residuals are normally distributed. Generally, non-normality is not too big a concern but it matters for inference. You can again correct for this with the robust option.

3) Having corr(u_i, Xb) = -0.9961 means that the fixed effects are strongly correlated with your explanatory variables, so you did well by controlling for these fixed effects. A strong correlation of this type usually indicates that pooled OLS or random effects will not be suitable for your purpose because both of these models assume that the correlation between $u_i$ and $X\beta$ is zero.

4) Generally yes, but it depends on what you want to estimate or how you can treat your data, i.e. whether your variables are random variables or not. Here is an excellent explanation for the difference between mixed effects and panel data models by @mpiktas which will surely help you.
34,955
What does the Reward function depend on in a Markov Decision Processes (MDPs), in the context of Reinforcement Learning?
The two definitions are not the same, but it essentially boils down to a modelling choice: for some problems, the reward function might be easier to define on the (state,action) pairs, while for others, the tuple (state,action,state) might be more appropriate. There's even a third option that only defines the reward on the current state (this can also be found in some references). I do think the definition of the reward function R(s,a) on the (state, action) pair is the most common, however. But the core learning algorithms remain the same whatever your exact design choice for the reward function.
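The three design choices mentioned above can be made concrete with a toy sketch (a hypothetical two-state MDP with states "A"/"B" and a single action "go"; all names and reward values here are made up purely for illustration):

```python
# Three possible reward-function signatures for the same toy MDP.

def reward_state(s):
    """R(s): reward depends on the current state only."""
    return 1.0 if s == "B" else 0.0

def reward_state_action(s, a):
    """R(s, a): reward depends on the (state, action) pair."""
    return 1.0 if (s == "A" and a == "go") else 0.0

def reward_transition(s, a, s_next):
    """R(s, a, s'): reward depends on the full transition tuple."""
    return 1.0 if (s == "A" and a == "go" and s_next == "B") else 0.0

print(reward_transition("A", "go", "B"))  # 1.0
```

Whichever signature you pick, algorithms such as Q-learning or value iteration are unchanged; only the place where the reward is looked up differs.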
What does the Reward function depend on in a Markov Decision Processes (MDPs), in the context of Reinforcement Learning?
In addition to Pierre Lison's answer in favor of a reward function as $ R: S \times A \rightarrow \mathbb{R} $, Sutton and Barto touch on the topic in chapter 3.6 of their book "Reinforcement Learning: An Introduction". Although the accepted answer is correct in terms of what is most commonly used, they prefer $ \mathcal{R}: S \times A \times S \rightarrow \mathbb{R} $. From said chapter: In conventional MDP theory, $\mathcal{R}_{ss'}^a $ always appears in an expected value sum [...], and therefore it is easier to use $R_s^a$. In reinforcement learning, however, we more often have to refer to individual actual or sample outcomes. In teaching reinforcement learning, we have found the notation $\mathcal{R}_{ss'}^a $ to be more straightforward conceptually and easier to understand.
What does the Reward function depend on in a Markov Decision Processes (MDPs), in the context of Reinforcement Learning?
I think $R(s,a,s')$ is the same thing as $R(s,a)$ when the MDP is deterministic: in that case $s'$ is determined by the transition function $T(s,a)$, so $R(s,a,s')$ becomes $R(s,a,T(s,a))$, which can be simplified as $R(s,a)$. (With stochastic transitions, $R(s,a)$ is instead the expectation of $R(s,a,s')$ over the possible $s'$.)
What does the Reward function depend on in a Markov Decision Processes (MDPs), in the context of Reinforcement Learning?
They are equivalent in the following sense. Suppose you have an offline dataset consisting of (s1,a1),(s2,a2),...,(sk,ak) with rewards r1,r2,...,rk. Based on the data, you can estimate the MDP model with transition probability T(s,a,s') and reward R(s,a,s'). You can also estimate the MDP model as T(s,a,s') and R(s,a). Solving these two MDP models theoretically, you should obtain the same policy and value. The above is model-based learning. You can also use the Q-learning method on the offline dataset in an online fashion (assume that you observe sk,ak,rk sequentially). Obviously, the estimated Q-matrix Q^t(s,a) would differ during learning, as the two forms of reward are different. However, you should obtain the same Q(s,a) when it converges. The reason is that P(s,a,s') already contains the information you need to obtain R(s,a) from R(s,a,s').
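The reduction this answer relies on can be sketched in a few lines: given T(s,a,s') and R(s,a,s'), the equivalent R(s,a) is the expected next-state reward, R(s,a) = Σ_{s'} T(s,a,s')·R(s,a,s'). The two-state example below is hypothetical, just to make the sum concrete:

```python
# Tiny made-up MDP fragment: one (state, action) pair with a
# stochastic transition.
T = {("A", "go"): {"A": 0.3, "B": 0.7}}                 # T(s, a, s')
R3 = {("A", "go", "A"): 0.0, ("A", "go", "B"): 1.0}      # R(s, a, s')

def reward_sa(s, a):
    """R(s, a) as the expectation of R(s, a, s') over s'."""
    return sum(p * R3[(s, a, s2)] for s2, p in T[(s, a)].items())

print(reward_sa("A", "go"))  # 0.3 * 0.0 + 0.7 * 1.0 = 0.7
```

This is why the converged Q-values agree: the Bellman backup only ever uses the reward through this expectation.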
Unbiased estimate of population standard deviation: is sqrt(2) a superior correction?
Maybe. What you appear to have done is hit upon the $c_4(N)$ correction factor stated also in this wikipedia article. Specifically:

You propose the estimator $$\tilde s = \frac 1{\sqrt {N-2^{1/2}}}\cdot (S_x)^{1/2} $$ where $S_x$ is the sum of squared deviations from the mean.

The article you mention defines (although not very clearly) the estimator $$\hat s = \frac 1{\sqrt {N-1}}\cdot\left[\sqrt{\frac{2}{N-1}}\,\,\,\frac{\Gamma\left(\frac{N}{2}\right)}{\Gamma\left(\frac{N-1}{2}\right)}\right]^{-1} \cdot (S_x)^{1/2} = \frac {\Gamma\left(\frac{N-1}{2}\right)}{2^{1/2}\Gamma\left(\frac{N}{2}\right)}\cdot (S_x)^{1/2}$$ where $$c_4(N) = \sqrt{\frac{2}{N-1}}\,\,\,\frac{\Gamma\left(\frac{N}{2}\right)}{\Gamma\left(\frac{N-1}{2}\right)}$$

Calculating the values of the two proposed multiplication factors we find

\begin{array}{| r | r | r |} \hline
N & \frac{1}{\sqrt{N-2^{1/2}}} & \frac{1}{c_4(N)\sqrt{N-1}} \\ \hline
3 & 0.7941 & 0.7979 \\
4 & 0.6219 & 0.6267 \\
5 & 0.5281 & 0.5319 \\
6 & 0.467 & 0.47 \\
7 & 0.4231 & 0.4255 \\
8 & 0.3897 & 0.3917 \\
9 & 0.3631 & 0.3647 \\
10 & 0.3413 & 0.3427 \\
11 & 0.323 & 0.3242 \\
12 & 0.3074 & 0.3084 \\
13 & 0.2938 & 0.2947 \\
14 & 0.2819 & 0.2827 \\
15 & 0.2713 & 0.2721 \\
16 & 0.2618 & 0.2625 \\
17 & 0.2533 & 0.2539 \\
18 & 0.2455 & 0.2461 \\
19 & 0.2385 & 0.239 \\
20 & 0.232 & 0.2325 \\
21 & 0.226 & 0.2264 \\
22 & 0.2204 & 0.2208 \\
23 & 0.2152 & 0.2156 \\
24 & 0.2104 & 0.2108 \\
25 & 0.2059 & 0.2063 \\
26 & 0.2017 & 0.202 \\
27 & 0.1977 & 0.198 \\
28 & 0.1939 & 0.1942 \\
29 & 0.1904 & 0.1907 \\
30 & 0.187 & 0.1873 \\ \hline
\end{array}

Now what you have to do is first check whether this closeness in values continues for large $N$, and second simulate the estimation using the $c_4(N)$ correction factor and compare it to yours. If these come out favorably, then you have either a) found a better, valid and useful (simpler to calculate) "rule of thumb"/substitute for the $c_4(N)$ correction factor, or b) found a better correction factor. If it is b), then it is publication material.
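The "check whether this closeness continues for large $N$" step is easy to sketch with the standard library; using lgamma keeps the Gamma ratio in $c_4(N)$ numerically stable for big $N$:

```python
import math

def c4(n):
    # c4(N) = sqrt(2/(N-1)) * Gamma(N/2) / Gamma((N-1)/2),
    # computed via log-gammas to avoid overflow for large N.
    return math.sqrt(2 / (n - 1)) * math.exp(
        math.lgamma(n / 2) - math.lgamma((n - 1) / 2))

for n in (3, 10, 30, 1000):
    rule_of_thumb = 1 / math.sqrt(n - math.sqrt(2))   # 1 / sqrt(N - 2^(1/2))
    exact = 1 / (c4(n) * math.sqrt(n - 1))            # exact unbiasing factor
    print(n, round(rule_of_thumb, 4), round(exact, 4))
```

For small N this reproduces the table above (e.g. 0.7941 vs 0.7979 at N = 3), and the two factors keep converging as N grows.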
Unbiased estimate of population standard deviation: is sqrt(2) a superior correction?
For the sake of others who find this page, it is probably worth backing up a step and asking whether you really want an unbiased estimate of the population SD or an unbiased estimate of the population variance. If you are going to use the SD to compute a confidence interval of a mean (or of a difference between two means), or to run a t test or ANOVA, then I believe all the math is based on variances, not standard deviations. For these purposes, you want an unbiased variance, which is the standard deviation squared. If you compute the SD using the usual n-1 rule, the variance will be unbiased. But if you compute an unbiased SD, as you show here, then the variance would be biased.
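A quick simulation (a pure-Python sketch, with true σ = 1 and samples of size n = 5) illustrates the point: the n−1 variance is unbiased, while its square root is biased low:

```python
import random
import statistics

random.seed(0)
n, reps = 5, 100_000
vars_, sds = [], []
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    v = statistics.variance(x)   # sample variance with the n-1 divisor
    vars_.append(v)
    sds.append(v ** 0.5)

print(round(statistics.fmean(vars_), 3))  # close to 1.0: unbiased
print(round(statistics.fmean(sds), 3))    # below 1, near c4(5) ~ 0.94: biased low
```

So "unbiasing" the SD necessarily biases the variance, and vice versa, since Jensen's inequality makes E[√V] < √E[V].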
Unbiased estimate of population standard deviation: is sqrt(2) a superior correction?
If I felt that a correction beyond the standard $n-1$ were necessary, why would I use a rule of thumb? I'd go for the exact expression if it is known, as in the case of the normal distribution. I have never seen anyone use this rule of thumb anyway. So my answer is no, you didn't find anything fantastic. At best you found a marginally better fit to a practically useless rule of thumb applicable to the normal distribution. This will certainly not work for all distributions. Check it yourself by replacing rnorm() in your code with something else, such as rchisq(size, df=0.1).
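The rchisq check translates to Python as follows (a sketch: the stdlib analogue of rchisq is random.gammavariate, since chi-square(k) is Gamma(k/2, scale 2); true SD of chi-square with df = 0.1 is sqrt(2·0.1) ≈ 0.447). The sqrt(2) correction, tuned to normal data, leaves a clear downward bias for this heavily skewed distribution:

```python
import math
import random

random.seed(0)
n, reps, df = 5, 50_000, 0.1
true_sd = math.sqrt(2 * df)

est = []
for _ in range(reps):
    x = [random.gammavariate(df / 2, 2) for _ in range(n)]  # ~ chi2(df)
    m = sum(x) / n
    ssd = sum((xi - m) ** 2 for xi in x)        # S_x
    est.append(math.sqrt(ssd / (n - math.sqrt(2))))  # the proposed estimator

mean_est = sum(est) / reps
print(round(mean_est, 3), "vs true SD", round(true_sd, 3))
# mean_est comes out noticeably below the true value: still biased.
```

The same simulation with random.gauss in place of gammavariate recovers the near-unbiased behavior from the question, which is exactly the point: the correction is distribution-specific.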
Connection between power law and Zipf's law
Zipf's law is generally understood to simply be a power-law distribution with integer values, that is, a probability distribution with the form $p(x) \propto x^{-\alpha}$ for $x\geq x_{\min}>0$, $\alpha>1$ and $x\in \mathbb{N}_{>0}$ where $x_{\min}$ is the smallest value for which the power law holds, and is generally 1 for Zipf's Law (although not always; there is some ambiguity in the literature as to whether the term Zipf's Law is reserved for the $x_{\min}=1$ case or whether it can be used for $x_{\min}>1$). But, power-law distributions have the special property that the complementary cumulative distribution function (ccdf) is also a power law form, $P(x) \propto x^{-\beta}$ but now where $\beta>0$ (and $\beta=\alpha-1$). This presents some ambiguity in interpreting what exactly people mean when they state that the estimated such-and-such a parameter for Zipf's Law. Do they mean $\alpha$ or $\beta$? It's important to be clear about which one you are stating. So long as you say whether the parameter you estimate is the pdf or cdf parameter, you should be fine. Another small point: when people talk about Pareto distributions and data, they often talk about "rank-frequency" plots. These are the same thing as the ccdf (a point we discuss a little more in our SIAM Review paper that you link to), just with the axes reversed. That means you can easily transform an exponent someone has estimated from a rank-frequency plot (what Lada Adamic calls the Pareto form) to a regular pdf exponent by taking the reciprocal. But, people don't really distinguish between Zipf and Pareto laws like that. Both are just power-law distributions, so it's better to just talk about $\alpha$.
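The $\beta = \alpha - 1$ relationship is easy to check numerically (a sketch using a finitely truncated Zipf pmf, so only the standard library is needed): build $p(x) \propto x^{-\alpha}$, form the ccdf, and measure its log-log slope over a decade.

```python
import math

alpha, xmax = 2.5, 10**5
weights = [x ** -alpha for x in range(1, xmax + 1)]  # unnormalized pmf
Z = sum(weights)

def ccdf(x0):
    """P(X >= x0) under the truncated power-law pmf."""
    return sum(weights[x0 - 1:]) / Z

# log-log slope of the ccdf between x = 10 and x = 100
slope = (math.log(ccdf(100)) - math.log(ccdf(10))) / (math.log(100) - math.log(10))
print(round(slope, 2))  # close to -(alpha - 1) = -1.5
```

The measured slope is slightly steeper than exactly −1.5 because of finite-x corrections near $x_{\min}$, which is itself a useful reminder that ccdf slopes read off plots are only asymptotic estimates of $\beta$.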
Is it inappropriate to call multiple regression analysis 'correlational'?
Correlation can be two things. Correlation is a mathematical construct on one hand, which is de facto Pearson correlation. Correlation is also a counterpart to causal on the other, meaning conditional dependence between an "exposure" and "outcome" that may be mediated by 100s of unmeasured factors. Calling work (i.e. the analyses/results of a study) "correlational" doesn't immediately suggest to me whether you mean they summarized several bivariate associations using partial correlations or whether the study was conducted from observational data. I am strongly inclined to believe that you and the reviewer hold opposing ideas of what "correlational" means in this context. This is giving generous credit to the idea that other aspects of this communication did not denigrate anyone's research/findings. As far as regression analyses are concerned, you can use regression models to analyze "quasiexperimental" data (or observational data) in which adjustment for confounding variables is used to infer what a hypothetically controlled (blocked/randomized) experiment would yield as a result. This leads to the distinction between correlation and causation. Only randomized controlled trials are worthy of discussing results in a causal context. Other results are not "correlational" but you may refer to findings as "associations". The word correlation is confusing. In literature presented to a statistical audience, I am careful to avoid correlation altogether except in the context of Pearson's correlation. I would favor "empirical" or "epidemiological" or something of that ilk to refer to findings from observational studies.
Is it inappropriate to call multiple regression analysis 'correlational'?
Knowing no more than what you've said, I agree with @NickCox, @AdamO, and with you for the most part. If you have discussed this "completely mischaracterized" work in further depth than you've said here, it may not be safe to assume the objection is mostly to your characterization of it as "correlational", unless the reviewer has made that really clear. His/her objections seem very emphatic, so you're right to suspect s/he is an author. Might you be able to talk to this reviewer more directly to seek consensus (or at least compromise) on how to describe the work? I suppose some review processes might rule out conferral outside the formal framework, but it seems counterproductive in this case if you can't work together with this person. Of course, this isn't to say there's any certainty of whom you're dealing with beyond what you actually know; I only agree it seems likely, and echo the other Nick's suggestion to seek clarification from this person. It would seem to be in your mutual best interest to describe everything optimally from both perspectives. Hopefully all the initial bluster will die down as you communicate further. Sometimes people begin by making a bigger fuss than is really necessary just to ensure they get your attention and make an impression, despite the evident risk of that being a worse impression (e.g., biased, unreasonable, alarmist). It's easier for some to proceed with restraint and mutual respect once they've been reassured that the lines of communication are open and the other party is paying attention. It seems you've made the opposite first impression with what you've written, so an overblown reaction is understandable, if still unreasonable. Clarification should help greatly if s/he's willing to work with you. Regression is indeed correlational in the broad sense, as has already been said here, but the strictest, most simplistic sense of correlational may appeal to those who have a less nuanced understanding of general linear models. 
The stricter usage may also appeal more to people who understand correlational analysis as a loaded term in causal research contexts, such as your problem reviewer, it seems. If your intention was to imply a weakness in causal evidence, then your usage was loaded intentionally, and some defensiveness is to be expected. You'd be right to say that multiple regression doesn't really provide more causal evidence than a bivariate correlation (the stricter sense of "correlational" that AdamO described) – it mostly makes relational evidence clearer regardless of whether these relationships are causal. Hence I doubt your reviewer is simply objecting to your characterization because the method involved multiple regression, not just bivariate correlations. Maybe the reviewer felt the original design had other, "more sophisticated" elements that provide "much stronger evidence". Maybe I give this person too much credit though; I've been led to think one should never underestimate the capacity of reviewers to overreact to issues that amount to nitpicking. Generally, I wouldn't object to describing regression as correlational, and might have done so myself in your case initially, but given the apparent offense this has caused, I don't see any harm in backing off and rephrasing. If your intention was to imply critique of causal evidence, it would be better to state the critique clearly and delicately, not to just imply it. If your alternate phrasing of the "distinction between statistical and mechanistic relationships" captures your meaning just as well, maybe you can avoid the issue by replacing the "correlational" phrasing entirely, but again, if you can confer with your reviewer about this alternative, you'll stand a better chance of having the change received well, of course. AdamO has provided some other good alternatives, and your comment on his answer seems quite a bit clearer about the distinction you intended to make between your work and your reference. 
As for "empirical", I think you're encountering the same basic issue by using a single word with a variety of possible interpretations where several sentences that clarify your intention with context would be preferable.
Is it inappropriate to call multiple regression analysis 'correlational'?
Knowing no more than what you've said, I agree with @NickCox, @AdamO, and with you for the most part. If you have discussed this "completely mischaracterized" work in further depth than you've said he
Is it inappropriate to call multiple regression analysis 'correlational'? Knowing no more than what you've said, I agree with @NickCox, @AdamO, and with you for the most part. If you have discussed this "completely mischaracterized" work in further depth than you've said here, it may not be safe to assume the objection is mostly to your characterization of it as "correlational", unless the reviewer has made that really clear. His/her objections seem very emphatic, so you're right to suspect s/he is an author. Might you be able to talk to this reviewer more directly to seek consensus (or at least compromise) on how to describe the work? I suppose some review processes might rule out conferral outside the formal framework, but it seems counterproductive in this case if you can't work together with this person. Of course, this isn't to say there's any certainty of whom you're dealing with beyond what you actually know; I only agree it seems likely, and echo the other Nick's suggestion to seek clarification from this person. It would seem to be in your mutual best interest to describe everything optimally from both perspectives. Hopefully all the initial bluster will die down as you communicate further. Sometimes people begin by making a bigger fuss than is really necessary just to ensure they get your attention and make an impression, despite the evident risk of that being a worse impression (e.g., biased, unreasonable, alarmist). It's easier for some to proceed with restraint and mutual respect once they've been reassured that the lines of communication are open and the other party is paying attention. It seems you've made the opposite first impression with what you've written, so an overblown reaction is understandable, if still unreasonable. Clarification should help greatly if s/he's willing to work with you. 
Regression is indeed correlational in the broad sense, as has already been said here, but the strictest, most simplistic sense of correlational may appeal to those who have a less nuanced understanding of general linear models. The stricter usage may also appeal more to people who understand correlational analysis as a loaded term in causal research contexts, such as your problem reviewer, it seems. If your intention was to imply a weakness in causal evidence, then your usage was loaded intentionally, and some defensiveness is to be expected. You'd be right to say that multiple regression doesn't really provide more causal evidence than a bivariate correlation (the stricter sense of "correlational" that AdamO described) – it mostly makes relational evidence clearer regardless of whether these relationships are causal. Hence I doubt your reviewer is simply objecting to your characterization because the method involved multiple regression, not just bivariate correlations. Maybe the reviewer felt the original design had other, "more sophisticated" elements that provide "much stronger evidence". Maybe I give this person too much credit though; I've been led to think one should never underestimate the capacity of reviewers to overreact to issues that amount to nitpicking. Generally, I wouldn't object to describing regression as correlational, and might have done so myself in your case initially, but given the apparent offense this has caused, I don't see any harm in backing off and rephrasing. If your intention was to imply critique of causal evidence, it would be better to state the critique clearly and delicately, not to just imply it. 
If your alternate phrasing of the "distinction between statistical and mechanistic relationships" captures your meaning just as well, maybe you can avoid the issue by replacing the "correlational" phrasing entirely, but again, if you can confer with your reviewer about this alternative, you'll stand a better chance of having the change received well, of course. AdamO has provided some other good alternatives, and your comment on his answer seems quite a bit clearer about the distinction you intended to make between your work and your reference. As for "empirical", I think you're encountering the same basic issue by using a single word with a variety of possible interpretations where several sentences that clarify your intention with context would be preferable.
34,965
statistical method for spatial correlation between images
The simplest way to solve this for two images is to extract the values from both rasters and correlate them. I am not sure if this solution will fit your specific case. In what "format" do you have the images? (greyscale, RGB, size, resolution...). Please give more specific details. Two rasters in R for demonstration (the raster() calls below need the raster package):

library(raster)

Values for picture A:

x <- c(1.0,1.0,1.0,1.0,0.5,0.5,0.0,0.0,0.5,0.5,
       2.0,2.0,1.5,1.5,1.0,1.0,0.5,1.0,1.0,1.0,
       2.5,2.0,2.0,2.0,2.0,1.0,1.0,1.5,2.0,2.0,
       2.5,3.0,3.0,3.0,2.5,2.0,2.0,2.0,2.5,2.5,
       2.5,3.5,4.0,3.5,2.5,2.0,2.5,3.0,3.0,3.5,
       2.5,3.5,3.5,2.5,2.0,2.5,3.0,3.5,4.0,3.5,
       2.5,3.5,3.5,3.0,3.5,4.0,4.0,4.0,3.5,2.5,
       2.5,3.5,4.0,4.0,3.5,3.5,3.0,3.0,2.5,2.0,
       2.5,3.5,3.5,3.0,2.5,2.5,2.0,2.0,2.0,1.5,
       2.0,3.0,2.5,2.0,2.0,1.5,1.5,1.5,1.0,1.0)

Values for picture B:

y <- c(rep(1, times = 10), rep(2, times = 6), 1, rep(2, times = 3),
       rep(2, times = 10), rep(3, times = 4), rep(2, times = 4), 3, 3,
       3, 4, 4, 3, 2, rep(3, times = 4), 4,
       3, 4, rep(3, times = 5), rep(4, times = 3),
       3, 4, 3, 3, 3, 4, 4, 4, 3, 3,
       3, rep(4, times = 4), rep(3, times = 4), 2,
       3, 3, 4, 3, 3, 3, rep(2, times = 4),
       2, 3, 3, 3, rep(2, times = 6))

Creation of arrays -> conversion of arrays into rasters:

x_array <- array(x, dim = c(10, 10))
y_array <- array(y, dim = c(10, 10))
x_raster <- raster(x_array)
y_raster <- raster(y_array)

Setting the color palettes and plotting...

colors_x <- c("#fff7f3","#fde0dd","#fcc5c0","#fa9fb5","#f768a1","#dd3497",
              "#ae017e","#7a0177","#49006a")
colors_y <- c("#fff7f3","#fcc5c0","#f768a1","#ae017e")
par(mfrow = c(1, 2))
plot(x_raster, col = colors_x)
plot(y_raster, col = colors_y)

...and here is the correlation (note that cor(x, y) returns only the coefficient; the full output below comes from cor.test):

cor.test(x, y)

    Pearson's product-moment correlation

data:  x and y
t = 21.7031, df = 98, p-value < 2.2e-16
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.8686333 0.9385211
sample estimates:
      cor
0.9098219

Maybe there is a more specialized solution to this, but I think this solution is pretty robust, simple and straightforward.
A link of interest (for ImageJ): http://imagej.nih.gov/ij/plugins/intracell/index.html
34,966
statistical method for spatial correlation between images
This is a problem that has been analyzed most extensively in the fields of astronomy and cosmology, with things like galaxy spatial correlation functions. The short answer is that you probably want to compute a 2D correlation function, which can be computed efficiently with the Fast Fourier Transform (if needed). You might also want to Google terms like the Landy-Szalay estimator, which allows treatment of masked-out areas and boundaries. It sounds like you also want to compute uncertainties or confidence intervals. This is a little trickier. In astronomy these have been estimated with jackknife techniques, though I think those still lack a rigorous foundation. Monte Carlo techniques are often useful for this as well, but are not on an entirely rigorous footing either.
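The FFT route can be sketched in a few lines. Below is a minimal NumPy illustration (the image size and data are hypothetical, and it uses periodic boundaries rather than the masked-estimator machinery mentioned above):

```python
import numpy as np

def cross_correlation_2d(a, b):
    """Normalized 2D cross-correlation of two equal-sized images via FFT.

    Entry (dy, dx) of the result is the correlation of `a` with `b`
    shifted circularly by (dy, dx).
    """
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    # Correlation theorem: corr(a, b) = IFFT( FFT(a) * conj(FFT(b)) )
    cc = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return cc / a.size

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))          # made-up test image
cc = cross_correlation_2d(img, img)      # image against itself
print(round(cc[0, 0], 6))                # 1.0 at zero lag, as expected
```

An image correlated with itself gives exactly 1 at zero lag, which is a handy sanity check before applying the same machinery to two different images.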
34,967
statistical method for spatial correlation between images
You could manually trace the centerline or the walls of the blood vessels (or use machine learning to fill those areas). Then you could build a buffer fence around that area. As a second step, you could identify the particles on the image (either manually or by machine learning). Then you could calculate statistics relating the number of nanoparticles inside the filled area of the buffer fence vs. outside of it. With fifty pairs of images, it might be faster and more accurate to draw the buffer fences and count the particles inside and outside manually.
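The buffer-fence idea can be sketched with binary dilation; here is an illustrative Python/SciPy toy (the vessel mask, buffer width, and particle coordinates are all made up):

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Hypothetical 20x20 binary vessel mask: a vertical "vessel" two pixels wide
vessel = np.zeros((20, 20), dtype=bool)
vessel[:, 9:11] = True

# Buffer fence: dilate the vessel mask by a chosen radius (here 3 pixels)
buffer_zone = binary_dilation(vessel, iterations=3)

# Made-up nanoparticle coordinates as (row, col) pairs
particles = np.array([[5, 10], [12, 12], [3, 1], [18, 19]])

# Look up each particle in the dilated mask to classify it
inside = buffer_zone[particles[:, 0], particles[:, 1]]
print(inside.sum(), "particles inside the fence,",
      (~inside).sum(), "outside")
```

From the inside/outside counts per image pair, the usual proportion or count statistics can then be computed.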
34,968
statistical method for spatial correlation between images
Please see the R package SpatialPack. There you will find three different statistical approaches to address this problem.
34,969
Expected value of maximum likelihood coin parameter estimate
First of all, this is a self-study question, so I'm not going to go too much into each and every little technical detail, but I'm not going on a derivation frenzy either. There are many ways to do this. I'll help you by using general properties of the maximum likelihood estimator.

Background information

In order to solve your problem I think you need to study maximum likelihood from the beginning. You are probably using some kind of textbook, and the answer should really be there somewhere. I'll help you find out what to look for. Maximum likelihood is an estimation method which is basically what we call an M-estimator (think of the "M" as "maximize/minimize"). If the conditions required for using these methods are satisfied, we can show that the parameter estimates are consistent and asymptotically normally distributed, so we have: $$ \sqrt{N}(\hat\theta-\theta_0)\overset{d}{\to}\text{Normal}(0,A_0^{-1}B_0A_0^{-1}), $$ where $A_0$ and $B_0$ are some matrices. When using maximum likelihood we can show that $A_0=B_0$, and thus we have a simpler expression: $$ \sqrt{N}(\hat\theta-\theta_0)\overset{d}{\to}\text{Normal}(0,A_0^{-1}). $$ We have that $A_0\equiv -E(H(\theta_0))$, where $H$ denotes the Hessian. This is what you need to estimate in order to get your variance.

Your specific problem

So how do we do it? Here let's call our parameter vector $\theta$ what you do: $p$. This is just a scalar, so our "score" is just the first derivative and the "Hessian" is just the second derivative. Our likelihood function can be written (up to a binomial coefficient that does not depend on $p$ and so does not affect the maximization) as: $$ l(p)=p^x (1-p)^{n-x}, $$ which is what we want to maximize. You used the first derivative of this or the log-likelihood to find your $p^*$. Instead of setting the first derivative equal to zero, we can differentiate again to find the second-order derivative $H(p)$.
First we take logs: $$ ll(p)\equiv\log(l(p))=x\log(p)+(n-x)\log(1-p). $$ Then our 'score' is: $$ ll'(p)=\frac{x}{p}-\frac{n-x}{1-p}, $$ and our 'Hessian': $$ H(p)=ll''(p)=-\frac{x}{p^2}-\frac{n-x}{(1-p)^2}. $$ Then our general theory from above just tells you to find $(-E(H(p)))^{-1}$. Now you just have to take the expectation of $H(p)$ (Hint: use $E(x/n)=p$), multiply by $-1$ and take the inverse. Then you'll have your variance of the estimator.
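As a numerical sanity check on this recipe (a Python sketch with arbitrary, made-up values of $n$ and $p$): plugging $E(x)=np$ into the Hessian gives $-E(H(p))=n/(p(1-p))$, whose inverse is the familiar $p(1-p)/n$.

```python
n, p = 100, 0.3  # hypothetical sample size and true parameter

# E[H(p)] using E[x] = n*p in the Hessian  H(p) = -x/p^2 - (n-x)/(1-p)^2
expected_hessian = -(n * p) / p**2 - (n - n * p) / (1 - p) ** 2

info = -expected_hessian     # Fisher information, n / (p*(1-p))
variance = 1.0 / info        # (-E[H(p)])^{-1}

print(variance)              # matches p*(1-p)/n
print(p * (1 - p) / n)
```

The two printed numbers agree, confirming that the expectation-and-invert step lands on the binomial proportion variance.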
34,970
Expected value of maximum likelihood coin parameter estimate
To start you off, let's do the expected value: If $x$ is the number of successes in $n$ throws, then $x/n$ is the proportion of successes in your sample. Consider $\mathbb{E}x$; for each throw, the probability of success is $p$ according to the assumptions, so when tossing the coin one time the expected "number of successes" is $p\times1+(1-p)\times 0=p$, right? Thus, if you throw the coin $n$ times, you would expect success $np$ times because the throws are independent. Then, since $np$ is the expected number of successes in $n$ throws, you get $$\mathbb{E}p^*=\mathbb{E}[n^{-1}x]=n^{-1}\mathbb{E}x=n^{-1}\times np=p.$$ So the estimator is unbiased. Can you figure out how to do the variance from here? Edit: Let's do the variance, too. We use that $\text{Var}(p^*)=\mathbb{E}[(p^{*})^2]-(\mathbb{E}p^{*})^2$. The second term we already have from the calculation of the expected value, so let's do the first: $$\mathbb{E}[(p^{*})^2]=n^{-2}\mathbb{E}x^2.$$ To simplify, we can express the number of successes in $n$ throws as $$x=\sum_1^n\chi _i,$$ where $\chi_i$ takes the value 1 if throw $i$ was a success and 0 otherwise. Hence, $$\mathbb{E}x^2=\mathbb{E}\Big(\sum_1^n\chi _i\Big)^2=\mathbb{E}\Big[\sum_1^n\chi _i^2+2\sum_{i<j}\chi _i\chi_j\Big]=np+n(n-1)p^2,$$ and so putting things together you arrive at $$\text{Var}(p^*)=\frac{p(1-p)}{n}.$$
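Both results are easy to confirm by simulation; here is a Python sketch in which $n$, $p$ and the number of replications are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, reps = 50, 0.4, 200_000

# Each replication: x successes in n Bernoulli(p) trials, estimator p* = x/n
p_hat = rng.binomial(n, p, size=reps) / n

print(p_hat.mean())   # close to p = 0.4           (unbiasedness)
print(p_hat.var())    # close to p*(1-p)/n = 0.0048 (the variance result)
```

With this many replications the Monte Carlo mean and variance land within sampling noise of the theoretical values.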
34,971
ANCOVA in observational studies: what are the assumptions?
In an ANCOVA, you typically model $$E(Y|T,X)=\gamma T+X \beta,$$ where $Y$ is your outcome variable, $T$ is your treatment indicator ($T=0$ to indicate control, and $T=1$ to indicate treatment), and $X$ is a covariate (or a vector of covariates). Then $\gamma$ is the average treatment effect (ATE) conditional on $X$. Now let $Y=TY^T+(1-T)Y^C$, where $Y^T$ is the outcome in the treatment group and $Y^C$ is the outcome in the control group. The primary assumption, which is exploited by ANCOVA, is that the outcome variables $Y^T$ and $Y^C$ are independent of $T$ conditional on $X$. This is also called 'unconfoundedness', written as: $$P(T|Y^T,Y^C,X)=P(T|X).$$ Otherwise outcome variables and treatment assignment are confounded, and (conditional) mean differences on $Y^T$ and $Y^C$ may be caused by factors other than the manipulation (i.e., even given $X$). If $T$ and $Y^C$ and $Y^T$ are unconfounded conditional on $X$, the ATE estimate $\gamma$ from ANCOVA will be unbiased, given that all other model assumptions are also met. You may ask when it is clear whether there is unconfoundedness: this can never be assessed with absolute certainty, and it represents the key weakness of adjustment for bias in observational studies. It is recommended (see ref. below) that you include all covariates that are even in tendency (p<.10) statistically associated (correlated) with either $T$, $Y^C$ or $Y^T$. This suggests that it is not problematic, but rather desirable, that $X$ and $T$ are correlated when using ANCOVA (your first question). In fact, the correlation of covariate(s) with the dependent variable 'within the groups' (i.e., $X$ with $Y^C$ or $Y^T$) is an indication that the unconfoundedness assumption holds or is more plausible (your second question). But correlation with $T$ likewise indicates this. However: an 'ideal' $X$ covariate is associated with both the treatment indicator and the outcome variables.
Since ANOVA does not include $X$ (your third question), it would assume unconfoundedness unconditional on $X$, i.e., $$P(T|Y^T,Y^C)=P(T),$$ which is a very strong assumption, and dependence of $X$ and $T$ would point to its potential violation. ANOVA is therefore not recommended in your hypothetical situation and should be reserved for fully randomized experiments, in which any $X$ by definition is independent of treatment and criterion variables. It is important to note that meeting all of the other model assumptions of ANCOVA is required to find unbiased ATE estimates (e.g., using least squares estimators). Chiefly, this requires that there is no interaction between $T$ and $X$. This is sometimes referred to as effect homogeneity (as opposed to heterogeneous effects, if there is an interaction). Therefore, the model should at least include the interactions as well, which is not standard in ANCOVA models. Furthermore, you assume linearity (inspect residuals to check this assumption) and you also assume that the Y-model is correct (i.e., that you included all relevant $X$ to model $Y$). Sometimes, propensity score methods and nonparametric matching methods are superior to ANCOVA because they do not feature the linearity assumption and can include interactions 'on the go'. Moreover, so-called doubly robust methods combine Y-modeling with propensity score methods. They guarantee unbiased effect estimates even if the model for $Y$ is incorrect (assuming the propensity score model is correct). Still, all of these methods make the unconfoundedness assumption. For an excellent treatment of ANCOVA adjustment for selection bias (and also other methods) see: Schafer, J. L., & Kang, J. (2008). Average causal effects from nonrandomized studies: A practical guide and simulated example. Psychological Methods, 13(4), 279–313. doi:10.1037/a0014268
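A small simulation makes the contrast concrete (a Python/NumPy sketch; the coefficients, sample size, and confounded assignment mechanism are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, gamma, beta = 50_000, 2.0, 1.5   # made-up true ATE and covariate effect

x = rng.normal(size=N)
# Confounded assignment: P(T=1) increases with x (logistic in x)
t = (rng.uniform(size=N) < 1 / (1 + np.exp(-2 * x))).astype(float)
y = gamma * t + beta * x + rng.normal(size=N)

ones = np.ones(N)
# ANCOVA-style model y ~ 1 + T + X: unbiased ATE given unconfoundedness on X
b_ancova = np.linalg.lstsq(np.column_stack([ones, t, x]), y, rcond=None)[0]
# ANOVA-style model y ~ 1 + T: omits X, so the confounding inflates the effect
b_anova = np.linalg.lstsq(np.column_stack([ones, t]), y, rcond=None)[0]

print(b_ancova[1])   # close to the true gamma = 2.0
print(b_anova[1])    # biased well upward of 2.0
```

The adjusted coefficient recovers the true effect, while the unadjusted group difference absorbs the covariate imbalance, exactly the failure mode described above.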
34,972
ANCOVA in observational studies: what are the assumptions?
I think a good starting point with this issue is to think logically about the meaning of a covariate adjustment in such situations. If the expected value of the CV is conditional on the group, how is there any way to remove variation associated only with the CV? Surely a CV adjustment removes group effects as well! What then do group differences actually mean? In fact, as far as I am aware, the only truly interpretable use of ANCOVA is one where the CV and treatments are wholly unrelated. In such situations "control" would seem the wrong metaphor, as the ANCOVA is more of an error-reducing technique to increase power to detect group differences. I think this issue always needs logical consideration in terms of interpretation. There is no way of knowing quite what the adjustment made to the outcome is when looking for further group differences. Indeed, does it even make sense to consider groups as if they were the same on the CV? If the CV and the groups are so closely aliased, does that not suggest that the CV may represent some fundamental element of group? More can be read about this in Miller & Chapman (2001), "Misunderstanding Analysis of Covariance". Although groups differing on the CV may not be a strict assumption of ANCOVA, I'm of the opinion that there are limited legitimate ways of interpreting results if the condition is not met. ANCOVA is a technique for designed experiments with randomised treatment assignment. Use beyond this should always be treated with caution. I should perhaps add that I don't think that you can never use ANCOVA with non-randomised groups, but if you do you just need to be cautious. Generally speaking, the only conditions that would need satisfying would be independence of group and CV (which you can test by running the ANOVA with the CV as the outcome variable), homogeneity of the regression slopes (which you can test by including an interaction term), and linearity, which can be checked using residuals.
If your aim is to "control" for a concomitant variable, then all assumptions need satisfying for your group differences to be interpretable. If, however, an assumption such as the homogeneity of slopes is violated, then the model can always be re-framed as a multiple regression inclusive of the interaction term. The focus of the analysis would then be more exploratory and predictive than the classical ANCOVA, but this ultimately allows you to see the CV not as a nuisance but as an interesting relationship to be explored.
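The homogeneity-of-slopes check mentioned above can be sketched as a nested-model F-test (a Python/NumPy toy with simulated data; the group sizes and coefficients are arbitrary, and the data are built with genuinely different slopes so the test should flag the violation):

```python
import numpy as np

def f_test_nested(X_full, X_red, y):
    """F statistic comparing a full OLS model against a nested reduced model."""
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid
    rss_f, rss_r = rss(X_full), rss(X_red)
    df1 = X_full.shape[1] - X_red.shape[1]
    df2 = len(y) - X_full.shape[1]
    return ((rss_r - rss_f) / df1) / (rss_f / df2)

rng = np.random.default_rng(7)
n = 500
group = rng.integers(0, 2, n).astype(float)   # two non-randomised groups
cv = rng.normal(size=n)                        # the covariate
# Made-up data in which the CV slope genuinely differs between groups:
y = 1.0 + 0.8 * group + 1.2 * cv + 2.0 * group * cv + rng.normal(size=n)

ones = np.ones(n)
X_ancova = np.column_stack([ones, group, cv])               # parallel slopes
X_inter = np.column_stack([ones, group, cv, group * cv])    # adds interaction

F = f_test_nested(X_inter, X_ancova, y)
print(F)  # a large F flags heterogeneous slopes -> keep the interaction model
```

A large F here says the parallel-slopes (classical ANCOVA) model is inadequate, and the interaction model of the paragraph above is the one to interpret.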
ANCOVA in observational studies: what are the assumptions?
I think a good starting point with this issue is to think logically about the meaning of a covariate adjustment in such situations. If the expected value of the CV is conditional on the group how is t
ANCOVA in observational studies: what are the assumptions? I think a good starting point with this issue is to think logically about the meaning of a covariate adjustment in such situations. If the expected value of the CV is conditional on the group how is there any way to remove variation associated only with the CV? Surely a CV adjustment removes group effects as well! What then do group differences actually mean? In fact, as far as I am aware, the only truely interpretable method of ANCOVA is one where th CV and treatments are wholly unrelated. In such situations "control" would seem the wrong metaphor as the ANCOVA is more of an error-reducing technique to increase power to detect group differences. I think this issue always needs logical consideration in terms of interpretation. There is no way of knowing quite what the adjustment made to the outcome is when looking for further group differences. Indeed, does it even make sense to consider groups as if they were the same on the CV? If the CV and the groups are so closely aliased does that not suggest that the CV may represent some fundamental element of group? More can be read about this in Miller & Chapman (2001) "Misunderstanding Analysis of Covariance". Although groups differing on the CV may not be a strict assumption of ANCOVA I'm of the opinion that there are limited legitimate ways of interpreting results if the condition is not met. ANCOVA is a tehnique for designed experiments with randomised treatment assignment. Use beyond this should always be treated with caution. I should perhaps add that I don't think that you can never use ANCOVA with non-randomised groups, but if you do you just need to be cautious. 
Generally speaking the only conditions that would need satisfying would be independence of group and CV (which you can test by running the ANOVA with the CV as the outcome variable), homogeneity of the regression slopes (which you can test by including an interaction term), and linearity, which can be checked using residuals. If your aim is to "control" for a concomitant variable then all assumptions need satisfying for your group differences to be interpretable. If, however, an assumption such as the homogeneity of slopes is violated then the model can always be re-framed as a multiple regression inclusive of the interaction term. The focus of the analysis would then be more exploratory and predictive then the classical ANCOVA, but ultimately allows you to see the CV not as a nuisance but as an interesting relationship to be explored.
34,973
Forecasting using Holt-Winters technique using R with less than 2 years of history
The Holt-Winters method is a poor choice for weekly data. It involves estimating a parameter for each week, so the model has far too many degrees of freedom. One approach that should work well is to use a TBATS model, which uses Fourier terms for the seasonality and so requires fewer coefficients. In your case: library(forecast) fit <- tbats(data_ts_s) fc <- forecast(fit, h=20) The TBATS model is a generalization of the Holt-Winters approach.
34,974
Analysis of a time series with a fixed and random factor in R
For mixed-effects models of time series I usually use the nlme package, because it offers facilities to model auto-correlation structures. If I don't need to consider auto-correlation, I prefer the lme4 package, which offers more flexibility for specifying random effects and is also usually faster. I recommend reading Zuur et al. 2009 (ISBN 978-0-387-87457-9) as an introduction to mixed effects modelling with R. The book contains a lot of nice and illustrative examples using the nlme package. From your question and comments the structure of your model should be as follows: fixed effects: intercept, plant type (make sure that this is a factor variable and not a numeric in R!), time, and the interaction between the two; random effects: a random intercept grouped by plant nested within box, possibly also a random slope vs. time with the same nesting structure; correlation structure: some kind of auto-regressive moving average correlation structure with time as a covariate and the same grouping structure as the random effect. Thus, a full model could look like this: fit1 <- lme(height ~ type * time, random= ~ 1|box/plant, correlation=corARMA(0.2, form=~time|box/plant, p=1, q=0), data=mydata). You should test, by comparing models with the anova function, whether including a random slope improves the model. You should also test which auto-correlation structure is most appropriate (although Zuur et al. advise against spending too much effort on finding the optimal auto-correlation structure). Of course, you also need to study residual plots. Possibly you might need to specify a variance structure or transform the dependent variable if the model suffers from heteroskedasticity. Judging from the plot, the relationship between height and time is pretty linear, but you could also try to transform time. Potential problems: You have an extremely small number of time points, which could result in problems when trying to fit the auto-correlation parameter(s). It could even be impossible to fit them. 
Also, your number of individuals and boxes is very small.
34,975
How do SOMs reduce dimensionality of data?
The SOM grid is a 2-d manifold or topological space onto which each observation in the 10-d space is mapped via its similarity with the prototypes (code book vectors) for each cell in the SOM grid. The SOM grid is non-linear in the full dimensional space; the "grid" is warped to more-closely fit the input data during training. However, the key point in terms of dimension reduction is that distances can be measured in the topological space of the grid - i.e. the 2 dimensions - instead of the full $m$-dimensions. (Where $m$ is the number of variables.) Simply, the SOM is a mapping of the $m$-dimensions onto the 2-d SOM grid.
34,976
How do SOMs reduce dimensionality of data?
Consider your 2-dimensional SOM's artificial neuron units as aiming to have values equal to those of your high-dimensional data. It attains this through the learning process: a sample (a row of your data) is taken from the data and compared for similarity with each of the units on the map. The unit that comes closest in similarity to the sample becomes the winner of that sample. Then, to effect the "learning", the value on that unit is adjusted to be closer to the value of the sample it has just won. Units near this winner have their values adjusted too, but by smaller amounts than the winner. These adjustments of the unit values are what make the learning occur. The process is repeated for all samples from the data. At the end of the learning process, you have a learned SOM with units whose values come close to resembling your data values. Note that your data values remain intact; they were only read and assisted in conducting the learning process. Now concentrate on the values carried by each unit at the end of the learning process. Each unit may have won several samples from the data, which are now "clustered" around it. That is, several samples from your data can be comfortably represented by one unit of the SOM - this brings in the dimensionality-reduction idea! Your 10-dimensional data can now be visualized as 2-dimensional, since similar data in the original dataset can be represented by one unit of the SOM. For a deeper understanding check out here
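The learning process described above (winner selection, neighbourhood update, decaying learning rate) can be sketched in a few lines. This is a toy Python/numpy implementation with made-up data and schedule parameters, not a production SOM; the grid size, epoch count, and decay schedules are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 500 samples in 10 dimensions, as in the question.
data = rng.normal(size=(500, 10))

# A 5x5 SOM: each of the 25 units has a 2-d grid position and a 10-d code book vector.
grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
weights = rng.normal(size=(25, 10))

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)            # decaying learning rate
    sigma = 2.0 * (1 - epoch / 20) + 0.5   # decaying neighbourhood radius
    for x in data:
        # The "winner": the unit whose code book vector is closest to the sample.
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Neighbourhood is measured on the 2-d grid, not in the 10-d data space.
        d_grid = np.linalg.norm(grid - grid[bmu], axis=1)
        h = np.exp(-(d_grid ** 2) / (2 * sigma ** 2))
        # Pull each unit towards the sample; the winner moves the most.
        weights += lr * h[:, None] * (x - weights)

# Dimensionality reduction: each 10-d sample is represented by the
# 2-d grid coordinates of its best-matching unit.
dists = np.linalg.norm(weights[None, :, :] - data[:, None, :], axis=2)
mapped = grid[np.argmin(dists, axis=1)]
print(mapped.shape)  # (500, 2)
```

The final `mapped` array is the 2-d representation: many original rows share a unit, which is exactly the "several samples represented by one unit" idea in the answer.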
34,977
How to account for repeated measures in glmer?
tl;dr: Your model already accounts for the fact that you have repeated measures. Nonetheless, if it fits, you would do best to use: glmer(y ~ x1*x2 + (x1:x2|subject), family=binomial) but if that isn't tractable, you could try: glmer(y ~ x1*x2 + (1|subject) + (0+x1|subject) + (0+x2|subject), family=binomial)    For an explanation of the syntax here, see: R's lmer cheat-sheet. Full version: You don't need to "tell" R that $x_1$ and $x_2$ are repeated measures variables. (This is really just a small semantic distinction, but) I wouldn't say that variables can be "repeated measures variables" vs. "non-repeated measures variables". Variables are just variables. I would say that, e.g., 'variable 1 is measured within patients, and variable 2 is measured between patients' or something like that. Of course, your phrasing is fine, you just don't want it to lead to some confusion where you think of repeated measures-ness as some ontological status intrinsic to the variable. At any rate, instead of telling R that a variable is measured within people, you simply need to formulate a model using random and/or fixed effects to account for the non-independence of the data that come from the same person. (Yes, you can use a fixed effect to account for this: every person would be a level of a categorical variable that is included. However, this will answer a slightly different question—almost certainly not the one you are interested in—and unless you have many measurements on the same person in every combination of conditions, the model will not be tractable.) In practice, you will use random effects to account for this. Specifically, you will have a random effect for each subject. Next you need to specify what you want random effects for. The syntax you used, (1|subject), will cause R to include a random intercept for each person. This will shift someone's line of best fit up or down relative to the mean. 
You should think about whether people are also likely to differ in their slopes—i.e., how strongly they respond to changes in your variables. You should also think about whether the random effects are correlated with each other, e.g., maybe people who start off higher when $x_1=0$ tend to also respond more strongly to increases in $x_1$. Common advice is to include all possible random effects and intercorrelations (Barr et al., 2013, "Keep it maximal", pdf). However, bear in mind that GLMMs are more difficult computationally than LMMs, so such a model may not be tractable.
34,978
Statistical test for a significant change in time series (sales) trend after policy change
Would a time series intervention analysis suit your needs? It estimates how much an intervention has changed a time series, if at all. how to in R: http://www.r-bloggers.com/time-series-intervention-analysis-wih-r-and-sas/ example use case: What test should I use to determine if a policy change had a statistically significant impact on website registrations? online course notes: https://onlinecourses.science.psu.edu/stat510/?q=node/76
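The simplest form of this intervention analysis is a segmented regression: regress the series on time, a post-intervention level-shift dummy, and a post-intervention trend term, then look at the last two coefficients. Here is a minimal Python/numpy sketch on hypothetical simulated sales data (the linked R/SAS material uses full ARIMA-based intervention models; this is only the regression skeleton, and all numbers below are invented).

```python
import numpy as np

rng = np.random.default_rng(2)

t = np.arange(24)                      # e.g. 24 months of sales
policy = 12                            # policy change after month 12
step = (t >= policy).astype(float)     # level shift after the change
trend_after = step * (t - policy)      # slope change after the change

# Simulated series: baseline trend 1.0, level jump 5.0, extra slope 0.5.
y = 10 + 1.0 * t + 5.0 * step + 0.5 * trend_after + rng.normal(0, 1, 24)

# Fit y ~ 1 + t + step + trend_after by least squares.
X = np.column_stack([np.ones(24), t, step, trend_after])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta = estimated [intercept, baseline trend, level change, trend change];
# non-zero level/trend-change estimates indicate an intervention effect.
print(beta)
</imports>```

In practice you would also test these coefficients (e.g. with lm in R) and check the residuals for autocorrelation before trusting the standard errors.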
34,979
Statistical test for a significant change in time series (sales) trend after policy change
I realise this answer is a bit late for the poster of this question, but I thought it may help others reviewing it. There is an excellent tutorial paper on interrupted time series analysis by Bernal et al.: Bernal, J. L., Cummins, S., Gasparrini, A. (2017). Interrupted time series regression for the evaluation of public health interventions: a tutorial. International Journal of Epidemiology, 46(1): 348–355. You can download it free at: https://www.researchgate.net/publication/303883790_Interrupted_time_series_regression_for_the_evaluation_of_public_health_interventions_A_tutorial And best of all, the paper's supplementary material includes R code for all of the examples discussed in the paper. It is particularly relevant to the poster's question since it uses segmented linear regression, which is more appropriate for smaller data sets (i.e. fewer than 100 time points) than ARIMA.
34,980
Statistical test for a significant change in time series (sales) trend after policy change
Without knowing more about the data, especially the independent variables (the regressors), it will be impossible to give a one-size-fits-all solution for anyone here. However, the issue you are addressing falls into the area of Econometrics. As such, there are a couple of good texts which will bring you further. The easiest book which will give you the tools to tackle this issue is Introduction to Econometrics by Stock & Watson. It is an undergraduate textbook which offers a very modern approach. Your small sample might give you issues. The standard books one would recommend in this instance are either "Econometrics" by Fumio Hayashi or Econometric Analysis by Greene. Finally, if you really want to dig deep into this problem, the go-to guide is the landmark "Time Series Analysis" by Hamilton. It is, however, a challenging book. Be aware that there are other approaches in Statistics, often with their own terminology and goals. However, in this case Econometrics is the best fit, as it is designed exactly for problems such as this. Which tests you'd have to run specifically depends very much on data, model and approach. It will be some kind of structural break test, if you want to look into what's available there. But, as you already realized, with 12 observations the sample size and therefore the model selection will be an issue before you even get to these tests.
34,981
Statistical test for a significant change in time series (sales) trend after policy change
SSD For R is a package that makes this type of intervention analysis fairly easy. You just need a value and a phase column, and ABRegres will give you the data to determine if the trends in the baseline and intervention phases are statistically significant. https://ssdanalysis.com
34,982
Correlation, regression and causal modeling
Correlation vs Regression I am still confused as to how correlation differs to regression, technically. I understand that one is a measure of association and one a measure of causation This is incorrect, the difference between correlation and regression is not causal. Both measures are associational measures. I will elaborate on the difference between associational and causal quantities below, but let's quickly answer this part of your question. Mathematically, the correlation of $X$ and $Y$ is a symmetric quantity (that is, $cor(Y, X) = cor(X, Y)$) and it's given by: $$ cor(Y, X) = \frac{cov(Y, X)}{sd(Y)sd(X)}$$ Let $R_{yx}$ denote the regression coefficient of regressing $Y$ on $X$. This is usually not symmetric and it's given by: $$ R_{yx} = \frac{cov(Y,X)}{var(X)} $$ Notice that if $var(X) = var(Y) = 1$ then the correlation coefficient and the regression coefficient will be the same. These are pure associational quantities and you can see more about them here. Now let's move on to the causal inference part. Association vs Causation Let's start by stating what's the difference between association and causation. Consider two random variables, $X$ and $Y$, and now consider these two different questions: What's the expected value of $Y$ if I see $X = x$? Let's denote this by $\mathbb{E}[Y|X=x]$. What's the expected value of $Y$ if I set $X = x$? Let's denote this by $\mathbb{E}[Y|do(X = x)]$ The first question is what regression can always give you. It's an associational question. The second question is an interventional question --- what would happen if you could set the value of $X$ to whatever you please? Usually, this is not the same as regression. Let's see an example. Consider the following structural equations: $$ U = \epsilon_{u}\\ X = \delta U + \epsilon_{x}\\ Y = \beta X + \gamma U + \epsilon_y $$ Where all terms denoted by $\epsilon$ are mean zero and mutually independent gaussians. 
To simplify computation, also assume $U$, $X$ and $Y$ have been standardized (mean zero and unit variance). Suppose $U$ is unobserved. What is the expected value of $Y$ if we observe $X = x$? This is a traditional statistics question and it's simply: $$\mathbb{E}[Y|X=x] = \left(\beta + \gamma\delta\right)x$$ And you can estimate $b =\left(\beta + \gamma\delta\right)$ with a linear regression of $Y$ on $X$. But what is the expected value of $Y$ if we set $X$ to $x$? Setting $X$ to $x$ means erasing the structural equation for $X$ and substituting for $X = x$. Hence: $$\mathbb{E}[Y|do(X=x)] = \beta x$$ And $\beta$ will be different from the regression coefficient $b$ unless either $\gamma$ or $\delta$ is equal to zero. When can you estimate causal quantities with regression? Now let's answer your question: how can you actually measure causation mathematically, without actually conducting a real-life experiment? In our example, we can't estimate the causal effect of $X$ on $Y$ because there is an open confounding path ($X \leftarrow U \rightarrow Y$). We usually call this a backdoor path. [causal diagram: $U$ points to both $X$ and $Y$, and $X$ points to $Y$] However, notice that if we could observe $U$ we could recover the causal parameter $\beta$ with a regression conditioning on $X$ and $U$. More generally, the problem of recovering causal effects with adjustment from regression has been mathematically solved. You can recover the structural coefficient from observational data if you can find a set of variables that: (i) blocks all backdoor paths from $X$ to $Y$; and (ii) does not open other confounding paths (if you want total effects, you also do not want to control for mediators). But how can you know which variables satisfy (i) and (ii)? You can only know that with causal assumptions. For example, in some models you don't want to adjust for a variable $Z$ at all! [causal diagram not reproduced] Take a look here for another discussion about confounders. 
That is, we need to know the causal graph (or equivalently, a set of structural equations) in order to tell which set of variables you can use to identify the effect via regression (if that set exists). You need causal assumptions to draw causal conclusions from observational data. So to sum up Correlation coefficients and regression coefficients are different associational quantities, their difference has nothing to do with causality; Also, regression is usually not equal to causal quantities. Regression asks: what if I observe X? Causal inference asks: what if I manipulate X? But, under some circumstances, you can use regression to identify causal quantities with observational data. In order to do that you need some causal assumptions, for example, to identify when a group of variables satisfy the back-door (or single-door) criterion. It's also worth noticing that adjustment via regression is not the only way to identify causal effects. For example, two other widely known methods are the front-door criterion and instrumental variables. If you want to learn more about this, you should check the references here.
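The gap between the regression coefficient $b = \beta + \gamma\delta$ and the causal coefficient $\beta$ can be checked by simulation. Below is a Python/numpy sketch of the answer's structural equations with illustrative values $\delta=0.5$, $\beta=1$, $\gamma=0.8$ (chosen arbitrarily; note the variables here are not re-standardized, so the naive bias is $\gamma\delta/var(X)$ rather than exactly $\gamma\delta$). It shows that regressing $Y$ on $X$ alone is biased, while conditioning on the confounder $U$ recovers $\beta$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
delta, beta, gamma = 0.5, 1.0, 0.8   # illustrative structural coefficients

# Structural equations from the answer (all noise terms standard normal).
u = rng.normal(size=n)               # the unobserved confounder
x = delta * u + rng.normal(size=n)
y = beta * x + gamma * u + rng.normal(size=n)

# Regression of Y on X alone picks up the backdoor path X <- U -> Y.
b_naive = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0][1]

# Conditioning on U blocks the backdoor path and recovers beta.
b_adj = np.linalg.lstsq(np.column_stack([np.ones(n), x, u]), y, rcond=None)[0][1]

# b_naive ≈ beta + gamma*delta/var(x) = 1.32; b_adj ≈ beta = 1.0
print(round(b_naive, 2), round(b_adj, 2))
```

This is exactly the back-door adjustment: once $U$ is in the conditioning set, the coefficient on $X$ is the structural (causal) quantity; without it, regression answers only the observational "what if I see $X=x$" question.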
34,983
How to choose the test statistic for permutation test?
Often there are several statistics that will all result in the same p-value/result. For example, in a two-sample case the difference of the two means, the mean of group A, and the sum of the values in group A will all result in the same p-value (this is because, given the data values and sample sizes, you can calculate the first two from the third alone). I would expect the t statistic to be similar to any of the above, but it may not be exactly the same (due to the dividing by standard deviation(s)). There are other statistics that could give very different results, such as the difference of the two medians, or the ratio of the two variances. These other statistics will be affected differently by the permutation process. Your choice should be based on a combination of what is most interesting based on the science and question being asked (sometimes medians might be of more interest, other times means would be) and what will give you power to detect a difference in reasonable/meaningful alternatives. You can test this later by simulating data from cases that you think likely or interesting and watching how the statistics perform.
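The equivalence of those three statistics can be verified directly with an exact permutation test: with fixed sample sizes each is a monotone function of the others, so they rank the permutations identically. A small sketch with made-up toy data:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
a = rng.normal(0.8, 1.0, size=6)     # toy samples, small enough to enumerate
b = rng.normal(0.0, 1.0, size=6)
pooled = np.concatenate([a, b])
n_a = len(a)

def exact_perm_pvalue(stat):
    """One-sided exact permutation p-value for a two-sample statistic."""
    observed = stat(a, b)
    hits = total = 0
    for idx in combinations(range(len(pooled)), n_a):
        mask = np.zeros(len(pooled), dtype=bool)
        mask[list(idx)] = True
        hits += stat(pooled[mask], pooled[~mask]) >= observed
        total += 1
    return hits / total

# These three statistics are monotone functions of each other (for fixed sizes),
# so they order the permutations identically and give the same p-value
p_diff_means = exact_perm_pvalue(lambda x, y: x.mean() - y.mean())
p_mean_a     = exact_perm_pvalue(lambda x, y: x.mean())
p_sum_a      = exact_perm_pvalue(lambda x, y: x.sum())
```

All three p-values come out identical, whereas a median- or variance-based statistic would generally not.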
34,984
How to choose the test statistic for permutation test?
You choose a test statistic that measures what you're interested in/has the properties you need. If you want to compare means, you base it on differences of means; if you want a robust comparison of location, you measure something else; if you want to compare standard deviations, you use a statistic that does that; if you want to compare the whole distribution you use a statistic that compares distributions (such as a k-sample version of the Kolmogorov-Smirnov test, for an example).
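To illustrate how much the choice of statistic matters, here is a hedged Monte-Carlo sketch (invented data, arbitrary statistics) where two groups share the same mean but differ in spread: a mean-based permutation test sees nothing, while a spread-based one detects the difference easily.

```python
import numpy as np

rng = np.random.default_rng(2)
# Same location, very different spread
a = rng.normal(0.0, 1.0, size=40)
b = rng.normal(0.0, 3.0, size=40)
pooled = np.concatenate([a, b])

def perm_pvalue(stat, n_perm=5000):
    observed = stat(a, b)
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(pooled)
        hits += stat(p[:40], p[40:]) >= observed
    return (hits + 1) / (n_perm + 1)

# A mean-based statistic has no power here; a spread-based one does
p_mean = perm_pvalue(lambda x, y: abs(x.mean() - y.mean()))
p_var  = perm_pvalue(lambda x, y: max(x.var(), y.var()) / min(x.var(), y.var()))
```

The variance-ratio p-value is tiny while the mean-difference p-value is unremarkable; which one is "right" depends entirely on the question you are asking.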
34,985
Mixed model in simple english
Here is my effort at this - imperfect, but might help. Mixed effects models are needed when the variation in the response variable cannot be simply allocated between just a structural part and residual individual randomness. Mixed effects models have both of these things, but there is also randomness that is associated not just with individuals but with groups. The classic example is students' performance. There is a (big) element of random variation at the individual level. But each school can also be seen as contributing a common random element to the performance of each of the individuals at that school. One particular school may, for random reasons (lucky to have good teachers, etc.) have high scores. Hence those students' randomness cannot be treated as independent of each other - breaking many of the assumptions of more traditional models. This concept can be extended beyond the simple residual randomness to also apply to random variation at the group level in the various parameters in the model (slopes, etc.). Taken altogether, the mixed effects model then can not only avoid pitfalls in traditional models when their i.i.d. assumptions are violated; it can provide powerful techniques to identify how much randomness is based at different levels. The easiest mixed effects models to understand are those where the different sources of randomness are in a hierarchy (e.g. individuals-classes-schools). However, they can be extended beyond this to non-hierarchical groupings. For example, individual students could be grouped by their maths teacher and by their physics teacher, which may not have a simple relationship. But it would still be possible to estimate the individual randomness for each student (i.e. the individual residual) as well as a common effect for all of Mr A's maths students and another one for all of Ms B's physics students. 
(I am assuming that the response variable is some test of overall academic achievement that is shared by all these students, of course). So what makes it "mixed"? Mixed means the model mixes structural and random components. In a way, traditional models are already mixed - they have a structural component and individual randomness. Just by historical accident of nomenclature, models are only called mixed when they also have at least one more random component in addition to the individual level.
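The key point - that a shared school-level random effect makes students' scores correlated rather than independent - can be seen in a small simulation. This is only an illustrative sketch with invented variance components (school s.d. 2, student s.d. 3), not a fitted mixed model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_schools = 2000
sd_school, sd_student = 2.0, 3.0      # hypothetical variance components

school = rng.normal(0, sd_school, n_schools)           # random effect shared by a school
alice  = 70 + school + rng.normal(0, sd_student, n_schools)
bob    = 70 + school + rng.normal(0, sd_student, n_schools)

# Two students at the same school are correlated through the shared effect;
# the expected correlation is the intraclass correlation 4/(4+9) ~ 0.31
r_same_school = np.corrcoef(alice, bob)[0, 1]

# Students from different schools share nothing, so their correlation is ~ 0
r_diff_school = np.corrcoef(alice, np.roll(bob, 1))[0, 1]
```

The nonzero within-school correlation is exactly the violation of independence that an ordinary regression assumes away and a mixed model accounts for.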
34,986
Mixed model in simple english
I will provide a short response that I believe will be helpful, but won't be detailed. I make use of mixed effects models (i.e. models with both fixed and random effects) when I believe the error in my data is not from a single source, and when I have information that could identify alternate sources. Mixed models are often used when data are hierarchically structured. For example, students in classrooms. No doubt there will be error in measurement for each student, and it might be well modeled as a normal distribution. However, there may be additional error that is explained by the classroom in which the student learns (perhaps due to teachers/subject matter/text book). Thus, it might be important to have an additional error term to capture error due to classroom. That is to say a single error term to model a single source for individual measurement error may not appropriately capture error that is coming from a higher level source. Thus, to model the data properly, it may be necessary to partition the error due to students and error due to classrooms separately. The resulting fitted model will be a better fit to the data and ought to reduce bias in other model parameters. Mixed models can do a whole lot more, though. I won't go into those details because I think the above is what you're looking for at the moment.
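The idea of partitioning error between students and classrooms can be sketched with a simple method-of-moments (one-way random-effects ANOVA) calculation. The data and variance components below are invented for illustration; a real analysis would use a mixed-model fitter:

```python
import numpy as np

rng = np.random.default_rng(4)
k, m = 500, 20                                  # classrooms, students per classroom
sd_class, sd_student = 1.5, 3.0                 # hypothetical error sources

scores = (rng.normal(0, sd_class, (k, 1))       # classroom-level error, shared within a row
          + rng.normal(0, sd_student, (k, m)))  # individual student error

# Method-of-moments partition of the total error
within = scores.var(axis=1, ddof=1).mean()      # estimates sd_student**2
between = scores.mean(axis=1).var(ddof=1)       # estimates sd_class**2 + sd_student**2/m
class_component = between - within / m          # estimates sd_class**2
```

The two recovered components (about 9 and about 2.25 here) are the separate error terms the answer describes; lumping them into a single residual would misstate both.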
34,987
Why is a pivot quantity not necessarily a statistic?
The reason is that a pivot is a function of data and (unknown) parameters, while a statistic is only a function of data. For example, if $Z_1, \dotsc, Z_n$ is an iid sample from the distribution $\text{Normal}(\mu, 1)$, then a pivot will be $(\bar{Z} - \mu)\sqrt{n}$, since this function of data and the unknown parameter $\mu$ has (under this model) the known distribution $\text{Normal}(0,1)$. But it is not a statistic, since it depends on $\mu$, which is unknown and not part of the data.
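The practical use of a pivot is that its known distribution can be inverted into a confidence interval even though the pivot itself cannot be computed from data alone. A small simulation sketch (with an arbitrary "true" $\mu$, known here only so we can check coverage):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, n, reps = 2.7, 50, 2000   # mu plays the unknown parameter; known here to verify coverage

covered = 0
for _ in range(reps):
    z = rng.normal(mu, 1.0, n)
    # The pivot (zbar - mu)*sqrt(n) ~ Normal(0, 1) whatever mu is,
    # so inverting |pivot| <= 1.96 gives a 95% confidence interval for mu
    lo = z.mean() - 1.96 / np.sqrt(n)
    hi = z.mean() + 1.96 / np.sqrt(n)
    covered += (lo <= mu <= hi)
coverage = covered / reps
```

The interval endpoints `lo` and `hi` are statistics (functions of data only); the pivot that produced them is not.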
34,988
Unbiased estimator for variance or Maximum Likelihood Estimator?
I think the answer is generally yes. If you know more about a distribution then you should use that information. For some distributions this will make very little difference, but for others it could be considerable. As an example, consider the Poisson distribution. In this case the mean and the variance are both equal to the parameter $\lambda$, and the ML estimate of $\lambda$ is the sample mean. The charts below show 100 simulations of estimating the variance by taking the mean or the sample variance. The histogram labelled X1 is using the sample mean, and X2 is using the sample variance. As you can see, both are unbiased, but the mean is a much better estimate of $\lambda$ and hence a better estimate of the variance. The R code for the above is here:

library(ggplot2)
library(reshape2)

testpois = function(){
  X = rpois(100, 4)
  mu = mean(X)
  v = var(X)
  return(c(mu, v))
}

P = data.frame(t(replicate(100, testpois())))
P = melt(P)
ggplot(P, aes(x=value)) +
  geom_histogram(binwidth=.1, colour="black", fill="white") +
  geom_vline(aes(xintercept=mean(value, na.rm=T)),  # Ignore NA values for mean
             color="red", linetype="dashed", size=1) +
  facet_grid(variable~.) 

As to the question of bias, I wouldn't worry too much about your estimator being biased (in the example above it isn't, but that is just luck). If unbiasedness is important to you, you can always use the jackknife to try to remove the bias.
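The same comparison can be made numerically rather than visually: both estimators of the Poisson variance are (approximately) unbiased, but the sample mean has a far smaller mean squared error. A numpy sketch of the simulation above, with the same $\lambda = 4$ and $n = 100$:

```python
import numpy as np

rng = np.random.default_rng(6)
lam, n, reps = 4.0, 100, 5000
x = rng.poisson(lam, size=(reps, n))

mean_est = x.mean(axis=1)            # ML estimate of lambda, hence of the variance
var_est  = x.var(axis=1, ddof=1)     # unbiased sample variance, also estimates lambda

mse_mean = ((mean_est - lam) ** 2).mean()
mse_var  = ((var_est - lam) ** 2).mean()
```

Here `mse_mean` is close to the theoretical $\lambda/n = 0.04$, while `mse_var` is several times larger, quantifying what the histograms show.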
34,989
Unbiased estimator for variance or Maximum Likelihood Estimator?
I have moved my comment to an answer so I can expand on it as requested. [If you mean the variance form $\frac{1}{n}\sum_{i=1}^{n}(X-\bar{X_{n}})^{2}$ as ML (which it is for the normal), then both forms use exactly the same information - the sums of squares of deviations from the mean. The only difference is the scaling factor.] If you need the variance estimate to be unbiased you could use it (note that in general you could take any MLE for the variance at a particular distribution and see if you can at least approximately unbias that; it may be more efficient), but it's not (say) minimum MSE for the variance, and it's not unbiased if you're taking the square root and using that for the standard deviation. At least the ML estimate for the variance is still ML for the s.d. (irrespective of which distribution you have an MLE of the variance for). Here's why I say that: MLEs have the property of being invariant to transformation of parameters - the MLE of $g(\theta)$ is $g(\hat{\theta})$ (or more concisely, $\widehat{g(\theta)}=g(\hat{\theta})$). See the brief discussion here, and the stuff under note 2 here. None of those prove it, but I'll give you a (somewhat handwavy) motivation/outline of an argument for the simple case of monotonic transformations. You can find a complete argument in many texts that discuss ML at more than a really elementary level. In the case of monotonic transformations: Take a simple case - imagine I have some curve ($y$ vs $x$) with a single peak somewhere in the middle (both a global and local maximum). Now I transform the $x$ to $\xi$ ($\xi=t(x)$) while $y$ is unchanged. The shape of the curve changes, but the corresponding $y$'s don't. The original maximum of $y$ is still the same maximum, at the corresponding place in $\xi$ that it was under $x$ (that is, if the maximum was at $x^*$, it's now at $\xi^*=t(x^*)$). 
You should see how to extend that intuition to any monotonic transformation and any global maximum. [The more general case of non-monotonic transformations is less immediately obvious, but is still true. Edit: It's true in the case of one-to-one functions by a similar argument to the above.] Returning to the original answer: In practice (in the $n$ vs $n-1$ case) there's rarely much difference, and I regularly use each in different circumstances with little worry. I'm usually not worried about an unbiased variance estimate.
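The invariance argument can be checked numerically: maximizing the normal likelihood over the variance and, separately, over the standard deviation lands on the same point, related by the square root. A sketch using a brute-force grid search (an illustration of the argument, not how one would compute an MLE in practice):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(5.0, 2.0, size=200)
n, mu_hat = len(x), x.mean()
ss = ((x - mu_hat) ** 2).sum()

def loglik_var(v):      # profile log-likelihood as a function of the variance
    return -0.5 * n * np.log(v) - ss / (2 * v)

def loglik_sd(s):       # the same likelihood, reparameterized in the s.d.
    return -n * np.log(s) - ss / (2 * s**2)

v_grid = np.linspace(1.0, 9.0, 200001)
s_grid = np.linspace(1.0, 3.0, 200001)
v_hat = v_grid[np.argmax(loglik_var(v_grid))]   # numeric MLE of the variance
s_hat = s_grid[np.argmax(loglik_sd(s_grid))]    # numeric MLE of the s.d.

mle_closed = ss / n          # closed-form ML variance (divide by n)
unbiased   = ss / (n - 1)    # unbiased variance (divide by n - 1)
```

Up to grid resolution, `v_hat` matches the closed-form divide-by-$n$ estimate and `s_hat` equals its square root - the invariance property - while the $n-1$ estimator sits slightly above it.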
34,990
How important is domain knowledge in our profession?
I make an analogy: Solving statistical problems without context is like boxing while blindfolded. You might knock your opponent out, but you might bash your hand on the ringpost. I work mostly with medical and social science researchers. There seems to be a widespread feeling there that the proper model for research is 1) they come up with an idea, gather data, and write about it, and then 2) they give it to us to "do the statistics". So, I agree that we need to understand the issues; of course, we don't need as full an understanding of the research as the practitioner has. That is why I (and many other data-people) can work with people in different professions. But the less we know about a subject, the more we need to interact with the professional to make sure that the results make sense. One of the many things I like about what I do is that I get to learn a bit about a lot of different subjects.
34,991
How important is domain knowledge in our profession?
How important is domain knowledge in our profession? Important enough to give distinct names to the domain-oriented data analyses (e.g. -metrics: biometrics, psychometrics, chemometrics, ...). The mix of domain knowledge and statistical knowledge is extremely important in design of experiments (e.g. practical vs. statistical feasibility, domain-specific norms, sample size planning), in guiding data analysis (what kind of transformations or pre-processing are physically/biologically/chemically meaningful? what corrections of the raw data are needed? criteria for data quality, heuristics), in checking whether results can possibly be meaningful/correct, and in the interpretation of results. Here's an example of a domain-specific interpretation of a classifier that was possible only because both data-analytical and spectroscopic knowledge together were at hand (section "Descriptive LDA and spectroscopic interpretation"). Try to imagine the amount of communication that would be needed between a data analyst without spectroscopic knowledge and a spectroscopist with no idea of LDA to arrive at such an interpretation. In the context of (lack of) reproducibility of published results, there is concern about research conducted as if there were no further knowledge of the field/problem/data, see e.g. E. R. Dougherty: Biomarker development: Prudence, risk, and reproducibility, BioEssays, 2012, 34, 277-279. Beck-Bornholt & Dubben would probably argue that incorporating more domain knowledge boosts the prevalence (prior probability) of good scientific ideas. The no free lunch theorem hints in the same direction. (I'm a chemist specialized in chemometrics and spectroscopy, and do both measurements and data analysis) Does selecting a domain when entering a job narrow your future options for domains and hence jobs? 
Maybe, but at the same time, you'll be able to claim more expertise in that area and consequently can apply for the specialized jobs (and my experience is that we chemometricians are a much wanted species). And, in addition, you show that you are able to join work in new domains.
34,992
Interaction in generalized linear model
In general, the existence of an interaction means that the effect of one variable depends on the value of the other variable with which it interacts. If there isn't an interaction, then the value of the other variable doesn't matter. This is easiest to understand in the case of linear regression. Imagine we are looking at the adult height (say at 25) of a child based on the adult height of the father. We further include sex as an additional predictor variable, because men and women differ considerably in adult height. Let's imagine that there is no interaction between these two variables (which may be true, at least to a first approximation). We could then plot our model simply as two lines on a scatterplot of the data. We may want to use different colors or symbols / line styles for men vs. women, but at any rate we would see a football-ish (or rugby-ball-ish, depending on where you live) shaped cloud of points with two parallel lines going through it. The important part is that the lines are parallel; if someone asked you what the effect would be of the father being 1 inch (or 1 cm) taller, you would respond with $\beta_{\text{height}}$. If they further asked you what the effect would be if the child were male or female, you would respond, 'that doesn't matter, you would expect them to be $\beta_{\text{height}}$ taller as an adult either way'. That is because the lines are parallel (with the same slope, $\beta_{\text{height}}$) / there is no interaction. Now imagine the case of the effect of anxiety on test-taking performance when examining two populations: emotionally stable vs. emotionally unstable people. Let's imagine that there is an interaction such that emotionally unstable people are more strongly affected by anxiety. Then, if we plotted the model similarly, we would see two lines that are not parallel. 
One line (representing emotionally stable individuals) might be sloping downward gradually, while the other line (representing unstable individuals) might move downward much more quickly. If we had used reference cell coding, with the stable individuals as the reference category, the fitted regression model might be: $$ \text{test performance}=\beta_0 + \beta_1\text{anxiety} + \beta_2\text{unstable} + \beta_3\text{anxiety}*\text{unstable} $$ In such a case, the slope of the first line would be $\beta_1$ (since $\text{unstable}$ would equal 0), but the slope of the second line would be $\beta_1+\beta_3$. If someone asked you how much test-taking performance would be impaired if anxiety went up by one unit, you would have to say, 'that depends: emotionally stable students would score $\beta_1$ points lower, but emotionally unstable individuals would drop by $\beta_1+\beta_3$ points'. This is the essence of what an interaction is. In addition, these examples illustrate the necessity of interpreting simple effects (rather than main effects) when interactions exist, and the value of using plots of your model to facilitate understanding. With a generalized linear model, the situation is essentially the same, but you may have to take into account the additional complexity of the link function (a non-linear transformation), depending on which scale you want to use to make your interpretation. Consider the case of logistic regression: there are (at least) three scales available. The betas exist on the logit (log odds) scale, whereas $\pi$ (the probability of 'success') exists only in the interval $(0,1)$ and behaves quite differently; in addition, the odds lie between them. So you need to choose which of these you want to use to interpret your model. For example, with respect to the log odds, the model is linear, and everything can be understood just as above. If you want to use the odds instead, you can get odds ratios by exponentiating your betas. 
For example, if there is no interaction, the odds ratio associated with a one unit increase in $X_1$ is $\exp(\beta_1)$. This would also be the odds ratio of the reference category (like the emotionally stable students above) if there were an interaction with a dichotomous variable, but the contrasting category would be associated with an odds ratio of $\exp(\beta_1)*\exp(\beta_3)$. Unfortunately, neither of those is very intuitively accessible for people, and the non-linear transformation (the link function) makes life more complicated. It is important to recognize that this isn't specific to interactions; the change in the probability of 'success' associated with increasing $X$ by one unit is never the same as the change associated with (say) decreasing $X$ by one unit (except in the special case where $x_i$ is associated with $\pi=.5$). In other words, the change in probability associated with a one unit change in $X$ depends on where you are starting from (in this sense, you could perhaps metaphorically say that $X$ interacts with itself). The best way to determine the change in probability associated with moving from one level of $X$ to another is to plug in those levels, solve the regression equation for $\hat\pi$, and then subtract. The same thing is true if you have more than one variable, but no 'interaction' with the variable in question. This isn't anything special; it's just that 'where you are starting from' depends on the other variables as well. Again, the best way to determine the change in probability would be to solve for $\hat\pi$ at both places and subtract. Interactions in a GLiM should also be treated similarly. It is best not to interpret the interaction coefficients directly, but only simple effects (that is, the effect of $X_1$ on $Y$ holding $X_2$ constant). In addition, it's best to overlay plots of the predicted values (say, when $X_2=0$ vs. when $X_2=1$) on a scatterplot of your data. 
Now, for a logistic regression, it is often difficult to get a decent plot of your data as the points are all 0's and 1's, so you might just choose to leave them out. Nonetheless, a plot of the two curves will typically be the best thing to use. After you have the plot, a qualitative (verbal) description is often easy (e.g., 'probabilities don't start moving away from 0 until larger levels of $X_1$, and even then, rise more slowly'). Your situation is perhaps a little more complicated than this, because you have two continuous variables, rather than a continuous and a dichotomous one. However, this isn't a problem. Typically in this situation, people will be thinking primarily in terms of one of the predictor variables; then you can plot the relationship between that variable and $Y$ at several levels of the other predictor. If there are theoretically meaningful levels, you could use those; if not, you could use the mean and +/- 1 SD. If you didn't have a preference for one of the variables, you could flip a coin, or plot it both ways and see which will be easier to work with. I don't know if / how SPSS will let you make those plots, but if you aren't able to find a way, they should be easy to make manually in Excel.
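As a sketch of the plug-in-and-subtract approach, here is a small Python example with made-up coefficients (the model and all numbers are hypothetical, not fitted to any data): it computes the change in predicted probability of 'success' for a one-unit increase in anxiety within each group, and verifies that on the log-odds scale the simple slopes are exactly $\beta_1$ and $\beta_1+\beta_3$.

```python
import numpy as np

# Hypothetical fitted coefficients for the logistic model:
#   logit(P(pass)) = b0 + b1*anxiety + b2*unstable + b3*anxiety*unstable
b0, b1, b2, b3 = 2.0, -0.2, -0.5, -0.3

def p_pass(anxiety, unstable):
    """Predicted probability: inverse logit of the linear predictor."""
    z = b0 + b1 * anxiety + b2 * unstable + b3 * anxiety * unstable
    return 1.0 / (1.0 + np.exp(-z))

def logit(p):
    return np.log(p / (1.0 - p))

# Change in probability for a one-unit increase in anxiety, per group.
# These differ, and each also depends on where you start from (here, anxiety = 5).
d_stable   = p_pass(6, 0) - p_pass(5, 0)
d_unstable = p_pass(6, 1) - p_pass(5, 1)

# On the log-odds scale the simple slopes are exact constants:
slope_stable   = logit(p_pass(6, 0)) - logit(p_pass(5, 0))   # equals b1
slope_unstable = logit(p_pass(6, 1)) - logit(p_pass(5, 1))   # equals b1 + b3
```

Plotting `p_pass(a, 0)` and `p_pass(a, 1)` over a grid of anxiety values gives exactly the overlaid pair of curves described above.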
34,993
Big Data vs multiple hypothesis testing?
This isn't the whole answer, but an important consideration is which part of your data is big. Consider the following example. I'm doing some analysis on physical measurements of human beings. For each volunteer I measure the distance between the eyes, the length of each digit, the length of the shins, etc., and I record everything in a big table for some exploratory analysis. If I decide to make my data bigger, I can do one of two things. I can make more measurements for each person (i.e., more features). This is dangerous, as it increases the probability of spurious correlations. If I decide to increase the number of instances instead, it should actually reduce the probability of spurious correlations, and although the correlations found may not imply causation, they will be more significant. This is strongly related to the curse of dimensionality, which tells you that adding features (i.e., dimensions) can cause an exponential increase in the number of instances required to reliably infer things from your data (unless your data has lower intrinsic dimension, i.e., highly correlated features). Personally, I see big data as an increase in the number of instances rather than the number of features, but this is a cause of confusion.
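The instances-vs-features point can be illustrated numerically. The sketch below (everything is pure noise by construction; the function name and sample sizes are arbitrary) measures the largest spurious correlation between a random target and a fixed number of unrelated features: with more instances, the worst spurious correlation shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_spurious_corr(n, p):
    """Largest |correlation| between a random target and p unrelated
    features, each observed on n instances. All variables are noise."""
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)
    # Standardize, then correlate y with each column of X
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    r = Xc.T @ yc / n
    return np.abs(r).max()

# Same number of features, more instances -> the largest spurious
# correlation shrinks (roughly like sqrt(2*log(p)/n)).
small_n = max_spurious_corr(100, 200)
big_n   = max_spurious_corr(10000, 200)
```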
34,994
Big Data vs multiple hypothesis testing?
Another thing to consider is how people work with big data (as opposed to 'small' data). Big data usually requires multiple pre-processing steps before it is fed into analysis. And sometimes it is not clear what to test for exactly in these data sets to begin with. Both facts combined allow for considerable wiggle room when it comes to the final analysis. What often happens is that people run multiple analyses and then choose (or tend to choose) the one that either confirms their preconception or that returns a 'positive' result rather than a hard-to-publish null result. In other words, rather than the analysis techniques, it is the humans who fall into the "traps of spurious correlations, multiple hypothesis testing, and false positive results".
34,995
Big Data vs multiple hypothesis testing?
'Big data' usually refers to data sets with gazillions of subjects and relatively fewer measurements per subject (also called 'tall' data). For data that is wide, rather than tall, much work has already been done; a good source is Efron's recent book 'Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction', which deals with (among other things) multiple hypothesis testing. For data that is truly tall, I haven't seen much theory, although there is tons of work relating to algorithms (see 'Mining of Massive Datasets'; Google it and you'll find a legally free pdf). There is also some work on developing statistical methodology for tall data, like 'The Big Data Bootstrap' by Kleiner, Talwalkar, Sarkar & Jordan.
34,996
Simple question about the asymptotics of estimators
That's exactly how asymptotic results are being used in practice, e.g., in logistic regression. I would probably factor it differently as $$\sqrt{N}\frac{\hat{M}-M}{\sigma} \overset{d}{\to}\mathcal{N}(0,1)$$ which shows the desired result more immediately, IMO (as mpiktas mentioned in the comments, it is not kosher to have $N$ on the RHS of the asymptotic expression). The practical problem with this, of course, is that $\sigma$ is usually unknown and needs to be estimated. The result, and the application, would still hold if a $\sqrt{N}$-consistent estimator is plugged in place of $\sigma$. In some applications, getting such an estimator is a non-trivial task, as is the case with, say, dependent data (time series, cluster sampling, spatial data). Update: since the asymptotic distribution is the normal rather than Student's $t$, a $z$-test is more appropriate. In practice, $t$-tests are often used instead, but coming up with the degrees of freedom is often a challenge. Besides, for most sample statistics, the finite-sample asymmetry and bias are greater concerns than heavy tails, and these obviously cannot be corrected by referring the test statistic to the $t$-distribution instead of the standard normal.
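As a quick numerical sketch of how this result is used with a plug-in standard error (the choice of an exponential distribution with mean 1 and the sample sizes are arbitrary; the skewed data also illustrate the finite-sample asymmetry point), one can check the coverage of the resulting nominal 95% $z$-interval by simulation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Check how well sqrt(N)*(Mhat - M)/sigma_hat ~ N(0,1) works when sigma
# is replaced by the sample standard deviation. Data: exponential with
# true mean M = 1 (skewed, so the normal approximation is imperfect).
N, reps = 500, 5000
M = 1.0
covered = 0
for _ in range(reps):
    x = rng.exponential(M, N)
    m_hat = x.mean()
    se = x.std(ddof=1) / np.sqrt(N)      # plug-in standard error
    # Nominal 95% z-interval based on the asymptotic normal result
    if m_hat - 1.96 * se <= M <= m_hat + 1.96 * se:
        covered += 1
coverage = covered / reps                # should be close to 0.95
```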
34,997
Simple question about the asymptotics of estimators
Taking the question at its face value, the answer is no. I offer a counterexample where $\hat{M}$ approaches its estimand in distribution while its variance diverges: in such a case, the $t$ statistic must approach zero almost surely, proving it can have neither an asymptotic Normal nor a $t$ distribution. Consider the usual Normal setting where $\hat{M}$ is an unbiased estimator of the mean based on $N \ge 2$ iid observations of a Normal$(\mu, \sigma^2)$ variable, $(X_1, X_2, \ldots, X_N)$. Let $\beta$ be a function of $N$ to be determined later and, writing $\bar{X}$ for the sample mean, consider the estimator $$\hat{M}(X_1,\ldots,X_N) = \beta(N)\bar{X}\ \text{ if }\ X_1\ge\max(X_1,\ldots,X_N)\ \text{ else }\ \frac{N-\beta(N)}{N-1}\bar{X}.$$ Because the first alternative in the definition of $\hat{M}$ happens with probability $1/N$ and the second with probability $(N-1)/N$, we can compute that $$\mathbb{E}(\hat{M}) = \mathbb{E}\left(\frac{1}{N}\beta(N)\bar{X}\ + \frac{N-1}{N}\frac{N-\beta(N)}{N-1}\bar{X}\right) = \mathbb{E}(\bar{X}) = \mu,$$ showing that $\hat{M}$ is an unbiased estimator of $\mu$, and (by computing the expectation of $\hat{M}^2$ and subtracting the square of the expectation of $\hat{M}$), $$\text{Var}(\hat{M}) = \frac{\sigma^2/N + \mu^2}{N(N-1)^2}\left((N-1)^2\beta(N)^2 + (N-1)\left(N-\beta(N)\right)^2\right) - \mu^2.$$ If we choose $\beta(N) = O(N^b)$ for $\frac{1}{2} \lt b \lt 1$, the right hand side (which is $O(N^{2b-1})$) will diverge, but $\hat{M}$ will approach $\mu$ in distribution (because most of the time $\hat{M}$ will equal $\frac{N-\beta(N)}{N-1}\bar{X}$, which becomes arbitrarily close to $\bar{X}$). In a comment, StasK has noted that this estimator $\hat{M}$ is not exchangeable in its arguments ($X_1$ plays a favored role) and asks whether that might be part of the cause of the "bad" asymptotic behavior. I do not believe so. 
For instance, let $s$ be the sample standard deviation and $\bar{X}_{\widehat{i}}$ be the mean of the variables with $X_i$ excluded. The distribution of $Y_i = (X_i - \bar{X}_{\widehat{i}})/s$ depends only on $N$ (not on $\mu$ or $\sigma$)--it is a multivariate distribution with scaled Student $t$ distributions as marginals--so for each $N$ there exists a number $t_N$ for which there is a $1/N$ chance that $\max_i(Y_i)\ge t_N$. In the definition of $\hat{M}$, replace the condition $X_1 \ge \max(X_1,\ldots,X_N)$ by $\max_i(Y_i)\ge t_N$. Everything works out exactly as before, but this $\hat{M}$ is invariant under permutations of the data.
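A numerical sketch of this counterexample (with the arbitrary choices $\mu = \sigma = 1$ and $b = 0.75$, so $\beta(N) = N^{0.75}$) shows both behaviors at once: the bulk of the sampling distribution of $\hat{M}$ sits near $\frac{N-\beta(N)}{N-1}\mu$, which drifts toward $\mu$ as $N$ grows, while the empirical variance keeps increasing.

```python
import numpy as np

rng = np.random.default_rng(1)

def m_hat(N, reps, mu=1.0, sigma=1.0, b=0.75):
    """Simulate the estimator above: beta(N)*Xbar when X_1 is the sample
    maximum (an event of probability 1/N), else (N - beta(N))/(N-1)*Xbar."""
    beta = N ** b
    X = rng.normal(mu, sigma, size=(reps, N))
    xbar = X.mean(axis=1)
    first_is_max = X[:, 0] >= X.max(axis=1)
    return np.where(first_is_max, beta * xbar, (N - beta) / (N - 1) * xbar)

small = m_hat(50, 20000)
big = m_hat(800, 20000)

# The typical value concentrates near ((N - beta(N))/(N-1)) * mu -> mu,
# yet the variance grows with N because of the rare huge branch.
var_small, var_big = small.var(), big.var()
```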
Simple question about the asymptotics of estimators
Taking the question at its face value, the answer is no. I offer a counterexample where $\hat{M}$ approaches its estimand in distribution while its variance diverges: in such a case, the $t$ statisti
Simple question about the asymptotics of estimators

Taking the question at its face value, the answer is no. I offer a counterexample where $\hat{M}$ approaches its estimand in distribution while its variance diverges: in such a case, the $t$ statistic must approach zero almost surely, proving it can have neither an asymptotic Normal nor $t$ distribution.

Consider the usual Normal setting where $\hat{M}$ is an unbiased estimator of the mean based on $N \ge 2$ iid observations of a Normal$(\mu, \sigma^2)$ variable, $(X_1, X_2, \ldots, X_N)$. Let $\beta$ be a function of $N$ to be determined later and, writing $\bar{X}$ for the sample mean, consider the estimator

$$\hat{M}(X_1,\ldots,X_N) = \beta(N)\bar{X}\ \text{ if }\ X_1\ge\max(X_1,\ldots,X_N)\ \text{ else }\ \frac{N-\beta(N)}{N-1}\bar{X}.$$

Because the first alternative in the definition of $\hat{M}$ happens with probability $1/N$ and the second with probability $(N-1)/N$, we can compute that

$$\mathbb{E}(\hat{M}) = \mathbb{E}\left(\frac{1}{N}\beta(N)\bar{X} + \frac{N-1}{N}\frac{N-\beta(N)}{N-1}\bar{X}\right) = \mathbb{E}(\bar{X}) = \mu,$$

showing that $\hat{M}$ is an unbiased estimator of $\mu$, and (by computing the expectation of $\hat{M}^2$ and subtracting the square of the expectation of $\hat{M}$)

$$\text{Var}(\hat{M}) = \frac{\sigma^2/N + \mu^2}{N(N-1)^2}\left((N-1)^2\beta(N)^2 + (N-1)\left(N-\beta(N)\right)^2\right) - \mu^2.$$

If we choose $\beta(N) = O(N^b)$ for $\frac{1}{2} \lt b \lt 1$, the right hand side (which is $O(N^{2b-1})$) will diverge, but $\hat{M}$ will approach $\mu$ in distribution (because most of the time $\hat{M}$ equals $\frac{N-\beta(N)}{N-1}\bar{X}$, which becomes arbitrarily close to $\bar{X}$).

In a comment, StasK has noted that this estimator $\hat{M}$ is not exchangeable in the arguments ($X_1$ plays a favored role) and asks whether that might be part of the cause of the "bad" asymptotic behavior. I do not believe so. For instance, let $s$ be the sample standard deviation and $\bar{X_{\widehat{i}}}$ be the mean of the variables with $X_i$ excluded. The distribution of $Y_i = (X_i - \bar{X_{\widehat{i}}})/s$ depends only on $N$ (not on $\mu$ or $\sigma$)--it is a multivariate distribution with scaled Student $t$ distributions as marginals--so for each $N$ there exists a number $t_N$ for which there is a $1/N$ chance that $\max(Y_i)\ge t_N$. In the definition of $\hat{M}$, replace the condition $X_1 \ge \max(X_i)$ by $\max(Y_i)\ge t_N$. Everything works out exactly as before, but this $\hat{M}$ is invariant under permutations of the data.
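A quick numerical sketch can confirm the two claims at once: the closed-form variance above grows without bound while the scale factor in the typical branch tends to $1$ (so $\hat{M}$ tracks $\bar{X}$ in distribution). This is a Python evaluation of the formula only; the choices $\mu = \sigma = 1$ and $\beta(N) = N^{3/4}$ are illustrative (note the divergence at rate $N^{2b-1}$ needs $\mu \ne 0$):

```python
def var_M(N, mu=1.0, sigma=1.0, b=0.75):
    """Closed-form Var(M-hat) from the text, with beta(N) = N**b."""
    beta = N ** b
    return ((sigma**2 / N + mu**2) / (N * (N - 1)**2)
            * ((N - 1)**2 * beta**2 + (N - 1) * (N - beta)**2)
            - mu**2)

for N in (10, 100, 1000, 10000):
    beta = N ** 0.75
    # variance grows roughly like N**(2b-1) = sqrt(N) here, while the
    # typical-branch scale factor (N - beta)/(N - 1) creeps toward 1
    print(N, var_M(N), (N - beta) / (N - 1))
```

The printed variances grow without bound even though, with probability $(N-1)/N$, the estimator is just $\bar{X}$ multiplied by a factor approaching $1$.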
34,998
Parameter estimation of exponential distribution with biased sampling
The maximum likelihood estimator for the parameter of the exponential distribution under type II censoring can be derived as follows. I assume the sample size is $m$, of which the $n < m$ smallest are observed and the $m - n$ largest are unobserved (but known to exist). Let us assume (for notational simplicity) that the observed $x_i$ are ordered: $0 \leq x_1 \leq x_2 \leq \cdots \leq x_n$. Then the joint probability density of $x_1, \dots, x_n$ is:

$$f(x_1, \dots, x_n) = {m!\lambda^n \over (m-n)!}\exp\left\{-\lambda\sum_{i=1}^n x_i\right\}\exp\left\{-\lambda(m-n)x_n\right\}$$

where the first exponential relates to the probabilities of the $n$ observed $x_i$ and the second to the probabilities of the $m-n$ unobserved $x_i$ that are greater than $x_n$ (which is just one minus the CDF at $x_n$). Rearranging terms leads to:

$$f(x_1, \dots, x_n) = {m!\lambda^n \over (m-n)!}\exp\left\{-\lambda\left[\sum_{i=1}^{n-1}x_i+(m-n+1)x_n\right]\right\}$$

(Note the sum now runs to $n-1$, as there is a "$+1$" in the coefficient of $x_n$.) Taking the log, then the derivative w.r.t. $\lambda$, setting it to zero, and solving leads to the maximum likelihood estimator:

$$\hat{\lambda} = \frac{n}{\sum_{i=1}^{n-1}x_i+(m-n+1)x_n}$$
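As a sanity check, the estimator can be verified numerically: the log-likelihood (up to an additive constant) is $n\log\lambda - \lambda T$, with $T$ the bracketed statistic, and $\hat{\lambda} = n/T$ should beat any nearby value of $\lambda$. A small Python sketch; the seed, $\lambda$, $m$, and $n$ are illustrative choices, only the formula is taken from the derivation above:

```python
import math
import random

random.seed(42)
lam_true, m, n = 2.0, 200, 150                                   # illustrative values
x = sorted(random.expovariate(lam_true) for _ in range(m))[:n]   # keep the n smallest of m

# sufficient statistic and MLE from the derivation above
T = sum(x[:n - 1]) + (m - n + 1) * x[n - 1]
lam_hat = n / T

def loglik(lam):
    # log-likelihood up to an additive constant
    return n * math.log(lam) - lam * T

# lam_hat should maximize the (concave) log-likelihood
print(lam_hat, loglik(lam_hat) > max(loglik(0.99 * lam_hat), loglik(1.01 * lam_hat)))
```

Since $n\log\lambda - \lambda T$ is strictly concave with its unique maximum at $n/T$, the comparison is guaranteed to come out in favor of $\hat{\lambda}$.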
34,999
Parameter estimation of exponential distribution with biased sampling
This links @jbowman's answer to my comment. Namely, under common working assumptions, one can use the 'standard survival likelihood' under type II censoring.

> #------seed------
> set.seed(1907)
> #----------------
>
> #------some data------
> t <- sort(rexp(n=20, rate=2))      #true sample
> t[16:20] <- t[15]                  #observed sample
> delta <- c(rep(1, 15), rep(0, 5))  #censoring indicator
> data <- data.frame(t, delta)       #observed data
> #---------------------
>
> #-----using @jbowman's formula------
> 15 / (sum(t[1:14]) + (5 + 1)*t[15])
[1] 2.131323
> #-----------------------------------
>
> #------using the usual survival likelihood------
> library(survival)
> fit <- survreg(Surv(t, delta)~1, dist="exponential", data=data)
> exp(-fit$coef)
(Intercept) 
   2.131323 
> #-----------------------------------------------

PS1: Note that this is not restricted to the exponential distribution.
PS2: Details can be found in Section 2.2 of the book by Lawless.
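The same equivalence can be checked without the survival package. For an exponential model, the generic censored log-likelihood $\sum_i \delta_i \log\lambda - \lambda \sum_i t_i$ is maximized at $\hat{\lambda} = \sum_i \delta_i / \sum_i t_i$ (events divided by total time on test), and under type II censoring this reduces algebraically to @jbowman's formula. A Python sketch mirroring the construction of the R example (Python's RNG differs from R's, so the draws, and hence the estimate, will not reproduce 2.131323):

```python
import random

random.seed(1907)
m, n_obs = 20, 15
x = sorted(random.expovariate(2.0) for _ in range(m))

# type II censoring: record the m - n_obs censored times at the largest observed value
t = x[:n_obs] + [x[n_obs - 1]] * (m - n_obs)
delta = [1] * n_obs + [0] * (m - n_obs)   # 1 = event observed, 0 = censored

# generic censored-exponential MLE: number of events / total time on test
lam_hat = sum(delta) / sum(t)

# @jbowman's type II formula gives the identical value
lam_formula = n_obs / (sum(x[:n_obs - 1]) + (m - n_obs + 1) * x[n_obs - 1])
print(lam_hat, lam_formula)
```

The two numbers agree because $\sum_i t_i = \sum_{i=1}^{n-1} x_i + (m-n+1)x_n$ when every censored time equals $x_n$.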
35,000
Parameter estimation of exponential distribution with biased sampling
Assuming $n$ is known, an estimate can be obtained via $\Phi(x_k)=1-e^{-\lambda x_k} \approx k/n$, where $x_k$, $0<k<m$, refers to the $k$'th smallest value in your reduced data set; solving for $\lambda$ gives $\hat{\lambda} = -\ln(1-k/n)/x_k$. The logic is: if you had the entire set of $n$ samples, you could construct the empirical CDF, $\Phi$, from this sample. Then if you took item $k$ of this sorted array, it would correspond to the CDF value $k/n$. In many cases, $k=n/2$ is a useful choice.
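A minimal sketch of this quantile inversion in Python (the function name is mine; the check plugs in an exact quantile rather than data, just to confirm the algebra recovers the rate):

```python
import math

def lambda_hat(x_k, k, n):
    """Invert Phi(x_k) = 1 - exp(-lam * x_k) ≈ k/n for lam."""
    return -math.log(1.0 - k / n) / x_k

# if x_k sits exactly at the k/n quantile of Exp(lam), the true rate is recovered
lam, n, k = 2.0, 100, 50
x_k = -math.log(1.0 - k / n) / lam   # exact k/n quantile of Exp(lam)
print(lambda_hat(x_k, k, n))          # → 2.0
```

With real data $x_k$ is a sample quantile rather than the exact one, so $\hat{\lambda}$ carries sampling noise, but the estimator is consistent as $n$ grows with $k/n$ fixed.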